New Log Entry Telemetry not working

Hi,

I added the sample log entry (log-entry.yaml).

It says the instances and handlers were created correctly. I ran the service endpoints a couple of times and see nothing in the logs. Same with the metrics.
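To double-check, listing the Mixer config objects directly should also show them (a sketch, assuming the standard 1.2 CRD names):

kubectl -n istio-system get instances.config.istio.io,handlers.config.istio.io,rules.config.istio.io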


Any help please.

2019-06-25T01:37:46.069161Z info Publishing 23 events
2019-06-25T01:37:46.074144Z info Built new config.Snapshot: id='8'
2019-06-25T01:37:46.074204Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-06-25T01:37:46.074220Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-06-25T01:37:46.075302Z info adapters Waiting for kubernetes cache sync... {"adapter": "kubernetesenv.istio-system"}
gc 151 @16515.750s 0%: 0.034+7.9+1.9 ms clock, 0.13+3.9/8.4/15+7.8 ms cpu, 17->17->8 MB, 18 MB goal, 6 P
2019-06-25T01:37:46.175504Z info adapters Cache sync successful. {"adapter": "kubernetesenv.istio-system"}
2019-06-25T01:37:46.175551Z info adapters getting kubeconfig from: "" {"adapter": "kubernetesenv.istio-system"}
2019-06-25T01:37:46.175557Z warn Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
2019-06-25T01:37:46.176593Z info Setting up event handlers
2019-06-25T01:37:46.176792Z info Starting Secrets controller
2019-06-25T01:37:46.176823Z info Waiting for informer caches to sync
2019-06-25T01:37:46.179377Z info Cleaning up handler table, with config ID:7
2019-06-25T01:37:46.179411Z info adapters deleted remote controller {"adapter": "kubernetesenv.istio-system"}
2019-06-25T01:37:46.179496Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-06-25T01:37:46.179542Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-06-25T01:37:46.179605Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-06-25T01:37:46.179809Z info adapters shutting down daemon... {"adapter": "kubernetesenv.istio-system"}
2019-06-25T01:37:46.179426Z info adapters adapter closed all scheduled daemons and workers {"adapter": "prometheus.istio-system"}
2019-06-25T01:37:47.179741Z info adapters adapter closed all scheduled daemons and workers {"adapter": "kubernetesenv.istio-system"}

kubectl get pods -n istio-system
NAME                                      READY   STATUS      RESTARTS   AGE
istio-citadel-657f5dd79c-nffrr            1/1     Running     0          4m
istio-cleanup-secrets-1.2.0-nzrnm         0/1     Completed   0          4m
istio-galley-9b7bbcdb7-8rb4c              1/1     Running     0          4m
istio-ingressgateway-578d68dc98-bl2nl     1/1     Running     0          4m
istio-init-crd-10-ttjfs                   0/1     Completed   0          6m
istio-init-crd-11-g986d                   0/1     Completed   0          6m
istio-init-crd-12-sclmj                   0/1     Completed   0          6m
istio-pilot-6c598d484b-dkxjz              2/2     Running     0          4m
istio-policy-7b879f8c7c-m64qm             2/2     Running     2          4m
istio-security-post-install-1.2.0-sh9nf   0/1     Completed   0          4m
istio-sidecar-injector-5d4c9bffd6-sr9n5   1/1     Running     0          4m
istio-telemetry-848cd7967-b9tqw           2/2     Running     2          4m
prometheus-5b48f5d49-tmlwm                1/1     Running     0          4m

Thanks,
Saravana

What was your YAML? And what version of Istio (looks like 1.2)?

I just followed the Collecting Logs task and was able to generate logs without issue (using an existing 1.2 cluster and the bookinfo sample app):

$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.2/samples/bookinfo/telemetry/log-entry.yaml
...

$ curl http://<my gateway ip addr>/productpage
...
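If it helps, the gateway address can usually be pulled from the ingress gateway service (a sketch, assuming a LoadBalancer setup):

$ export GATEWAY_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ curl http://$GATEWAY_IP/productpage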

$ kubectl -n istio-control logs <istio-telemetry-pod> mixer
...
2019-06-25T18:15:48.737067Z	info	Publishing 25 events
2019-06-25T18:15:48.746649Z	info	Built new config.Snapshot: id='11177'
2019-06-25T18:15:48.746891Z	info	adapters	getting kubeconfig from: ""	{"adapter": "kubernetesenv.istio-control"}
2019-06-25T18:15:48.746967Z	warn	Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2019-06-25T18:15:48.748417Z	info	adapters	Waiting for kubernetes cache sync...	{"adapter": "kubernetesenv.istio-control"}
2019-06-25T18:15:48.849103Z	info	adapters	Cache sync successful.	{"adapter": "kubernetesenv.istio-control"}
2019-06-25T18:15:48.849200Z	info	adapters	getting kubeconfig from: ""	{"adapter": "kubernetesenv.istio-control"}
2019-06-25T18:15:48.849217Z	warn	Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2019-06-25T18:15:48.850484Z	info	Setting up event handlers
2019-06-25T18:15:48.850794Z	info	Starting Secrets controller
2019-06-25T18:15:48.852304Z	info	Waiting for informer caches to sync
2019-06-25T18:15:48.855708Z	info	Cleaning up handler table, with config ID:11176
2019-06-25T18:15:48.855778Z	info	adapters	deleted remote controller 	{"adapter": "kubernetesenv.istio-control"}
2019-06-25T18:15:48.855814Z	info	adapters	adapter closed all scheduled daemons and workers	{"adapter": "stdio.istio-control"}
2019-06-25T18:15:48.855900Z	info	adapters	adapter closed all scheduled daemons and workers	{"adapter": "prometheus.istio-control"}
2019-06-25T18:15:48.856177Z	info	adapters	shutting down daemon...	{"adapter": "kubernetesenv.istio-control"}
2019-06-25T18:15:48.856233Z	info	adapters	shutting down daemon...	{"adapter": "kubernetesenv.istio-control"}
2019-06-25T18:15:48.856304Z	info	adapters	shutting down daemon...	{"adapter": "kubernetesenv.istio-control"}
2019-06-25T18:15:48.856365Z	info	adapters	shutting down daemon...	{"adapter": "kubernetesenv.istio-control"}
2019-06-25T18:15:48.856443Z	info	adapters	adapter closed all scheduled daemons and workers	{"adapter": "kubernetesenv.istio-control"}
GC forced
gc 29443 @3364164.077s 0%: 0.11+3262+0.027 ms clock, 0.44+98/4645/3485+0.10 ms cpu, 964->966->897 MB, 1796 MB goal, 6 P
{"level":"info","time":"2019-06-25T18:16:14.421034Z","instance":"accesslog.instance.istio-control","apiClaims":"","apiKey":"","clientTraceId":"","connection_security_policy":"mutual_tls","destinationApp":"details","destinationIp":"10.44.2.12","destinationName":"details-v1-6886b56dc8-kwtkv","destinationNamespace":"default","destinationOwner":"kubernetes://apis/apps/v1/namespaces/default/deployments/details-v1","destinationPrincipal":"cluster.local/ns/default/sa/default","destinationServiceHost":"details.default.svc.cluster.local","destinationWorkload":"details-v1","grpcMessage":"","grpcStatus":"","httpAuthority":"details:9080","latency":"3.713706ms","method":"GET","permissiveResponseCode":"none","permissiveResponsePolicyID":"none","protocol":"http","receivedBytes":945,"referer":"","reporter":"destination","requestId":"ded9add2-c3b9-48c2-a081-82da5b13cc99","requestSize":0,"requestedServerName":"outbound_.9080_._.details.default.svc.cluster.local","responseCode":200,"responseFlags":"-","responseSize":178,"responseTimestamp":"2019-06-25T18:16:14.424538Z","sentBytes":313,"sourceApp":"productpage","sourceIp":"10.44.2.15","sourceName":"productpage-v1-6db7564db8-pvsnd","sourceNamespace":"default","sourceOwner":"kubernetes://apis/apps/v1/namespaces/default/deployments/productpage-v1","sourcePrincipal":"cluster.local/ns/default/sa/default","sourceWorkload":"productpage-v1","url":"/details/0","userAgent":"curl/7.60.0","xForwardedFor":"10.44.2.15"}
...

Hi Douglas,

Thanks.

1.2.0 is the version.

YAML content below. I was unable to post it as text.

I have deployed an HTTP server app in a container; it has injection enabled and the sidecar config is OK. I am not using the Bookinfo app here.
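For reference, it is essentially the stock log-entry.yaml from the Collecting Logs task with the objects renamed; roughly the following sketch (the instance name newlog1 matches the output further down, while the handler and rule names here are illustrative):

kubectl apply -f - <<EOF
# A logentry instance: which request attributes to capture
apiVersion: "config.istio.io/v1alpha2"
kind: instance
metadata:
  name: newlog1
  namespace: istio-system
spec:
  compiledTemplate: logentry
  params:
    severity: '"warning"'
    timestamp: request.time
    variables:
      latency: response.duration | "0ms"
      responseCode: response.code | 0
      responseSize: response.size | 0
    monitored_resource_type: '"UNSPECIFIED"'
---
# A stdio handler: writes the entries to Mixer's stdout as JSON
apiVersion: "config.istio.io/v1alpha2"
kind: handler
metadata:
  name: newloghandler1
  namespace: istio-system
spec:
  compiledAdapter: stdio
  params:
    severity_levels:
      warning: 1 # Params.Level.WARNING
    outputAsJson: true
---
# A rule: route every request's logentry instance to the handler
apiVersion: "config.istio.io/v1alpha2"
kind: rule
metadata:
  name: newlogstdio1
  namespace: istio-system
spec:
  match: "true" # match for all requests
  actions:
  - handler: newloghandler1
    instances:
    - newlog1
EOF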

Thanks,
Saravana


Saravana,

Did you perhaps install with a different configDefaultNamespace (it should be istio-system to match your config), or maybe accidentally check the logs for istio-policy instead of istio-telemetry?
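One way to check, assuming the standard deployment layout, is to look at the mixer container args on the telemetry deployment; it should show something like --configDefaultNamespace=istio-system:

$ kubectl -n istio-system get deploy istio-telemetry -o yaml | grep configDefaultNamespace
  - --configDefaultNamespace=istio-system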

I took your exact config and was able to generate:

...
{"level":"warn","time":"2019-06-26T17:05:19.555408Z","instance":"newlog1.instance.istio-system","latency":"12.936609ms","responseCode":200,"responseSize":5}
{"level":"warn","time":"2019-06-26T17:05:19.577334Z","instance":"newlog1.instance.istio-system","latency":"9.893999ms","responseCode":200,"responseSize":5}
{"level":"warn","time":"2019-06-26T17:05:19.586200Z","instance":"newlog1.instance.istio-system","latency":"8.770721ms","responseCode":200,"responseSize":5}
...

To further confirm, I went and made sure I was running with a 1.2.X release:

$ kubectl -n istio-system describe deploy istio-telemetry | grep Image | grep mixer
    Image:       istio/mixer:1.2.1

If you believe that everything is set up correctly and you are looking in the right place, I suggest checking some of the Mixer self-monitoring metrics to ensure that there aren't other errors.
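For example (a sketch from memory of the 1.2 Mixer, so treat the metric name as an assumption), port-forward to the self-monitoring port 15014 and look at the runtime dispatch counters; they should be increasing as requests flow:

$ kubectl -n istio-system port-forward <istio-telemetry-pod> 15014:15014 &
$ curl -s localhost:15014/metrics | grep mixer_runtime_dispatches_total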

Thanks Douglas,

I am checking the right logs.

kubectl logs istio-telemetry-848cd7967-b9tqw -n istio-system mixer | grep newlog1

Do you mean I installed my own HTTP server app in the default namespace? Yes, I have not used any custom namespace for the app itself. However, Istio is installed under istio-system.

The self-monitoring metrics show a zero error count for Mixer.

Thanks,
Saravana

Hi Douglas,

Is there any further setting I need to check? Should I configure the ingress gateway, and will logs only be captured after that?
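For completeness, this is how I verified that the sidecar is actually attached to the app pod (pod name is a placeholder; the output should include istio-proxy next to the app container):

kubectl get pod <my-app-pod> -o jsonpath='{.spec.containers[*].name}'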

Thanks,
Saravana