Stackdriver adapter getting shut down

I installed Istio v1.2.2 via Helm on our GKE cluster, and Mixer keeps emitting the following log line after every restart and config change:
adapters shutting down daemon... {"adapter": "stackdriver.istio-system"}
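
(For reference, the 1.2 chart exposes this adapter under mixer.adapters in its values; a minimal sketch of that switch is below. In my case the Stackdriver config itself was copied from the template linked in the next paragraph, so treat this snippet as an approximation of the chart values rather than an exact copy of mine.)

mixer:
  adapters:
    stackdriver:
      enabled: true   # I believe this is what makes the telemetry chart render the Stackdriver handler/rules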

I’m using the new compiled-templates format, and I copied the configuration from here: https://github.com/istio/operator/blob/1670827469a376fd755f8d7425dc333bc00d6a9c/data/charts/istio-telemetry/mixer-telemetry/templates/stackdriver.yaml
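
The applied resources follow the compiled-adapter/compiled-template form from that file. A trimmed sketch is below; the metricInfo block, the match expression, and the instance names are shortened here, so treat them as placeholders rather than the exact manifest:

apiVersion: config.istio.io/v1alpha2
kind: handler
metadata:
  name: stackdriver
  namespace: istio-system
spec:
  compiledAdapter: stackdriver
  params:
    # project_id omitted; as far as I can tell it is then resolved from the GCE metadata server on GKE
    metricInfo:
      # ... per-metric definitions as in the linked template ...
---
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: stackdriver-metric
  namespace: istio-system
spec:
  match: context.protocol == "http" || context.protocol == "grpc"
  actions:
  - handler: stackdriver
    instances:
    - server-request-count   # placeholder; the real instance names come from the template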

If I make a request to my Istio-enabled service while telemetry is restarting, some metrics briefly make it to Stackdriver, which rules out invalid configuration or authentication problems. It looks like everything works until the Stackdriver adapter gets shut down.

The complete telemetry log is below, in case it is of any use. I also tried debug-level logging, but it added no extra information between the “Starting gRPC server” and “shutting down daemon” lines.
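
For what it’s worth, debug logging was turned on by raising Mixer’s log level in the istio-telemetry deployment, roughly like the fragment below (only the relevant container arg is shown; the flag name follows Mixer’s standard logging options, and the rest of the spec is unchanged):

containers:
- name: mixer
  args:
  # raise the default scope from info (see the LoggingOptions line in the log below) to debug
  - --log_output_level=default:debug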

gc 1 @0.020s 1%: 0.017+1.7+0.19 ms clock, 0.10+0.10/1.3/2.0+1.1 ms cpu, 4->4->2 MB, 5 MB goal, 6 P
gc 2 @0.031s 3%: 0.013+1.6+0.46 ms clock, 0.078+0.26/1.6/2.2+2.7 ms cpu, 4->4->3 MB, 5 MB goal, 6 P
Mixer started with
MaxMessageSize:  1048576
MaxConcurrentStreams:  1024
APIWorkerPoolSize:  1024
AdapterWorkerPoolSize:  1024
APIPort:  9091
APIAddress:  unix:///sock/mixer.socket
MonitoringPort:  15014
EnableProfiling:  true
SingleThreaded:  false
NumCheckCacheEntries:  1500000
ConfigStoreURL:  mcp://istio-galley.istio-system.svc:9901
CertificateFile:  /etc/certs/cert-chain.pem
KeyFile:  /etc/certs/key.pem
CACertificateFile:  /etc/certs/root-cert.pem
ConfigDefaultNamespace:  istio-system
ConfigWaitTimeout:  2m0s
LoggingOptions: log.Options{OutputPaths:[]string{"stdout"}, ErrorOutputPaths:[]string{"stderr"}, RotateOutputPath:"", RotationMaxSize:104857600, RotationMaxAge:30, RotationMaxBackups:1000, JSONEncoding:false, LogGrpc:true, outputLevels:"default:info", logCallers:"", stackTraceLevels:"default:none"}
TracingOptions: tracing.Options{ZipkinURL:"http://zipkin.istio-system:9411/api/v1/spans", JaegerURL:"", LogTraceSpans:false, SamplingRate:0}
IntrospectionOptions: ctrlz.Options{Port:0x2694, Address:"localhost"}
UseTemplateCRDs: false
LoadSheddingOptions: loadshedding.Options{Mode:2, AverageLatencyThreshold:100000000, SamplesPerSecond:1.7976931348623157e+308, SampleHalfLife:1000000000, MaxRequestsPerSecond:0, BurstSize:0}
UseAdapterCRDs: false

2019-07-17T00:31:26.645666Z	info	mcp	Requesting following collections:
2019-07-17T00:31:26.645715Z	info	mcp	  [0] istio/config/v1alpha2/adapters
2019-07-17T00:31:26.645724Z	info	mcp	  [1] istio/policy/v1beta1/attributemanifests
2019-07-17T00:31:26.645727Z	info	mcp	  [2] istio/policy/v1beta1/handlers
2019-07-17T00:31:26.645730Z	info	mcp	  [3] istio/policy/v1beta1/instances
2019-07-17T00:31:26.645737Z	info	mcp	  [4] istio/policy/v1beta1/rules
2019-07-17T00:31:26.645740Z	info	mcp	  [5] istio/config/v1alpha2/templates
2019-07-17T00:31:26.645760Z	info	parsed scheme: ""
2019-07-17T00:31:26.645782Z	info	scheme "" not registered, fallback to default scheme
2019-07-17T00:31:26.645801Z	info	ccResolverWrapper: sending update to cc: {[{istio-galley.istio-system.svc:9901 0  <nil>}] }
2019-07-17T00:31:26.645812Z	info	ClientConn switching balancer to "pick_first"
2019-07-17T00:31:26.645943Z	info	Awaiting for config store sync...
2019-07-17T00:31:26.646025Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc420259de0, CONNECTING
2019-07-17T00:31:26.646179Z	info	mcp	(re)trying to establish new MCP sink stream
gc 3 @0.049s 4%: 0.062+3.0+0.12 ms clock, 0.37+0.65/3.9/10+0.74 ms cpu, 6->6->4 MB, 7 MB goal, 6 P
2019-07-17T00:31:26.652354Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc420259de0, READY
2019-07-17T00:31:26.652481Z	info	mcp	New MCP sink stream created
2019-07-17T00:31:26.666200Z	info	Starting runtime config watch...
2019-07-17T00:31:26.666394Z	info	Built new config.Snapshot: id='0'
2019-07-17T00:31:26.666426Z	info	Cleaning up handler table, with config ID:-1
gc 4 @0.074s 4%: 0.017+4.8+0.21 ms clock, 0.10+0.83/4.5/8.9+1.2 ms cpu, 8->8->5 MB, 9 MB goal, 6 P
2019-07-17T00:31:26.704949Z	info	Built new config.Snapshot: id='1'
2019-07-17T00:31:26.735375Z	info	parsed scheme: ""
2019-07-17T00:31:26.735436Z	info	scheme "" not registered, fallback to default scheme
2019-07-17T00:31:26.735462Z	info	ccResolverWrapper: sending update to cc: {[{monitoring.googleapis.com:443 0  <nil>}] }
2019-07-17T00:31:26.735468Z	info	ClientConn switching balancer to "pick_first"
2019-07-17T00:31:26.735659Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc420f364e0, CONNECTING
gc 5 @0.159s 2%: 0.014+3.9+0.11 ms clock, 0.087+1.6/3.6/13+0.67 ms cpu, 10->10->5 MB, 11 MB goal, 6 P
2019-07-17T00:31:26.799678Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc420f364e0, READY
2019-07-17T00:31:27.031022Z	info	parsed scheme: ""
2019-07-17T00:31:27.031067Z	info	scheme "" not registered, fallback to default scheme
2019-07-17T00:31:27.031131Z	info	ccResolverWrapper: sending update to cc: {[{logging.googleapis.com:443 0  <nil>}] }
2019-07-17T00:31:27.031144Z	info	ClientConn switching balancer to "pick_first"
2019-07-17T00:31:27.031280Z	info	parsed scheme: ""
2019-07-17T00:31:27.031293Z	info	scheme "" not registered, fallback to default scheme
2019-07-17T00:31:27.031308Z	info	ccResolverWrapper: sending update to cc: {[{logging.googleapis.com:443 0  <nil>}] }
2019-07-17T00:31:27.031335Z	info	ClientConn switching balancer to "pick_first"
2019-07-17T00:31:27.032189Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc42116e540, CONNECTING
2019-07-17T00:31:27.032443Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc421196010, CONNECTING
gc 6 @0.433s 1%: 0.015+3.8+0.25 ms clock, 0.095+0.69/6.2/10+1.5 ms cpu, 10->11->6 MB, 11 MB goal, 6 P
2019-07-17T00:31:27.059561Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc42116e540, READY
2019-07-17T00:31:27.059813Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc421196010, READY
2019-07-17T00:31:27.230056Z	info	parsed scheme: ""
2019-07-17T00:31:27.230107Z	info	scheme "" not registered, fallback to default scheme
2019-07-17T00:31:27.230135Z	info	ccResolverWrapper: sending update to cc: {[{monitoring.googleapis.com:443 0  <nil>}] }
2019-07-17T00:31:27.230144Z	info	ClientConn switching balancer to "pick_first"
2019-07-17T00:31:27.230323Z	info	parsed scheme: ""
2019-07-17T00:31:27.230366Z	info	scheme "" not registered, fallback to default scheme
2019-07-17T00:31:27.230382Z	info	ccResolverWrapper: sending update to cc: {[{cloudtrace.googleapis.com:443 0  <nil>}] }
2019-07-17T00:31:27.230388Z	info	ClientConn switching balancer to "pick_first"
2019-07-17T00:31:27.230466Z	info	parsed scheme: ""
2019-07-17T00:31:27.230481Z	info	scheme "" not registered, fallback to default scheme
2019-07-17T00:31:27.230493Z	info	ccResolverWrapper: sending update to cc: {[{contextgraph.googleapis.com:443 0  <nil>}] }
2019-07-17T00:31:27.230507Z	info	ClientConn switching balancer to "pick_first"
2019-07-17T00:31:27.230543Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc42101e890, CONNECTING
2019-07-17T00:31:27.230611Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc4211a2110, CONNECTING
2019-07-17T00:31:27.230634Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc421145cc0, CONNECTING
2019-07-17T00:31:27.230652Z	info	adapters	getting kubeconfig from: ""	{"adapter": "kubernetesenv.istio-system"}
2019-07-17T00:31:27.230698Z	warn	Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2019-07-17T00:31:27.232373Z	info	adapters	Waiting for kubernetes cache sync...	{"adapter": "kubernetesenv.istio-system"}
2019-07-17T00:31:27.252671Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc4211a2110, READY
2019-07-17T00:31:27.253233Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc42101e890, READY
2019-07-17T00:31:27.263779Z	info	pickfirstBalancer: HandleSubConnStateChange: 0xc421145cc0, READY
gc 7 @0.672s 1%: 0.022+7.0+0.35 ms clock, 0.13+3.5/8.3/9.7+2.1 ms cpu, 12->13->8 MB, 13 MB goal, 6 P
2019-07-17T00:31:27.332874Z	info	adapters	Cache sync successful.	{"adapter": "kubernetesenv.istio-system"}
2019-07-17T00:31:27.332979Z	info	adapters	getting kubeconfig from: ""	{"adapter": "kubernetesenv.istio-system"}
2019-07-17T00:31:27.332985Z	warn	Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
2019-07-17T00:31:27.334182Z	info	Setting up event handlers
2019-07-17T00:31:27.334526Z	info	Starting Secrets controller
2019-07-17T00:31:27.334657Z	info	Waiting for informer caches to sync
2019-07-17T00:31:27.334849Z	info	adapters	serving prometheus metrics on 42422	{"adapter": "prometheus.istio-system"}
2019-07-17T00:31:27.341685Z	info	Cleaning up handler table, with config ID:0
2019-07-17T00:31:27.342171Z	info	Starting monitor server...
Istio Mixer: root@1a067c3a-98fa-11e9-95c6-bef141052020-docker.io/istio-1.2.2-cd4a148f37dc386986d6a6d899778849e686beea-dirty-Modified
Starting gRPC server on port 9091
2019-07-17T00:31:27.360355Z	info	ControlZ available at 127.0.0.1:9876
gc 8 @0.821s 1%: 0.017+5.6+0.33 ms clock, 0.10+1.0/5.5/18+2.0 ms cpu, 16->17->9 MB, 17 MB goal, 6 P
2019-07-17T00:31:36.735533Z	info	adapters	shutting down daemon...	{"adapter": "stackdriver.istio-system"}

Switching to v1.1.11 with the exact same configuration resolves the issue, so this looks like a regression, unless there is some extra configuration I need to set in 1.2.

Apparently, this is a regression caused by https://github.com/istio/istio/pull/14803. The change in that PR fixes the issue.