Getting NACK in Galley logs


I have written a custom MCP client, following the guidelines in Pilot. I would like my client to be able to listen for events such as CRD creation for VirtualService.
I have been able to bring up the client successfully, and it is able to create a WATCH with the MCP server for the VirtualService type. But when I create a VirtualService CRD via kubectl, I get the following error:

2019-02-15T06:54:36.651359Z warn mcp MCP: connection {addr= id=1}: NACK collection=istio/networking/v1alpha3/virtualservices version= with nonce="1" (w.nonce="1") error=&rpc.Status{Code: 2,
Message: "any: message type "istio.networking.v1alpha3.VirtualService" isn't linked in",

This gets printed continuously in both the MCP server logs and my client logs.
As I understand it, this might be happening because RegisterType was never called for this type, so there is presumably an unmarshalling problem. But I wonder what changes I need to make in my client to get this working.
Can someone please help?


@ozevren Hi, could you please help answer this query?
I commented out the code in
and after this the client gets updates just fine from the Galley server for the VirtualService resource.
I am wondering what could be missing on the client side that causes the above error…

Thanks in advance.


I figured out the problem.
If I import the "" package, it automatically calls init(), which registers the types in the client.

This helped resolve the issue.
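For anyone hitting the same issue, the fix looks roughly like this. This is a sketch, not the exact code from the thread: the generated-proto package path `istio.io/api/networking/v1alpha3` is an assumption and depends on your setup.

```go
package client

import (
	// Blank import for its side effects only: the generated package's
	// init() functions register each message type (including
	// "istio.networking.v1alpha3.VirtualService") with the proto
	// type registry, so the MCP envelope can be unmarshalled.
	_ "istio.io/api/networking/v1alpha3"
)
```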



Sorry for the late reply, I was out of office. For deserialization to work, all the proto types that are used need to be registered. Typically this is done when the Go package containing the proto is referenced: the package-level init code takes care of registration. However, Galley does not directly use these types; it depends on a reflection-like model to perform serialization operations, so the protos need to be explicitly registered.

Normally, the generated metadata code takes care of this:
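As a hedged illustration of what such generated registration code typically looks like (the type and names here are stand-ins, using the legacy `github.com/golang/protobuf` API, not the actual generated Istio source):

```go
package v1alpha3

import proto "github.com/golang/protobuf/proto"

// VirtualService stands in for the real generated message type.
type VirtualService struct{}

// Minimal proto.Message implementation, normally generated.
func (*VirtualService) Reset()         {}
func (*VirtualService) String() string { return "" }
func (*VirtualService) ProtoMessage()  {}

// init runs when the package is imported (even blank-imported),
// linking the fully-qualified proto name to the Go type so that
// Any/envelope unmarshalling can resolve it by name.
func init() {
	proto.RegisterType((*VirtualService)(nil), "istio.networking.v1alpha3.VirtualService")
}
```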


Thanks @ozevren for the detailed answer. I solved the problem by importing the package explicitly in the client.
I am willing to contribute to Istio and would look forward to participating in the config discussion/work items.

Another observation:

It looks like Istio creates a couple of destination rules on boot-up, namely istio-policy and istio-telemetry. Both of these objects have an exportTo field in their spec.


  exportTo:
  - '*'
  host: istio-policy.istio-system.svc.cluster.local
  http2MaxRequests: 10000
  maxRequestsPerConnection: 10000

If this exportTo is specified, the MCP client that I wrote starts throwing errors related to configScope. The error suggests that when Unmarshal is called on the proto message, the decoder expects wire type 0 for configScope (a varint value) but encounters wire type 2 (a length-delimited value).

rpc.Status{Code: 2,

Message: "proto: wrong wireType = 2 for field ConfigScope",


I couldn't trace how the exportTo field is related to configScope, but removing it from these destination rule objects solves the problem. Any hints?


I suspect you're running into proto versioning differences between what is deployed and your client code. In particular, I suspect either a field number has been reused or its type has changed. This can typically happen during the development cycle (i.e. between releases), as there is a good amount of churn in the API protos and code. Once they ship, this should not change.
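This explanation can be illustrated with how protobuf encodes field keys. A minimal, self-contained sketch (field number 7 is made up for illustration): a field key on the wire is `(field_number << 3) | wire_type`, so reusing a field number with a differently-typed field changes the wire type a stale decoder sees.

```go
package main

import "fmt"

// Protobuf wire types (subset).
const (
	wireVarint = 0 // varints: ints, bools, enums (e.g. a configScope enum)
	wireBytes  = 2 // length-delimited: strings, messages, repeated fields
)

// tag computes a protobuf field key: (field number << 3) | wire type.
func tag(fieldNumber, wireType int) int {
	return fieldNumber<<3 | wireType
}

func main() {
	// Hypothetical: an old schema defines field 7 as an enum (varint)...
	oldKey := tag(7, wireVarint)
	// ...and a newer schema reuses field 7 for a length-delimited field.
	newKey := tag(7, wireBytes)

	fmt.Printf("old key: %#x (wire type %d)\n", oldKey, oldKey&0x7) // old key: 0x38 (wire type 0)
	fmt.Printf("new key: %#x (wire type %d)\n", newKey, newKey&0x7) // new key: 0x3a (wire type 2)

	// A decoder generated from the old schema reads the tag, extracts
	// wire type 2 where it expects 0, and fails with an error like:
	// "proto: wrong wireType = 2 for field ConfigScope".
}
```

This is consistent with the symptom above: an older client decoding a newer server's bytes (or vice versa) trips over the reused field number.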

Try making sure that your deployed instances and your client are built from the same version.


@ozevren thanks! It must be a mismatch of versions!