Failed to sign CSR

Hello,
I am using Istio 1.1.7 and trying to integrate it with Vault (version 1.1.3). I would like to achieve service-to-service mTLS communication, as explained here: https://istio.io/docs/tasks/security/vault-ca/

So I set up the Kubernetes auth backend in Vault, along with the PKI secrets backend. I noticed Istio expects a specific path where the PKI is mounted, so I created a dedicated path for istio_ca. SDS seems properly configured (the three node agents are running fine) and the communication to Vault is properly set up.
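For reference, this is roughly how I set up the Kubernetes auth backend (a sketch from memory rather than my exact commands; the API server address and cert files are placeholders):

vault auth enable kubernetes
vault write auth/kubernetes/config \
    kubernetes_host="https://<k8s-api-server>:443" \
    kubernetes_ca_cert=@k8s-ca.crt \
    token_reviewer_jwt=@reviewer-token.jwt
vault write auth/kubernetes/role/istio-cert \
    bound_service_account_names=vault-citadel-sa \
    bound_service_account_namespaces=default \
    policies=k8spolicy \
    ttl=1h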

Once the httpbin and sleep pods are deployed, the logs say:

[2019-07-22 13:44:36.837][20][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 2, failed to sign CSR: no certificate chain in the CSR response
[2019-07-22 13:44:37.269][20][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 2, failed to sign CSR: no certificate chain in the CSR response
[2019-07-22 13:44:38.054][20][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 2, failed to get root cert

Looking at the Vault logs, I found the request and response from the node agent:

Jul 22 14:12:23 ip-10-13-1-167 vault[1807]: {"time":"2019-07-22T14:12:23.725021758Z","type":"request","auth":{"client_token":"hmac-sha256:5c6c3aedde96f3376223eda8efe911c145324ebde4246cb3238ce6eb7cdf2cc3","accessor":"hmac-sha256:de78be48051cb887036ff1e535d9ac20b3be00147833a23327b3855e61e11daf","display_name":"kubernetes-default-vault-citadel-sa","policies":["default","k8spolicy"],"token_policies":["default","k8spolicy"],"metadata":{"role":"istio-cert","service_account_name":"vault-citadel-sa","service_account_namespace":"default","service_account_secret_name":"vault-citadel-sa-token-4cnd7","service_account_uid":"aa933d86-a713-11e9-8fd1-0a4f26ebfca0"},"entity_id":"af53aca7-4ffe-517c-9424-ce96cd9ef00d","token_type":"service"},"request":{"id":"4bc4d0a8-633b-740c-7746-b4ed1bf0dd8a","operation":"update","client_token":"hmac-sha256:5c6c3aedde96f3376223eda8efe911c145324ebde4246cb3238ce6eb7cdf2cc3","client_token_accessor":"hmac-sha256:de78be48051cb887036ff1e535d9ac20b3be00147833a23327b3855e61e11daf","namespace":{"id":"root","path":""},"path":"istio_ca/sign/istio-pki-role","data":{"csr":"hmac-sha256:7f215bbe7ac8db3eda1f40f6ff286d86bc6abb8e157b713602c0902ab63dab41","exclude_cn_from_sans":true,"format":"hmac-sha256:9ba85113ce4c2691fda9236d57270669425a9377c02a1112929c844e61c44884","ttl":"hmac-sha256:299ffec4408d56f906dca9fcd1532caf0ab082262f68323ca1a6100a3d52af3f"},"policy_override":false,"remote_address":"10.13.4.206","wrap_ttl":0,"headers":{}},"error":""}
Jul 22 14:12:23 ip-10-13-1-167 vault[1807]: {"time":"2019-07-22T14:12:23.728651711Z","type":"response","auth":{"client_token":"hmac-sha256:5c6c3aedde96f3376223eda8efe911c145324ebde4246cb3238ce6eb7cdf2cc3","accessor":"hmac-sha256:de78be48051cb887036ff1e535d9ac20b3be00147833a23327b3855e61e11daf","display_name":"kubernetes-default-vault-citadel-sa","policies":["default","k8spolicy"],"token_policies":["default","k8spolicy"],"metadata":{"role":"istio-cert","service_account_name":"vault-citadel-sa","service_account_namespace":"default","service_account_secret_name":"vault-citadel-sa-token-4cnd7","service_account_uid":"aa933d86-a713-11e9-8fd1-0a4f26ebfca0"},"entity_id":"af53aca7-4ffe-517c-9424-ce96cd9ef00d","token_type":"service"},"request":{"id":"4bc4d0a8-633b-740c-7746-b4ed1bf0dd8a","operation":"update","client_token":"hmac-sha256:5c6c3aedde96f3376223eda8efe911c145324ebde4246cb3238ce6eb7cdf2cc3","client_token_accessor":"hmac-sha256:de78be48051cb887036ff1e535d9ac20b3be00147833a23327b3855e61e11daf","namespace":{"id":"root","path":""},"path":"istio_ca/sign/istio-pki-role","data":{"csr":"hmac-sha256:7f215bbe7ac8db3eda1f40f6ff286d86bc6abb8e157b713602c0902ab63dab41","exclude_cn_from_sans":true,"format":"hmac-sha256:9ba85113ce4c2691fda9236d57270669425a9377c02a1112929c844e61c44884","ttl":"hmac-sha256:299ffec4408d56f906dca9fcd1532caf0ab082262f68323ca1a6100a3d52af3f"},"policy_override":false,"remote_address":"10.13.4.206","wrap_ttl":0,"headers":{}},"response":{"data":{"certificate":"hmac-sha256:f12fba4c09e2df7f75d0303cb89cd45241930fb36f49252125165a9798720837","expiration":1563891143,"issuing_ca":"hmac-sha256:a804dd6874577d9ab85d3ec771c34acf36bb0c2221adda1e22ff7ab25caa2c85","serial_number":"hmac-sha256:e8a59b891a9f7e6866b344105aa080d69eafff68ea0b3c9de9d0ba29befb2160"},"headers":null},"error":""}

It seems to me that Vault is not returning the full chain; maybe it is returning only the client cert, but not the CA and the intermediate.

Could you help me here please?
Cheers,
Simone

@leitang @Oliver for help with Vault

Hi Simone,
To debug the problem, I suggest you sign a CSR directly on Vault to see whether the response from Vault is as expected, e.g., that it includes the correct certificate chain.
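For example, something like this (a sketch, assuming the mount and role names visible in your logs):

# generate a throwaway key and CSR
openssl req -new -newkey rsa:2048 -nodes -keyout test.key -out test.csr -subj "/CN=test"
# sign it against the same path the node agent uses
vault write istio_ca/sign/istio-pki-role csr=@test.csr ttl=1h

The response should contain the certificate, the issuing_ca and, when an intermediate CA is involved, a ca_chain field.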
Lei

@leitang Thanks for your suggestions… I think I’ve made some progress.
I realized that, since I am using an intermediate CA, I had to update VAULT_SIGN_CSR_PATH to point to the intermediate CA (something like istio_int/sign/istio-pki-role).
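In case it helps others, the change is roughly this (the daemonset name is from memory and may differ in your install):

kubectl -n istio-system set env daemonset/istio-nodeagent \
    VAULT_SIGN_CSR_PATH=istio_int/sign/istio-pki-role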
Logs look clean now, Envoy is not complaining anymore, and SDS says:

2019-07-23T13:32:28.734788Z	info	SDS: push root cert from node agent to proxy connection: "sidecar~10.13.8.70~sleep-755bcb8746-9k7hn.default~default.svc.cluster.local-5"

2019-07-23T13:32:29.210576Z	info	SDS: push key/cert pair from node agent to proxy: "sidecar~10.13.8.70~sleep-755bcb8746-9k7hn.default~default.svc.cluster.local-4"

According to the doc, I should see the certs and key under /etc/certs in the Envoy sidecar, but the path doesn’t exist at all.

kubectl exec -ti httpbin-679c5bcf6c-mjn4s -c istio-proxy -- ls /etc/certs

Am I missing something here? Thanks!
Simone

In SDS, we store the key and cert in Envoy memory instead of in a file. You can check a workload's key and cert by port-forwarding Envoy's admin port 15000 to a local port and checking the /certs endpoint.
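For example, with one of your pod names:

kubectl port-forward httpbin-679c5bcf6c-mjn4s 15000:15000 &
curl -s http://localhost:15000/certs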

Hi Philliple,
thanks a lot for your input, I can see the certs and the chain as you said.
I’ll continue with the Vault CA task over the next few days and let you know the outcome.
Meanwhile, if you need it, I am happy to contribute to the documentation for this task; it seems other people are struggling, and there is a lot to configure (Vault, the Kubernetes auth, the PKI, etc.).

Cheers,
Simone

That sounds like a good plan, @Simone. Feel free to send a PR here https://github.com/istio/istio.io

Hi @philliple,
sorry for the late reply, I have been busy lately. I saw there is already a PR open to improve the doc: https://github.com/istio/istio.io/pull/4432 . It looks like it will be merged soon, so I can wait for it and maybe propose comments on it.

I have another issue though… the certs are there as you said, but when I curl the httpbin endpoint from the sleep container I get a 503.

kubectl exec -it $(kubectl get pod -l app=sleep -o jsonpath='{.items[0].metadata.name}') -c sleep -- curl -s -o /dev/null -w "%{http_code}" httpbin:8000/headers

503%

Here is my destination rule:

kc describe destinationrule default -n istio-system
Name:         default
Namespace:    istio-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"networking.istio.io/v1alpha3","kind":"DestinationRule","metadata":{"annotations":{},"name":"default","namespace":"istio-system"},"spec":...
API Version:  networking.istio.io/v1alpha3
Kind:         DestinationRule
Metadata:
  Creation Timestamp:  2019-07-29T09:40:26Z
  Generation:          3
  Resource Version:    8131154
  Self Link:           /apis/networking.istio.io/v1alpha3/namespaces/istio-system/destinationrules/default
  UID:                 e52c356d-b1e4-11e9-8fd1-0a4f26ebfca0
Spec:
  Host:  *
  Traffic Policy:
    Tls:
      Mode:  ISTIO_MUTUAL
Events:      <none>

The Istio auth check looks good too:

SLEEP_POD=$(kubectl get pod -l app=sleep -o jsonpath={.items..metadata.name})
istioctl authn tls-check ${SLEEP_POD} httpbin.default.svc.cluster.local

HOST:PORT                                  STATUS     SERVER     CLIENT     AUTHN POLICY     DESTINATION RULE
httpbin.default.svc.cluster.local:8000     OK         mTLS       mTLS       default/         default/istio-system

And the mesh policy:

 kc describe meshpolicy
Name:         default
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"authentication.istio.io/v1alpha1","kind":"MeshPolicy","metadata":{"annotations":{},"name":"default","namespace":""},"spec":{"peers":[{"m...
API Version:  authentication.istio.io/v1alpha1
Kind:         MeshPolicy
Metadata:
  Creation Timestamp:  2019-06-28T12:43:55Z
  Generation:          2
  Resource Version:    8120500
  Self Link:           /apis/authentication.istio.io/v1alpha1/meshpolicies/default
  UID:                 643a33b9-99a2-11e9-b55a-02a21e0ac9b4
Spec:
  Peers:
    Mtls:
Events:  <none>

I think it’s all set up correctly, but I must be missing something. Looking at the docs, a 503 seems to occur when the destination rules are missing or not properly set up, but mine looks ok.
Could you help me here please?

Cheers,
Simone

@philliple @leitang I enabled Envoy debug logging and found the following:

[2019-08-01 12:47:27.512][90][debug][filter] [external/envoy/source/extensions/filters/listener/original_dst/original_dst.cc:18] original_dst: New connection accepted
[2019-08-01 12:47:27.512][90][debug][filter] [external/envoy/source/extensions/filters/listener/tls_inspector/tls_inspector.cc:72] tls inspector: new connection accepted
[2019-08-01 12:47:27.512][90][debug][filter] [external/envoy/source/extensions/filters/listener/tls_inspector/tls_inspector.cc:118] tls:onServerName(), requestedServerName: outbound_.8000_._.httpbin.default.svc.cluster.local
[2019-08-01 12:47:27.512][90][debug][main] [external/envoy/source/server/connection_handler_impl.cc:257] [C75] new connection
[2019-08-01 12:47:27.513][90][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C75] handshake error: 2
[2019-08-01 12:47:27.513][90][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C75] handshake error: 2
[2019-08-01 12:47:27.513][90][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C75] handshake error: 1
[2019-08-01 12:47:27.513][90][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:175] [C75] TLS error: 268436504:SSL routines:OPENSSL_internal:TLSV1_ALERT_UNKNOWN_CA
[2019-08-01 12:47:27.513][90][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C75] closing socket: 0

From the client side (the sleep container) I get:

[2019-08-01 13:15:16.849][94][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:92] creating a new connection
[2019-08-01 13:15:16.849][94][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C249] connecting
[2019-08-01 13:15:16.849][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C249] connecting to 10.13.5.231:80
[2019-08-01 13:15:16.849][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C249] connection in progress
[2019-08-01 13:15:16.849][94][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-08-01 13:15:16.849][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:517] [C249] connected
[2019-08-01 13:15:16.849][94][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C249] handshake error: 2
[2019-08-01 13:15:16.850][94][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C249] handshake error: 1
[2019-08-01 13:15:16.850][94][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:175] [C249] TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
[2019-08-01 13:15:16.850][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C249] closing socket: 0
[2019-08-01 13:15:16.850][94][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C249] disconnect. resetting 0 pending requests
[2019-08-01 13:15:16.850][94][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:133] [C249] client disconnected, failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
[2019-08-01 13:15:16.850][94][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:173] [C249] purge pending, failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
[2019-08-01 13:15:16.850][94][debug][router] [external/envoy/source/common/router/router.cc:644] [C248][S7806417423125524970] upstream reset: reset reason connection failure
[2019-08-01 13:15:16.850][94][debug][router] [external/envoy/source/common/router/router.cc:892] [C248][S7806417423125524970] performing retry
[2019-08-01 13:15:16.864][94][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:92] creating a new connection
[2019-08-01 13:15:16.864][94][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C250] connecting
[2019-08-01 13:15:16.864][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C250] connecting to 10.13.5.231:80
[2019-08-01 13:15:16.864][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C250] connection in progress
[2019-08-01 13:15:16.864][94][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-08-01 13:15:16.864][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:517] [C250] connected
[2019-08-01 13:15:16.864][94][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C250] handshake error: 2
[2019-08-01 13:15:16.865][94][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C250] handshake error: 1
[2019-08-01 13:15:16.865][94][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:175] [C250] TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
[2019-08-01 13:15:16.865][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C250] closing socket: 0
[2019-08-01 13:15:16.865][94][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C250] disconnect. resetting 0 pending requests
[2019-08-01 13:15:16.865][94][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:133] [C250] client disconnected, failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
[2019-08-01 13:15:16.865][94][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:173] [C250] purge pending, failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
[2019-08-01 13:15:16.865][94][debug][router] [external/envoy/source/common/router/router.cc:644] [C248][S7806417423125524970] upstream reset: reset reason connection failure
[2019-08-01 13:15:16.865][94][debug][router] [external/envoy/source/common/router/router.cc:892] [C248][S7806417423125524970] performing retry
[2019-08-01 13:15:16.926][94][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:92] creating a new connection
[2019-08-01 13:15:16.926][94][debug][client] [external/envoy/source/common/http/codec_client.cc:26] [C251] connecting
[2019-08-01 13:15:16.926][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:644] [C251] connecting to 10.13.5.231:80
[2019-08-01 13:15:16.926][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:653] [C251] connection in progress
[2019-08-01 13:15:16.926][94][debug][pool] [external/envoy/source/common/http/conn_pool_base.cc:20] queueing request due to no available connections
[2019-08-01 13:15:16.926][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:517] [C251] connected
[2019-08-01 13:15:16.926][94][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C251] handshake error: 2
[2019-08-01 13:15:16.927][94][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:142] [C251] handshake error: 1
[2019-08-01 13:15:16.927][94][debug][connection] [external/envoy/source/extensions/transport_sockets/tls/ssl_socket.cc:175] [C251] TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
[2019-08-01 13:15:16.927][94][debug][connection] [external/envoy/source/common/network/connection_impl.cc:183] [C251] closing socket: 0
[2019-08-01 13:15:16.927][94][debug][client] [external/envoy/source/common/http/codec_client.cc:82] [C251] disconnect. resetting 0 pending requests
[2019-08-01 13:15:16.927][94][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:133] [C251] client disconnected, failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
[2019-08-01 13:15:16.927][94][debug][pool] [external/envoy/source/common/http/http1/conn_pool.cc:173] [C251] purge pending, failure reason: TLS error: 268435581:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED
[2019-08-01 13:15:16.927][94][debug][router] [external/envoy/source/common/router/router.cc:644] [C248][S7806417423125524970] upstream reset: reset reason connection failure
[2019-08-01 13:15:16.927][94][debug][filter] [src/envoy/http/mixer/filter.cc:133] Called Mixer::Filter : encodeHeaders 2
[2019-08-01 13:15:16.927][94][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1305] [C248][S7806417423125524970] encoding headers via codec (end_stream=false):
':status', '503'
'content-length', '91'
'content-type', 'text/plain'
'date', 'Thu, 01 Aug 2019 13:15:16 GMT'
'server', 'envoy'

So it looks like the client can’t validate the certificate (which one?), but I am not sure why, since both the CA and the intermediate are generated by Vault (and both signed by the CA).

What am I missing here?

Cheers,
Simone

I think I bumped into this: https://github.com/istio/istio/issues/14853. Did I?
I am using Istio 1.1.7…
This is from the sidecar of the sleep container…

[2019-08-01 13:44:30.627][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:376] [C1] writing 209 bytes, end_stream false
[2019-08-01 13:44:30.627][19][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:1443] [C1][S2465367718764142786] encoding data via codec (size=2584 end_stream=true)
[2019-08-01 13:44:30.627][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:376] [C1] writing 2596 bytes, end_stream false
[2019-08-01 13:44:30.627][19][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:133] item added to deferred deletion list (size=1)
[2019-08-01 13:44:30.627][19][trace][http] [external/envoy/source/common/http/conn_manager_impl.cc:811] [C1][S2465367718764142786] decode headers called: filter=0x3c2e5f0 status=1
[2019-08-01 13:44:30.627][19][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:384] [C1] parsed 110 bytes
[2019-08-01 13:44:30.627][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C1] socket event: 2
[2019-08-01 13:44:30.627][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C1] write ready
[2019-08-01 13:44:30.627][19][trace][connection] [external/envoy/source/common/network/raw_buffer_socket.cc:66] [C1] write returns: 2805
[2019-08-01 13:44:30.627][19][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:53] clearing deferred deletion list (size=1)
[2019-08-01 13:44:32.627][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C1] socket event: 3
[2019-08-01 13:44:32.627][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C1] write ready
[2019-08-01 13:44:32.627][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:478] [C1] read ready
[2019-08-01 13:44:32.627][19][trace][connection] [external/envoy/source/common/network/raw_buffer_socket.cc:23] [C1] read returns: 110
[2019-08-01 13:44:32.627][19][trace][connection] [external/envoy/source/common/network/raw_buffer_socket.cc:37] [C1] read error: Resource temporarily unavailable

The error TLSV1_ALERT_UNKNOWN_CA in the log indicates that the CA certificate is not recognizable. Quoting from "Python ssl server reporting TLSV1_ALERT_UNKNOWN_CA" on Stack Overflow: "The TLSv1 unknown CA alert is sent by some clients if they cannot verify the certificate of the server because it is signed by an unknown issuer CA. You can avoid this kind of exception if you use a certificate which is already trusted by the client or which can be validated against a root CA of the client (don't forget to include the chain certificates too)."

I suggest you dump the certificate chains received by the client and the server, and verify whether the chains are valid on both sides.
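For example, with openssl (the file names are placeholders for the dumped PEMs):

# does the workload cert chain up to the root via the intermediate?
openssl verify -CAfile root-ca.pem -untrusted intermediate-ca.pem workload-cert.pem
# inspect subject/issuer/validity of each certificate
openssl x509 -in workload-cert.pem -noout -subject -issuer -dates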

I’ve found something. I dumped the CA certs on both the client and the server, and they are both mounting an intermediate CA instead of the root one. I don’t really know why this is happening.
Vault has the proper CA, but the pods keep mounting the wrong one. I tried restarting the node agents and the pods too, but always with the same result.
I know Envoy’s certs are held in memory, so I expect that when I kill the pod the memory is cleared and, after the restart, I should see the CA and the chain again. Instead, what I see is a bunch of certs… always the same CA (the wrong one) and a bunch of different intermediates, e.g.

{
 "certificates": [
  {
   "ca_cert": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "7b7a593253921de59c665342a7fc5d8a6e7a0922",
     "subject_alt_names": [],
     "days_until_expiration": "1814",
     "valid_from": "2019-07-23T13:13:35Z",
     "expiration_time": "2024-07-21T13:14:05Z"
    }
   ],
   "cert_chain": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "2c1b8a9afa01654eea6760d7089cb2c46e2ea2bd",
     "subject_alt_names": [
      {
       "uri": "spiffe://cluster.local/ns/default/sa/vault-citadel-sa"
      }
     ],
     "days_until_expiration": "0",
     "valid_from": "2019-08-02T13:10:13Z",
     "expiration_time": "2019-08-03T13:10:43Z"
    }
   ]
  },
  {
   "ca_cert": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "7b7a593253921de59c665342a7fc5d8a6e7a0922",
     "subject_alt_names": [],
     "days_until_expiration": "1814",
     "valid_from": "2019-07-23T13:13:35Z",
     "expiration_time": "2024-07-21T13:14:05Z"
    }
   ],
   "cert_chain": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "2c1b8a9afa01654eea6760d7089cb2c46e2ea2bd",
     "subject_alt_names": [
      {
       "uri": "spiffe://cluster.local/ns/default/sa/vault-citadel-sa"
      }
     ],
     "days_until_expiration": "0",
     "valid_from": "2019-08-02T13:10:13Z",
     "expiration_time": "2019-08-03T13:10:43Z"
    }
   ]
  },
  {
   "ca_cert": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "7b7a593253921de59c665342a7fc5d8a6e7a0922",
     "subject_alt_names": [],
     "days_until_expiration": "1814",
     "valid_from": "2019-07-23T13:13:35Z",
     "expiration_time": "2024-07-21T13:14:05Z"
    }
   ],
   "cert_chain": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "2c1b8a9afa01654eea6760d7089cb2c46e2ea2bd",
     "subject_alt_names": [
      {
       "uri": "spiffe://cluster.local/ns/default/sa/vault-citadel-sa"
      }
     ],
     "days_until_expiration": "0",
     "valid_from": "2019-08-02T13:10:13Z",
     "expiration_time": "2019-08-03T13:10:43Z"
    }
   ]
  },
  {
   "ca_cert": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "7b7a593253921de59c665342a7fc5d8a6e7a0922",
     "subject_alt_names": [],
     "days_until_expiration": "1814",
     "valid_from": "2019-07-23T13:13:35Z",
     "expiration_time": "2024-07-21T13:14:05Z"
    }
   ],
   "cert_chain": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "2c1b8a9afa01654eea6760d7089cb2c46e2ea2bd",
     "subject_alt_names": [
      {
       "uri": "spiffe://cluster.local/ns/default/sa/vault-citadel-sa"
      }
     ],
     "days_until_expiration": "0",
     "valid_from": "2019-08-02T13:10:13Z",
     "expiration_time": "2019-08-03T13:10:43Z"
    }
   ]

I think there is something wrong in the way SDS fetches the certs… as mentioned above, I posted a bug, https://github.com/istio/istio/issues/14853, and maybe I am hitting that.
What do you think?
Thanks a lot for your help!
Simone

Can you try simplified configurations? For example:
– One CA that directly signs the workload certificates (see the sketch after this list).
– One CA signs one intermediate CA, which signs the workload certificates.
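For the first configuration, a minimal Vault sketch could look like this (the mount and role names mirror the ones in your logs; the flags are illustrative, not a verified recipe):

vault secrets enable -path=istio_ca pki
vault secrets tune -max-lease-ttl=87600h istio_ca
vault write istio_ca/root/generate/internal common_name="istio-ca" ttl=87600h
vault write istio_ca/roles/istio-pki-role \
    allow_any_name=true allowed_uri_sans="spiffe://*" ttl=24h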

Hi Leitang,
I just tried the first configuration: one CA that directly signs the workload certificates.

Unfortunately the pods cannot get any certificate… the certificate list on Envoy is empty. I enabled the trace logs and here they are:

  id: "sidecar~10.13.9.13~httpbin-679c5bcf6c-cqkp4.default~default.svc.cluster.local"
  cluster: "httpbin.default"
  metadata {
    fields {
      key: "CONFIG_NAMESPACE"
      value {
        string_value: "default"
      }
    }
    fields {
      key: "INTERCEPTION_MODE"
      value {
        string_value: "REDIRECT"
      }
    }
    fields {
      key: "ISTIO_META_INSTANCE_IPS"
      value {
        string_value: "10.13.9.13,10.13.9.13"
      }
    }
    fields {
      key: "ISTIO_PROXY_SHA"
      value {
        string_value: "istio-proxy:5ea236aa3f759df29ef9209d0cf8e85bf1c8fb2e"
      }
    }
    fields {
      key: "ISTIO_PROXY_VERSION"
      value {
        string_value: "1.1.3"
      }
    }
    fields {
      key: "ISTIO_VERSION"
      value {
        string_value: "release-1.1-20190807-09-16"
      }
    }
    fields {
      key: "POD_NAME"
      value {
        string_value: "httpbin-679c5bcf6c-cqkp4"
      }
    }
    fields {
      key: "app"
      value {
        string_value: "httpbin"
      }
    }
    fields {
      key: "istio"
      value {
        string_value: "sidecar"
      }
    }
    fields {
      key: "version"
      value {
        string_value: "v1"
      }
    }
  }
  locality {
  }
  build_version: "5ea236aa3f759df29ef9209d0cf8e85bf1c8fb2e/1.11.0-dev/Clean/RELEASE/BoringSSL"
}
resource_names: "default"
type_url: "type.googleapis.com/envoy.api.v2.auth.Secret"

[2019-08-07 13:51:15.753][19][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:198] Queued message to write (601 bytes)
[2019-08-07 13:51:15.753][23][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:48] completionThread CQ event 0 true
[2019-08-07 13:51:15.753][19][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:248] handleOpCompletion op=0 ok=true inflight=1
[2019-08-07 13:51:15.753][19][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:233] Write op dispatched
[2019-08-07 13:51:15.753][23][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:48] completionThread CQ event 3 true
[2019-08-07 13:51:15.753][19][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:248] handleOpCompletion op=3 ok=true inflight=2
[2019-08-07 13:51:15.843][19][trace][upstream] [external/envoy/source/common/upstream/upstream_impl.cc:1329] starting async DNS resolution for zipkin.istio-system
[2019-08-07 13:51:15.843][19][debug][upstream] [external/envoy/source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-08-07 13:51:15.844][19][debug][upstream] [external/envoy/source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-08-07 13:51:15.844][19][debug][upstream] [external/envoy/source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-08-07 13:51:15.844][19][debug][upstream] [external/envoy/source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-08-07 13:51:15.847][19][debug][upstream] [external/envoy/source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-08-07 13:51:15.848][19][trace][upstream] [external/envoy/source/common/upstream/upstream_impl.cc:1336] async DNS resolution complete for zipkin.istio-system
[2019-08-07 13:51:16.231][23][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:48] completionThread CQ event 1 true
[2019-08-07 13:51:16.231][19][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:248] handleOpCompletion op=1 ok=true inflight=1
[2019-08-07 13:51:16.231][23][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:48] completionThread CQ event 2 false
[2019-08-07 13:51:16.231][19][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:248] handleOpCompletion op=2 ok=false inflight=1
[2019-08-07 13:51:16.232][23][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:48] completionThread CQ event 5 true
[2019-08-07 13:51:16.232][19][trace][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:248] handleOpCompletion op=5 ok=true inflight=1
[2019-08-07 13:51:16.232][19][debug][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:332] Finish with grpc-status code 2
[2019-08-07 13:51:16.232][19][debug][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:189] notifyRemoteClose 2 failed to sign CSR: no certificate chain in the CSR response
[2019-08-07 13:51:16.232][19][warning][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_stream_lib/common/config/grpc_stream.h:86] gRPC config stream closed: 2, failed to sign CSR: no certificate chain in the CSR response
[2019-08-07 13:51:16.232][19][debug][config] [bazel-out/k8-opt/bin/external/envoy/source/common/config/_virtual_includes/grpc_mux_subscription_lib/common/config/grpc_mux_subscription_impl.h:74] gRPC update for type.googleapis.com/envoy.api.v2.auth.Secret failed
[2019-08-07 13:51:16.232][19][debug][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:368] Stream cleanup with 0 in-flight tags
[2019-08-07 13:51:16.232][19][debug][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:357] Deferred delete
[2019-08-07 13:51:16.232][19][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:133] item added to deferred deletion list (size=1)
[2019-08-07 13:51:16.232][19][trace][main] [external/envoy/source/common/event/dispatcher_impl.cc:53] clearing deferred deletion list (size=1)
[2019-08-07 13:51:16.232][19][debug][grpc] [external/envoy/source/common/grpc/google_async_client_impl.cc:136] GoogleAsyncStreamImpl destruct
[2019-08-07 13:51:16.499][19][trace][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:163] Stale original dst hosts cleanup triggered.
[2019-08-07 13:51:16.499][19][trace][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:163] Stale original dst hosts cleanup triggered.
[2019-08-07 13:51:16.499][19][trace][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:163] Stale original dst hosts cleanup triggered.
[2019-08-07 13:51:16.499][19][trace][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:163] Stale original dst hosts cleanup triggered.
[2019-08-07 13:51:16.499][19][trace][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:163] Stale original dst hosts cleanup triggered.
[2019-08-07 13:51:16.499][19][trace][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:163] Stale original dst hosts cleanup triggered.
[2019-08-07 13:51:16.499][19][trace][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:163] Stale original dst hosts cleanup triggered.
[2019-08-07 13:51:16.499][19][trace][upstream] [external/envoy/source/common/upstream/original_dst_cluster.cc:163] Stale original dst hosts cleanup triggered.
[2019-08-07 13:51:16.834][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:440] [C1] socket event: 3
[2019-08-07 13:51:16.834][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:508] [C1] write ready
[2019-08-07 13:51:16.834][19][trace][connection] [external/envoy/source/common/network/connection_impl.cc:478] [C1] read ready
[2019-08-07 13:51:16.834][19][trace][connection] [external/envoy/source/common/network/raw_buffer_socket.cc:23] [C1] read returns: 110
[2019-08-07 13:51:16.834][19][trace][connection] [external/envoy/source/common/network/raw_buffer_socket.cc:37] [C1] read error: Resource temporarily unavailable
[2019-08-07 13:51:16.834][19][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:363] [C1] parsing 110 bytes
[2019-08-07 13:51:16.834][19][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:476] [C1] message begin
[2019-08-07 13:51:16.834][19][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:243] [C1] new stream
[2019-08-07 13:51:16.834][19][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:331] [C1] completed header: key=Host value=127.0.0.1:15000
[2019-08-07 13:51:16.834][19][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:331] [C1] completed header: key=User-Agent value=Go-http-client/1.1
[2019-08-07 13:51:16.834][19][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:442] [C1] headers complete
[2019-08-07 13:51:16.834][19][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:331] [C1] completed header: key=Accept-Encoding value=gzip
[2019-08-07 13:51:16.834][19][trace][http] [external/envoy/source/common/http/http1/codec_impl.cc:463] [C1] message complete
[2019-08-07 13:51:16.834][19][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:580] [C1][S8558499686068802120] request headers complete (end_stream=true):
':authority', '127.0.0.1:15000'
':path', '/stats?usedonly'
':method', 'GET'
'user-agent', 'Go-http-client/1.1'
'accept-encoding', 'gzip'

[2019-08-07 13:51:16.834][19][debug][http] [external/envoy/source/common/http/conn_manager_impl.cc:1040] [C1][S8558499686068802120] request end stream

SDS logs:

2019-08-07T14:03:06.784452Z	info	SDS: connection with "sidecar~10.13.9.13~httpbin-679c5bcf6c-cqkp4.default~default.svc.cluster.local-342" terminated rpc error: code = Canceled desc = context canceled
2019-08-07T14:03:09.504089Z	error	no certificate chain in the CSR response
2019-08-07T14:03:09.504116Z	error	CSR for "default" hit non-retryable error failed to sign CSR: no certificate chain in the CSR response
2019-08-07T14:03:09.504122Z	error	Failed to generate secret for proxy "sidecar~10.13.9.13~httpbin-679c5bcf6c-cqkp4.default~default.svc.cluster.local-343": failed to sign CSR: no certificate chain in the CSR response
2019-08-07T14:03:09.504129Z	error	Failed to get secret for proxy "sidecar~10.13.9.13~httpbin-679c5bcf6c-cqkp4.default~default.svc.cluster.local" connection "sidecar~10.13.9.13~httpbin-679c5bcf6c-cqkp4.default~default.svc.cluster.local-343" from secret cache: failed to sign CSR: no certificate chain in the CSR response

So the chain is empty, but I am not sure why; the CA seems properly configured and I expect SDS to pick it up properly… I still see read error: Resource temporarily unavailable in Envoy and I am not sure whether that’s the culprit here.
Cheers,
Simone

Based on the log entry “failed to sign CSR: no certificate chain in the CSR response”, the CSR response does not contain a certificate chain. The failure may be caused by the following reasons:

  • The CSR may not reach the Vault server, e.g., the Vault endpoint is not properly configured.
  • The CSR reached the Vault server but the Vault server did not return a valid CSR response.

I suggest you add or turn on logging on the client and on the Vault server to check what request is sent from the client, whether the Vault server receives it, and what the Vault server responds. Meanwhile, you can also use the Vault command-line tool to directly sign a CSR at your Vault server and check whether the response contains a certificate chain.
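For example (workload.csr is a placeholder; jq is only used for readability):

vault write -format=json istio_ca/sign/istio-pki-role csr=@workload.csr \
    | jq '.data | {certificate, issuing_ca, ca_chain}'

If ca_chain comes back empty or missing, that would be consistent with the "no certificate chain in the CSR response" error.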

Hi Leitang,
thanks very much for your input. This is what I see in the Vault logs:
Request:


Aug  8 09:37:32 ip-10-13-1-167 vault[1807]: {"time":"2019-08-08T09:37:32.001899157Z","type":"request","auth":{"client_token":"hmac-sha256:1834d25c46f3dfed84b859697ca03b072740c69c96fc0faca8e03e0f42449e07","accessor":"hmac-sha256:c2a886e7105f1e678e55cf5c845e2ab20504e956f3c64e686369a2e6c5f77d25","display_name":"kubernetes-default-vault-citadel-sa","policies":["default","k8spolicy"],"token_policies":["default","k8spolicy"],"metadata":{"role":"istio-cert","service_account_name":"vault-citadel-sa","service_account_namespace":"default","service_account_secret_name":"vault-citadel-sa-token-4cnd7","service_account_uid":"aa933d86-a713-11e9-8fd1-0a4f26ebfca0"},"entity_id":"af53aca7-4ffe-517c-9424-ce96cd9ef00d","token_type":"service"},"request":{"id":"7fb61a37-3076-4beb-c422-dc59441eea69","operation":"update","client_token":"hmac-sha256:1834d25c46f3dfed84b859697ca03b072740c69c96fc0faca8e03e0f42449e07","client_token_accessor":"hmac-sha256:c2a886e7105f1e678e55cf5c845e2ab20504e956f3c64e686369a2e6c5f77d25","namespace":{"id":"root","path":""},"path":"istio_ca/sign/istio-pki-role","data":{"csr":"hmac-sha256:9ec11604dbf872b128a6e30266ae2242548d7f10a0faaf9d199ebf55d8dce6d4","exclude_cn_from_sans":true,"format":"hmac-sha256:9ba85113ce4c2691fda9236d57270669425a9377c02a1112929c844e61c44884","ttl":"hmac-sha256:299ffec4408

Response:

Aug  8 09:37:32 ip-10-13-1-167 vault[1807]: {"time":"2019-08-08T09:37:32.005374843Z","type":"response","auth":{"client_token":"hmac-sha256:1834d25c46f3dfed84b859697ca03b072740c69c96fc0faca8e03e0f42449e07","accessor":"hmac-sha256:c2a886e7105f1e678e55cf5c845e2ab20504e956f3c64e686369a2e6c5f77d25","display_name":"kubernetes-default-vault-citadel-sa","policies":["default","k8spolicy"],"token_policies":["default","k8spolicy"],"metadata":{"role":"istio-cert","service_account_name":"vault-citadel-sa","service_account_namespace":"default","service_account_secret_name":"vault-citadel-sa-token-4cnd7","service_account_uid":"aa933d86-a713-11e9-8fd1-0a4f26ebfca0"},"entity_id":"af53aca7-4ffe-517c-9424-ce96cd9ef00d","token_type":"service"},"request":{"id":"7fb61a37-3076-4beb-c422-dc59441eea69","operation":"update","client_token":"hmac-sha256:1834d25c46f3dfed84b859697ca03b072740c69c96fc0faca8e03e0f42449e07","client_token_accessor":"hmac-sha256:c2a886e7105f1e678e55cf5c845e2ab20504e956f3c64e686369a2e6c5f77d25","namespace":{"id":"root","path":""},"path":"istio_ca/sign/istio-pki-role","data":{"csr":"hmac-sha256:9ec11604dbf872b128a6e30266ae2242548d7f10a0faaf9d199ebf55d8dce6d4","exclude_cn_from_sans":true,"format":"hmac-sha256:9ba85113ce4c2691fda9236d57270669425a9377c02a1112929c844e61c44884","ttl":"hmac-sha256:299ffec4408d56f906dca9fcd1532caf0ab082262f68323ca1a6100a3d52af3f"},"policy_override":false,"remote_address":"10.13.9.83","wrap_ttl":0,"headers":{}},"response":{"data":{"certificate":"hmac-sha256:bae53d09e49bf37cfd49dcc97a13efd4e76d10bc4249c44ba0ff44e7d35d8014","expiration":1565343452,"issuing_ca":"hmac-sha256:2818c0b0531d37ac51d3d3628697a99c0082a4b6d60c55b14d1e7a38b4cfa19c","serial_number":"hmac-sha256:16a47f77356f43f0856499f998ff0ca06d1af71cd389d8a80cca343d8696f69b"},"headers":null},"error":""}

The response looks okay to me: there is the certificate with the issuing CA… I don’t know why SDS doesn’t pick it up. I also tried signing a certificate request directly with Vault and it looks good. Is there any way I can enable debug logs on the SDS side? I suspect the culprit is there.
Cheers,
Simone

The log level of the Node Agent can be set through the --log_output_level parameter: https://istio.io/docs/reference/commands/node_agent/. If the logs are not detailed enough, you may add your own logging to the Node Agent and Envoy code directly.
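For example (a sketch; the daemonset name and the exact scope names are assumptions):

kubectl -n istio-system edit daemonset istio-nodeagent
# then add the flag to the node agent container args, e.g.:
#   args: ["--log_output_level", "default:debug"]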

@leitang @Simone Any further ideas on this issue?
I’m facing the exact same issue when getting signed with an intermediate cert. I can’t think of a reason why the CA would be invalid in this case, if Vault is signing the CSR and returning the correct response.

Hi Kasun,
For the issue you encountered (i.e., the CA is invalid), can you dump the entire certificate chain received by Envoy (you may need to turn on Envoy debug logging or add certificate-chain logging statements to Envoy) and verify whether the chain is valid (e.g., with the openssl tool)?
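One way to get a readable chain is to capture what is actually presented on the wire (a sketch; substitute your service and port, and run it from a pod that has openssl; the handshake itself may fail without a client cert, but -showcerts still prints the server's presented chain):

openssl s_client -connect <service>.<namespace>.svc.cluster.local:<port> -showcerts </dev/null 2>/dev/null \
    | sed -n '/BEGIN CERT/,/END CERT/p' > chain.pem
# print subject/issuer of every certificate in the presented chain
openssl crl2pkcs7 -nocrl -certfile chain.pem | openssl pkcs7 -print_certs -noout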

Would you happen to know how to dump a readable chain?

This is what I get from the /certs endpoint:

{
 "certificates": [
  {
   "ca_cert": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "7ca0a6bf49631146dacb74313bfe2ddd9ad4d34a",
     "subject_alt_names": [],
     "days_until_expiration": "30",
     "valid_from": "2019-09-12T23:25:47Z",
     "expiration_time": "2019-10-14T23:26:17Z"
    }
   ],
   "cert_chain": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "68ab1120a7dc68e956e900b4d6f1a287e6920c9d",
     "subject_alt_names": [
      {
       "uri": "spiffe://cluster.local/ns/apps/sa/vault-citadel-sa"
      }
     ],
     "days_until_expiration": "0",
     "valid_from": "2019-09-14T14:18:10Z",
     "expiration_time": "2019-09-15T14:18:40Z"
    }
   ]
  },
  {
   "ca_cert": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "7ca0a6bf49631146dacb74313bfe2ddd9ad4d34a",
     "subject_alt_names": [],
     "days_until_expiration": "30",
     "valid_from": "2019-09-12T23:25:47Z",
     "expiration_time": "2019-10-14T23:26:17Z"
    }
   ],
   "cert_chain": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "1e9b3fc1e1e3ff36ca29411592c9a3baf38c0cb6",
     "subject_alt_names": [
      {
       "uri": "spiffe://cluster.local/ns/apps/sa/vault-citadel-sa"
      }
     ],
     "days_until_expiration": "0",
     "valid_from": "2019-09-14T14:18:48Z",
     "expiration_time": "2019-09-15T14:19:18Z"
    }
   ]
  },
  {
   "ca_cert": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "7ca0a6bf49631146dacb74313bfe2ddd9ad4d34a",
     "subject_alt_names": [],
     "days_until_expiration": "30",
     "valid_from": "2019-09-12T23:25:47Z",
     "expiration_time": "2019-10-14T23:26:17Z"
    }
   ],
   "cert_chain": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "1e9b3fc1e1e3ff36ca29411592c9a3baf38c0cb6",
     "subject_alt_names": [
      {
       "uri": "spiffe://cluster.local/ns/apps/sa/vault-citadel-sa"
      }
     ],
     "days_until_expiration": "0",
     "valid_from": "2019-09-14T14:18:48Z",
     "expiration_time": "2019-09-15T14:19:18Z"
    }
   ]
  },
  {
   "ca_cert": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "7ca0a6bf49631146dacb74313bfe2ddd9ad4d34a",
     "subject_alt_names": [],
     "days_until_expiration": "30",
     "valid_from": "2019-09-12T23:25:47Z",
     "expiration_time": "2019-10-14T23:26:17Z"
    }
   ],
   "cert_chain": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "1e9b3fc1e1e3ff36ca29411592c9a3baf38c0cb6",
     "subject_alt_names": [
      {
       "uri": "spiffe://cluster.local/ns/apps/sa/vault-citadel-sa"
      }
     ],
     "days_until_expiration": "0",
     "valid_from": "2019-09-14T14:18:48Z",
     "expiration_time": "2019-09-15T14:19:18Z"
    }
   ]
  },
  {
   "ca_cert": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "7ca0a6bf49631146dacb74313bfe2ddd9ad4d34a",
     "subject_alt_names": [],
     "days_until_expiration": "30",
     "valid_from": "2019-09-12T23:25:47Z",
     "expiration_time": "2019-10-14T23:26:17Z"
    }
   ],
   "cert_chain": [
    {
     "path": "\u003cinline\u003e",
     "serial_number": "1e9b3fc1e1e3ff36ca29411592c9a3baf38c0cb6",
     "subject_alt_names": [
      {
       "uri": "spiffe://cluster.local/ns/apps/sa/vault-citadel-sa"
      }
     ],
     "days_until_expiration": "0",
     "valid_from": "2019-09-14T14:18:48Z",
     "expiration_time": "2019-09-15T14:19:18Z"
    }
   ]
  }