Hi team,
I successfully installed a k8s cluster (CentOS 8, kubeadm) on my PC and then installed Istio 1.9.3 on it, following all the steps and setup described in the Virtual Machine Installation guide. I changed the istio-eastwestgateway service to NodePort and added ISTIO_PILOT_PORT=30529 (the NodePort that maps to port 15012 on istio-eastwestgateway) to the cluster.env file.
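Concretely, the change was along these lines (the exact commands may differ slightly from what I actually ran, and the cluster.env path is the one the install guide uses):

# On the cluster: switch the east-west gateway Service to NodePort
kubectl -n istio-system patch svc istio-eastwestgateway -p '{"spec":{"type":"NodePort"}}'

# Look up the NodePort that backs port 15012
kubectl -n istio-system get svc istio-eastwestgateway \
  -o jsonpath='{.spec.ports[?(@.port==15012)].nodePort}'

# On the VM: tell the sidecar to use that NodePort (30529 in my case)
echo 'ISTIO_PILOT_PORT=30529' | sudo tee -a /var/lib/istio/envoy/cluster.env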
The first time the virtual machine started the istio service, everything looked normal. But after I deployed the HelloWorld service (kubectl apply -n sample -f samples/helloworld/helloworld.yaml) and ran curl helloworld.sample.svc:5000/hello on the virtual machine, it returned an error:
upstream connect error or disconnect/reset before headers. reset reason: local reset, transport failure reason: TLS error: 268435612:SSL routines:OPENSSL_internal:HTTP_REQUEST
Running curl again returned the same error.
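On the VM, these are the checks I know of to narrow that down (log path and admin ports are the defaults the VM sidecar uses, as far as I understand):

# Is the sidecar service running?
sudo systemctl status istio

# Recent istio-agent / Envoy log output (default log location for the VM install)
sudo tail -n 50 /var/log/istio/istio.log

# Sidecar readiness and the clusters Envoy knows about
curl -v localhost:15021/healthz/ready
curl -s localhost:15000/clusters | grep helloworld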
There is another scenario: if the virtual machine is restarted without first running sudo systemctl stop istio; sudo rpm -e istio-sidecar, then after the machine is started again and the token is set correctly, starting the istio service reports a CA authentication error like this:
2021-04-16T06:27:30.115925Z warn sds failed to warm certificate: failed to generate workload certificate: create certificate: rpc error: code = Unavailable desc = connection closed
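For reference, these are the files the VM onboarding step copies onto the machine (paths as given in the Virtual Machine Installation guide); the istio-token in particular is short-lived and has to be regenerated when it expires:

sudo ls -l /etc/certs/root-cert.pem             # root certificate from istiod
sudo ls -l /var/run/secrets/tokens/istio-token  # bootstrap token (expires)
sudo ls -l /var/lib/istio/envoy/cluster.env     # cluster.env, including ISTIO_PILOT_PORT
sudo ls -l /etc/istio/config/mesh               # mesh configuration
grep istiod /etc/hosts                          # the istiod.istio-system.svc entry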
What might be causing this? Is my network not set up correctly, or is some additional configuration needed?
I found the cause of the problem: the k8s cluster lacked a load balancer. Once I deployed MetalLB as a load balancer in my k8s cluster, the problems were solved (a sketch of the kind of MetalLB config involved follows the list):
1. curl helloworld.sample.svc:5000/hello returns the correct response.
2. The WorkloadEntry corresponding to my virtual machine is generated automatically in my k8s cluster.
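For reference, a minimal Layer 2 configuration for the ConfigMap-based MetalLB releases of that time looks roughly like this (the address range is only an example from my 192.168.3.x network; use a free range on yours):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.3.240-192.168.3.254
EOF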
Why does this problem still exist? I use MetalLB as a load balancer in my k8s cluster, but curl from the VM still fails:
Master Node
root@sxf-virtual-machine:~/istio-1.10.3# ksvc -A
NAMESPACE      NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                                                           AGE
default        details                 ClusterIP      10.110.60.221    <none>          9080/TCP                                                          3d1h
default        hostnames               ClusterIP      10.111.190.216   <none>          80/TCP                                                            3d2h
default        kubernetes              ClusterIP      10.96.0.1        <none>          443/TCP                                                           4d
default        productpage             ClusterIP      10.98.222.204    <none>          9080/TCP                                                          3d1h
default        ratings                 ClusterIP      10.104.86.58     <none>          9080/TCP                                                          3d1h
default        reviews                 ClusterIP      10.101.246.133   <none>          9080/TCP                                                          3d1h
istio-system   istio-eastwestgateway   LoadBalancer   10.99.130.139    192.168.3.252   15021:31544/TCP,15443:32575/TCP,15012:30747/TCP,15017:30099/TCP   3d
istio-system   istio-ingressgateway    LoadBalancer   10.96.196.70     192.168.3.251   15021:30035/TCP,80:31245/TCP,443:31639/TCP                        3d23h
istio-system   istiod                  ClusterIP      10.107.246.136   <none>          15010/TCP,15012/TCP,443/TCP,15014/TCP                             3d23h
kube-system    kube-dns                ClusterIP      10.96.0.10       <none>          53/UDP,53/TCP,9153/TCP                                            4d
sample         helloworld              ClusterIP      10.108.34.35     <none>          5000/TCP                                                          6m52s
vm-ns          redis                   ClusterIP      10.104.5.158     <none>          6379/TCP                                                          5h21m
root@sxf-virtual-machine:~/istio-1.10.3# curl 10.108.34.35:5000/hello
Hello version: v2, instance: helloworld-v2-54df5f84b-5crmx
root@sxf-virtual-machine:~/istio-1.10.3# more /etc/hosts
127.0.0.1 localhost
127.0.1.1 sxf-virtual-machine
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
root@sxf-virtual-machine:~/istio-1.10.3# more /etc/resolv.conf
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.
nameserver 127.0.0.53
options edns0 ndots:5
search svc.cluster.local cluster.local
VM Node
root@vm1:/home/sxf# curl helloworld.sample.svc:5000/hello
upstream connect error or disconnect/reset before headers. reset reason: connection failure
sxf@vm1:~$ nslookup helloworld.sample.svc
Server: 127.0.0.53
Address: 127.0.0.53#53
Name: helloworld.sample.svc
Address: 10.108.34.35
root@vm1:/home/sxf# more /etc/hosts
127.0.0.1 localhost
127.0.1.1 vm1
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.3.252 istiod.istio-system.svc
root@vm1:/home/sxf# more /etc/resolv.conf
nameserver 127.0.0.53
options edns0
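The next thing I intend to check is whether the VM can actually reach the east-west gateway's external IP (192.168.3.252 in the output above) on the ports the sidecar needs; roughly like this, assuming nc and openssl are installed on the VM and istioctl on the master:

# From the VM: raw TCP reachability of the MetalLB IP on 15012 (XDS/CA) and 15443 (east-west mTLS)
nc -vz 192.168.3.252 15012
nc -vz 192.168.3.252 15443

# From the VM: does a TLS handshake to istiod complete through the gateway?
openssl s_client -connect 192.168.3.252:15012 -servername istiod.istio-system.svc </dev/null

# From the master node: is the VM's proxy listed as connected to istiod?
istioctl proxy-status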