Compared with iptables, where rules are matched by linear traversal and lookups slow down as the rule set grows with more Services, IPVS uses hash tables and scales better.
1. The default proxy mode for Service forwarding is iptables; this can be verified by looking at the kube-proxy pods.
[root@k8s-master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-97769f7c7-xh7j7 1/1 Running 18 27d
calico-node-58ck4 1/1 Running 20 27d
calico-node-q6qxm 1/1 Running 17 27d
calico-node-vmmv5 1/1 Running 17 27d
coredns-7f89b7bc75-mgtnj 1/1 Running 17 27d
coredns-7f89b7bc75-wkrjq 1/1 Running 17 27d
etcd-k8s-master 1/1 Running 18 27d
kube-apiserver-k8s-master 1/1 Running 21 27d
kube-controller-manager-k8s-master 1/1 Running 14 15d
kube-proxy-clmzc 1/1 Running 17 27d
kube-proxy-f4mk4 1/1 Running 17 27d
kube-proxy-s7n6w 1/1 Running 18 27d
kube-scheduler-k8s-master 1/1 Running 14 15d
metrics-server-84f9866fdf-9l6rt 1/1 Running 16 16d
[root@k8s-master ~]#
No mode is specified in the kube-proxy configuration, so the default, iptables, is used.
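To confirm this, one can check the mode field in the kube-proxy ConfigMap; an empty value means kube-proxy falls back to iptables (the command below is a sketch, using the kubeadm default ConfigMap name):
kubectl get configmap kube-proxy -n kube-system -o yaml | grep 'mode:'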
2. Switch the Service proxy mode to ipvs
Run kubectl edit configmap kube-proxy -n kube-system to edit the ConfigMap and change mode to ipvs, as sketched below.
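For reference, after the edit the config.conf section of the ConfigMap should contain roughly the following (only the relevant field is shown; this is a sketch, not the full configuration):
mode: "ipvs"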
The kube-proxy pods then need to be restarted; take kube-proxy-clmzc as an example.
[root@k8s-master ~]# kubectl get pod -o wide -n kube-system | grep kube-proxy
kube-proxy-clmzc 1/1 Running 17 27d 192.168.231.122 k8s-node1 <none> <none>
kube-proxy-f4mk4 1/1 Running 17 27d 192.168.231.123 k8s-node2 <none> <none>
kube-proxy-s7n6w 1/1 Running 18 27d 192.168.231.121 k8s-master <none> <none>
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl delete pod kube-proxy-clmzc -n kube-system
pod "kube-proxy-clmzc" deleted
[root@k8s-master ~]# kubectl get pod -o wide -n kube-system | grep kube-proxy
kube-proxy-f4mk4 1/1 Running 17 27d 192.168.231.123 k8s-node2 <none> <none>
kube-proxy-frvx7 1/1 Running 0 19s 192.168.231.122 k8s-node1 <none> <none>
kube-proxy-s7n6w 1/1 Running 18 27d 192.168.231.121 k8s-master <none> <none>
[root@k8s-master ~]#
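As an alternative to deleting pods one by one, all kube-proxy pods can be restarted in a single step (assuming a kubeadm-style cluster where kube-proxy runs as a DaemonSet and kubectl 1.15 or newer):
kubectl rollout restart daemonset kube-proxy -n kube-system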
3. Verify with ipvsadm -L -n
The restarted pod is on 192.168.231.122, i.e. k8s-node1; if ipvsadm is not installed there, install it first with yum install ipvsadm -y.
The Service's cluster IP is 10.104.247.10, its nodePort is 31947, and its endpoints are 10.244.169.161:80, 10.244.169.163:80 and 10.244.169.165:80.
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 27d
web NodePort 10.104.247.10 <none> 80:31947/TCP 101m
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get ep
NAME ENDPOINTS AGE
kubernetes 192.168.231.121:6443 27d
web 10.244.169.161:80,10.244.169.163:80,10.244.169.165:80 102m
[root@k8s-master ~]#
Running ipvsadm -L -n on k8s-node1 shows the forwarding rules for both in-cluster access (the ClusterIP) and external access (the NodePort); the scheduling algorithm is the default, rr.
[root@k8s-node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
...
TCP 10.104.247.10:80 rr
-> 10.244.169.161:80 Masq 1 0 0
-> 10.244.169.163:80 Masq 1 0 0
-> 10.244.169.165:80 Masq 1 0 0
...
TCP 192.168.122.1:31947 rr
-> 10.244.169.161:80 Masq 1 0 0
-> 10.244.169.163:80 Masq 1 0 0
-> 10.244.169.165:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.169.149:53 Masq 1 0 0
-> 10.244.169.156:53 Masq 1 0 0
[root@k8s-node1 ~]#
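To see the rr scheduling in action, one can hit the NodePort a few times and then look at the IPVS counters; this quick check is not part of the original session and assumes the web pods answer plain HTTP on port 80:
for i in $(seq 1 6); do curl -s -o /dev/null http://192.168.231.122:31947; done
ipvsadm -L -n --stats
The connection counts should be spread roughly evenly across the three 10.244.169.x endpoints.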
Note: some IPVS scheduling algorithms: rr (round-robin), lc (least connection), dh (destination hashing), sh (source hashing), sed (shortest expected delay), nq (never queue), and so on.
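The algorithm is selected through the ipvs.scheduler field of the same kube-proxy ConfigMap; for example, switching from the default rr to source hashing would look roughly like this (only the relevant fields shown, a sketch rather than a full config), followed by another restart of the kube-proxy pods:
ipvs:
  scheduler: "sh"
mode: "ipvs"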
4. Restore the environment
Run kubectl edit configmap kube-proxy -n kube-system again and change mode back to its previous (empty) value.
Then restart the kube-proxy pod on k8s-node1.
[root@k8s-master ~]# kubectl edit configmap kube-proxy -n kube-system
configmap/kube-proxy edited
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl get pod -o wide -n kube-system | grep kube-proxy
kube-proxy-f4mk4 1/1 Running 17 27d 192.168.231.123 k8s-node2 <none> <none>
kube-proxy-frvx7 1/1 Running 0 26m 192.168.231.122 k8s-node1 <none> <none>
kube-proxy-s7n6w 1/1 Running 18 27d 192.168.231.121 k8s-master <none> <none>
[root@k8s-master ~]# kubectl delete pod kube-proxy-frvx7 -n kube-system
pod "kube-proxy-frvx7" deleted
[root@k8s-master ~]# kubectl get pod -o wide -n kube-system | grep kube-proxy
kube-proxy-f4mk4 1/1 Running 17 27d 192.168.231.123 k8s-node2 <none> <none>
kube-proxy-s7n6w 1/1 Running 18 27d 192.168.231.121 k8s-master <none> <none>
kube-proxy-xt644 1/1 Running 0 40s 192.168.231.122 k8s-node1 <none> <none>
[root@k8s-master ~]#
Running ipvsadm -L -n on k8s-node1 still shows the old rules, so run the following commands on k8s-node1 to clean them up:
ip link del kube-ipvs0
ipvsadm -C
[root@k8s-node1 ~]# ip link
...
11: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default
link/ether 5e:9b:01:a5:2a:e0 brd ff:ff:ff:ff:ff:ff
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# ip link del kube-ipvs0
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# ipvsadm -C
[root@k8s-node1 ~]#
[root@k8s-node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
[root@k8s-node1 ~]#
Use kubectl logs kube-proxy-xt644 -n kube-system to confirm that the proxy mode in use is the iptables proxy again.
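A quick way to find the relevant line is to filter the log; the exact wording of the message may differ between Kubernetes versions, but it should read something like "Using iptables Proxier":
kubectl logs kube-proxy-xt644 -n kube-system | grep -i proxier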