Day 3: Advanced Kubernetes Practice
This chapter covers advanced Kubernetes topics, including cluster scheduling, CNI plugins, the authentication and authorization security model, integration with distributed storage, and the use of Helm, giving students a deeper understanding of Kubernetes' core concepts.
- Accessing data in etcd
- kube-scheduler scheduling policies in practice
- The k8s cluster network model
  - CNI introduction and cluster network plugin selection
  - How the Flannel network model is implemented
    - the vxlan backend
    - the host-gw backend
- Cluster authentication and authorization
  - The APIServer security control model
  - kubectl authentication and authorization
  - RBAC
  - kubelet authentication and authorization
  - Service Account
- Managing complex application deployments with Helm
  - How Helm works
  - Helm template development
  - Hands-on: deploying a Harbor registry with Helm
- Integrating kubernetes with distributed storage
  - Introduction to PV and PVC
  - Using CephFS as the distributed storage backend for a k8s cluster
  - Managing dynamic volumes with StorageClass
  - Hands-on: deploying a stateful application on distributed storage
- Chapter review and recap
Common etcd operations
Copy the etcdctl command-line tool out of the etcd container:
$ docker exec -ti etcd_container which etcdctl
$ docker cp etcd_container:/usr/local/bin/etcdctl /usr/bin/etcdctl
Note:
/etc/kubernetes/manifests is the directory where k8s keeps its static pod manifests; the etcd defined there stores the data of the entire k8s cluster.
[root@k8s-master ~]
total 16
-rw------- 1 root root 2104 Jul 9 22:55 etcd.yaml
-rw------- 1 root root 3161 Jul 9 22:55 kube-apiserver.yaml
-rw------- 1 root root 2858 Jul 9 22:55 kube-controller-manager.yaml
-rw------- 1 root root 1413 Jul 9 22:55 kube-scheduler.yaml
[root@k8s-master ~]
cbec05823ad2 0369cf4303ff "etcd --advertise-cl…" 26 hours ago Up 26 hours k8s_etcd_etcd-k8s-master_kube-system_ffeb60a5fc0a9dc352dceb8c62378b9c_1
b5b1b6ec7116 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 26 hours ago Up 26 hours k8s_POD_etcd-k8s-master_kube-system_ffeb60a5fc0a9dc352dceb8c62378b9c_1
[root@k8s-master ~]
...
/usr/local/bin/etcdctl
[root@k8s-master ~]
[root@k8s-master week3]
etcdctl version: 3.4.13
API version: 3.4
List the etcd cluster's member nodes:
$ export ETCDCTL_API=3
[root@k8s-master ~]
total 32
-rw-r--r-- 1 root root 1058 Jul 17 19:05 ca.crt
-rw------- 1 root root 1679 Jul 17 19:05 ca.key
-rw-r--r-- 1 root root 1139 Jul 17 19:05 healthcheck-client.crt
-rw------- 1 root root 1679 Jul 17 19:05 healthcheck-client.key
-rw-r--r-- 1 root root 1184 Jul 17 19:05 peer.crt
-rw------- 1 root root 1675 Jul 17 19:05 peer.key
-rw-r--r-- 1 root root 1184 Jul 17 19:05 server.crt
-rw------- 1 root root 1675 Jul 17 19:05 server.key
/etc/kubernetes/manifests/etcd.yaml
$ etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member list -w table
$ alias etcdctl='etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key'
$ etcdctl member list -w table
member list: list the cluster members
-w table: print the output as a table
Note: in a highly-available k8s cluster (stacked etcd), the member list shows one member per master node.
Note:
/etc/profile: effective for all users — global environment variables
~/.bashrc: executed by every user who runs a bash shell
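To make the setup persistent, the ETCDCTL_API export and the alias (with the exact flags used below) can be added to ~/.bashrc — a small convenience sketch:
$ cat >> ~/.bashrc <<'EOF'
export ETCDCTL_API=3
alias etcdctl='etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key'
EOF
$ source ~/.bashrc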
Note:
[root@k8s-master ~]
[root@k8s-master ~]
[root@k8s-master ~]
- command:
- etcd
- --advertise-client-urls=https://10.0.1.5:2379
- --cert-file=/etc/kubernetes/pki/etcd/server.crt
- --client-cert-auth=true
- --data-dir=/var/lib/etcd
- --initial-advertise-peer-urls=https://10.0.1.5:2380
- --initial-cluster=k8s-master=https://10.0.1.5:2380
- --key-file=/etc/kubernetes/pki/etcd/server.key
- --listen-client-urls=https://127.0.0.1:2379,https://10.0.1.5:2379
- --listen-metrics-urls=http://127.0.0.1:2381
- --listen-peer-urls=https://10.0.1.5:2380
- --name=k8s-master
- --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
- --peer-client-cert-auth=true
- --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
- --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
- --snapshot-count=10000
- --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
image: registry.aliyuncs.com/google_containers/etcd:3.4.13-0
imagePullPolicy: IfNotPresent
[root@k8s-master ~]
+------------------+---------+------------+-----------------------+-----------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+------------+-----------------------+-----------------------+------------+
| 8f4f0858fdc2d498 | started | k8s-master | https://10.0.1.5:2380 | https://10.0.1.5:2379 | false |
+------------------+---------+------------+-----------------------+-----------------------+------------+
Parameter explanation:
member list: list the cluster members
-w table: print the output as a table
[root@k8s-master ~]
[root@k8s-master ~]
+------------------+---------+------------+-----------------------+-----------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+------------+-----------------------+-----------------------+------------+
| 8f4f0858fdc2d498 | started | k8s-master | https://10.0.1.5:2380 | https://10.0.1.5:2379 | false |
+------------------+---------+------------+-----------------------+-----------------------+------------+
Check the status of the etcd cluster nodes:
$ etcdctl endpoint status -w table
$ etcdctl endpoint health -w table
Note:
[root@k8s-master ~]
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://[127.0.0.1]:2379 | 8f4f0858fdc2d498 | 3.4.13 | 4.1 MB | true | false | 3 | 70450 | 70450 | |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
ENDPOINT: the local endpoint
ID: the member ID
VERSION: the etcd version
DB SIZE: the database size; it should be identical on every node — inconsistent sizes indicate a problem
IS LEADER: true marks the leader; the other members are followers
RAFT TERM: the raft election term
RAFT INDEX: the raft log index
[root@k8s-master ~]
+--------------------------+--------+------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+--------------------------+--------+------------+-------+
| https://[127.0.0.1]:2379 | true | 7.520056ms | |
+--------------------------+--------+------------+-------+
HEALTH: the health status; true means healthy
TOOK: how long the health check took
Set a key:
$ etcdctl put luffy 1
$ etcdctl get luffy
List all keys:
$ etcdctl get / --prefix --keys-only
View the data for a specific key:
$ etcdctl get /registry/pods/jenkins/sonar-postgres-7fc5d748b6-gtmsb
list-watch:
$ etcdctl watch /luffy --prefix
$ etcdctl put /luffy/key1 val1
Note: the watch responds almost immediately.
[root@k8s-master week3]
OK
[root@k8s-master week3]
PUT
/luffy/key3
val3
Add a cron job to take data snapshots (important!):
$ etcdctl snapshot save `hostname`-etcd_`date +%Y%m%d%H%M`.db
$ ll k8s-master-etcd_202106301901.db
$ etcdctl endpoint status -w table
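A minimal cron sketch for the scheduled snapshot (the script path, backup directory, and 02:00 schedule are illustrative; the etcdctl flags match the alias defined earlier):
$ cat > /usr/local/bin/etcd-backup.sh <<'EOF'
#!/bin/bash
# Take a timestamped etcd snapshot into /backup
export ETCDCTL_API=3
/usr/bin/etcdctl --endpoints=https://[127.0.0.1]:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  snapshot save /backup/$(hostname)-etcd_$(date +%Y%m%d%H%M).db
EOF
$ chmod +x /usr/local/bin/etcd-backup.sh && mkdir -p /backup
$ echo '0 2 * * * root /usr/local/bin/etcd-backup.sh' >> /etc/crontab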
Restore from a snapshot:
- Stop etcd and the apiserver
- Move the current data directory out of the way: $ mv /var/lib/etcd/ /tmp
- Restore the snapshot: $ etcdctl snapshot restore `hostname`-etcd_`date +%Y%m%d%H%M`.db --data-dir=/var/lib/etcd/
Note: a restore can only succeed if all of the following hold:
- the cluster IPs and hostnames are unchanged
- a backup snapshot exists
- the certificates have also been preserved
- Cluster recovery: https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md
Note:
[root@k8s-master week3]
Snapshot saved at k8s-master-etcd_202107180731.db
A restore can only succeed if all of the following hold:
- the cluster IPs and hostnames are unchanged
- a backup snapshot exists
- the certificates have also been preserved
The certificates and related files live at the following paths:
[root@k8s-master week3]
total 32
-rw-r--r-- 1 root root 1058 Jul 17 19:05 ca.crt
-rw------- 1 root root 1679 Jul 17 19:05 ca.key
-rw-r--r-- 1 root root 1139 Jul 17 19:05 healthcheck-client.crt
-rw------- 1 root root 1679 Jul 17 19:05 healthcheck-client.key
-rw-r--r-- 1 root root 1184 Jul 17 19:05 peer.crt
-rw------- 1 root root 1675 Jul 17 19:05 peer.key
-rw-r--r-- 1 root root 1184 Jul 17 19:05 server.crt
-rw------- 1 root root 1675 Jul 17 19:05 server.key
1. Stop etcd and the apiserver
2. Move the current data directory out of the way
[root@k8s-master week3]
3. Restore the snapshot
[root@k8s-master week3]
total 4.8M
-rw------- 1 root root 4.8M Jul 18 07:31 k8s-master-etcd_202107180731.db
[root@k8s-master week3]
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://[127.0.0.1]:2379 | 8e9e05c52164694d | 3.4.13 | 5.0 MB | true | false | 2 | 88030 | 88030 | |
+--------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Summary
- Common etcd operations: setting keys and getting key values
- Two commands to check the status of the etcd cluster nodes:
  etcdctl endpoint status -w table
  etcdctl endpoint health -w table
- Snapshotting and restoring etcd data for backup
- The etcdctl command
- A resource that refuses to be deleted can be deleted directly inside etcd
[root@k8s-master manifests]
/registry/namespaces/default
/registry/namespaces/kube-node-lease
/registry/namespaces/kube-public
/registry/namespaces/kube-system
/registry/namespaces/kubernetes-dashboard
/registry/namespaces/luffy
[root@k8s-master manifests]
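For example, a namespace stuck in Terminating could be removed by deleting its key directly — a last-resort sketch (the luffy key follows the /registry layout listed above; take a snapshot first):
$ etcdctl get /registry/namespaces/luffy --keys-only   # confirm the key exists
$ etcdctl del /registry/namespaces/luffy               # remove the object from etcd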
Kubernetes Scheduling
Why control how Pods are scheduled?
- Some machines in the cluster have higher specs (SSDs, better memory, etc.), and we want core services (such as databases) to run on them
- Two services exchange network traffic very frequently, and we would prefer them on the same machine
- …
The Kubernetes Scheduler's job is to bind each pending Pod to a suitable Worker Node in the cluster according to its scheduling algorithms and policies, and to write the binding into etcd. The kubelet on the target Node then learns of the binding event produced by the Scheduler by watching the API Server, fetches the Pod information, pulls the image, and starts the container.
The scheduling process
The Scheduler's flow consists of two steps, predicates (filtering) and priorities (scoring):
- Predicates: k8s iterates over all Nodes in the cluster and filters out those meeting the requirements as candidates
- Priorities: k8s scores the candidate Nodes
After filtering and scoring, k8s chooses the Node with the highest score to run the Pod; if several Nodes tie for the highest score, the Scheduler picks one of them at random.
Predicates:
Priorities:
NodeSelector
Labels are a very important concept in kubernetes: users can manage cluster resources very flexibly with labels, and Pod scheduling can target specific nodes based on node labels.
View node labels:
$ kubectl get nodes --show-labels
Note:
kubectl label pods can likewise attach labels to pods; if unsure, run it with -h to see examples — the syntax mirrors kubectl label nodes.
Label a node:
$ kubectl label node k8s-master disktype=ssd
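Labels can also be overwritten or removed with the same command (standard kubectl syntax):
$ kubectl label node k8s-master disktype=sas --overwrite   # change an existing label
$ kubectl label node k8s-master disktype-                  # remove the label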
Once a node carries the relevant labels, they can be used at scheduling time: just add a nodeSelector field under spec, containing the labels the target node must have.
...
spec:
hostNetwork: true
volumes:
- name: mysql-data
hostPath:
path: /opt/mysql/data
nodeSelector:
component: mysql
containers:
- name: mysql
image: 172.21.51.143:5000/demo/mysql:5.7
...
nodeAffinity
Note: the pod chooses nodes based on node labels.
Node affinity is more flexible than the nodeSelector above: it supports simple logical combinations rather than only exact matches. It comes in two flavors, hard policies and soft policies.
requiredDuringSchedulingIgnoredDuringExecution: the hard policy — if no node satisfies the condition, the scheduler keeps retrying until one does. In short: my requirement must be met, or the Pod will not be scheduled.
preferredDuringSchedulingIgnoredDuringExecution: the soft policy — if no node satisfies the requirement, the Pod ignores this rule and scheduling proceeds anyway. In short: meeting the condition is preferred, but the rule is skipped if it cannot be met.
...
spec:
containers:
- name: demo
image: 172.21.51.143:5000/myblog:v1
ports:
- containerPort: 8002
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: NotIn
values:
- 192.168.136.128
- 192.168.136.132
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 1
preference:
matchExpressions:
- key: disktype
operator: In
values:
- ssd
- sas
...
The matching logic here is that the label's value is in a given list. Kubernetes currently provides the following operators:
- In: the label's value is in a given list
- NotIn: the label's value is not in a given list
- Gt: the label's value is greater than a given value
- Lt: the label's value is less than a given value
- Exists: the label exists
- DoesNotExist: the label does not exist
If nodeSelectorTerms contains multiple entries, satisfying any one of them is sufficient; if matchExpressions contains multiple entries, all of them must be satisfied for the Pod to be scheduled.
Pod affinity and anti-affinity
Note: the pod chooses placement based on the labels of other pods.
Scenario:
myblog runs multiple replicas, and we want them spread across the cluster's available nodes as much as possible.
Analysis: to spread myblog's pods across the cluster, we can use pod anti-affinity to tell the scheduler that when a node already holds a myblog pod, it should apply one of the following policies as appropriate:
- never schedule two myblog replicas onto the same node
- allow two myblog replicas on the same node, but prefer to spread the pods across the cluster
...
spec:
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- myblog
topologyKey: kubernetes.io/hostname
containers:
...
...
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app
operator: In
values:
- myblog
topologyKey: kubernetes.io/hostname
containers:
...
$ kubectl -n luffy edit deployments.apps myblog
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- labelSelector:
matchExpressions:
- key: app
operator: In
values:
- myblog
topologyKey: kubernetes.io/hostname
[root@k8s-master week3]
NAME READY STATUS RESTARTS AGE
default-mem-demo 1/1 Running 1 36h
myblog-65847cf6ff-8s75f 1/1 Running 14 2d1h
myblog-65847cf6ff-f5rv2 1/1 Running 10 2d9h
myblog-65847cf6ff-tz46d 1/1 Running 11 2d9h
mysql-58d95d459c-jj4sx 1/1 Running 1 2d6h
$ kubectl -n luffy scale deployment myblog --replicas=3
[root@k8s-master myblog]
deployment.apps/myblog edited
[root@k8s-master myblog]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-65758f6854-knh9f 1/1 Running 0 2m20s 10.244.1.5 k8s-slave1 <none> <none>
myblog-65758f6854-zzv4m 1/1 Running 0 118s 10.244.2.13 k8s-slave2 <none> <none>
mysql-58d95d459c-tkk5q 1/1 Running 0 23h 10.244.1.4 k8s-slave1 <none> <none>
[root@k8s-master myblog]
deployment.apps/myblog scaled
[root@k8s-master myblog]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-65758f6854-7l85h 1/1 Running 0 39s 10.244.0.7 k8s-master <none> <none>
myblog-65758f6854-knh9f 1/1 Running 0 5m6s 10.244.1.5 k8s-slave1 <none> <none>
myblog-65758f6854-zzv4m 1/1 Running 0 4m44s 10.244.2.13 k8s-slave2 <none> <none>
mysql-58d95d459c-tkk5q 1/1 Running 0 23h 10.244.1.4 k8s-slave1 <none> <none>
[root@k8s-master myblog]
deployment.apps/myblog scaled
[root@k8s-master myblog]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-65758f6854-7l85h 1/1 Running 0 3m8s 10.244.0.7 k8s-master <none> <none>
myblog-65758f6854-knh9f 1/1 Running 0 7m35s 10.244.1.5 k8s-slave1 <none> <none>
myblog-65758f6854-nmp4n 0/1 Pending 0 8s <none> <none> <none> <none>
myblog-65758f6854-zzv4m 1/1 Running 0 7m13s 10.244.2.13 k8s-slave2 <none> <none>
mysql-58d95d459c-tkk5q 1/1 Running 0 23h 10.244.1.4 k8s-slave1 <none> <none>
[root@k8s-master myblog]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 111s default-scheduler 0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match pod anti-affinity rules.
Warning FailedScheduling 111s default-scheduler 0/3 nodes are available: 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't match pod anti-affinity rules.
[root@k8s-master myblog]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-56968c6d54-cd7xw 1/1 Running 0 111s 10.244.0.11 k8s-master <none> <none>
myblog-56968c6d54-vlmgh 1/1 Running 0 2m15s 10.244.0.10 k8s-master <none> <none>
myblog-596b7f9b8b-pfr5z 0/1 Running 0 1s 10.244.2.14 k8s-slave2 <none> <none>
mysql-58d95d459c-tkk5q 1/1 Running 0 23h 10.244.1.4 k8s-slave1 <none> <none>
[root@k8s-master myblog]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-596b7f9b8b-pfr5z 1/1 Running 0 65s 10.244.2.14 k8s-slave2 <none> <none>
myblog-596b7f9b8b-zt4bx 1/1 Running 0 45s 10.244.1.6 k8s-slave1 <none> <none>
mysql-58d95d459c-tkk5q 1/1 Running 0 23h 10.244.1.4 k8s-slave1 <none> <none>
[root@k8s-master myblog]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-596b7f9b8b-pfr5z 1/1 Running 0 4m4s 10.244.2.14 k8s-slave2 <none> <none>
myblog-596b7f9b8b-tsmrx 1/1 Running 0 28s 10.244.0.12 k8s-master <none> <none>
myblog-596b7f9b8b-zt4bx 1/1 Running 0 3m44s 10.244.1.6 k8s-slave1 <none> <none>
mysql-58d95d459c-tkk5q 1/1 Running 0 23h 10.244.1.4 k8s-slave1 <none> <none>
[root@k8s-master myblog]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-596b7f9b8b-gx4pj 1/1 Running 0 29s 10.244.2.15 k8s-slave2 <none> <none>
myblog-596b7f9b8b-pfr5z 1/1 Running 0 5m52s 10.244.2.14 k8s-slave2 <none> <none>
myblog-596b7f9b8b-tsmrx 1/1 Running 0 2m16s 10.244.0.12 k8s-master <none> <none>
myblog-596b7f9b8b-zt4bx 1/1 Running 0 5m32s 10.244.1.6 k8s-slave1 <none> <none>
mysql-58d95d459c-tkk5q 1/1 Running 0 23h 10.244.1.4 k8s-slave1 <none> <none>
[root@k8s-master myblog]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-596b7f9b8b-8tg9s 1/1 Running 0 36s 10.244.1.7 k8s-slave1 <none> <none>
myblog-596b7f9b8b-gx4pj 1/1 Running 0 2m26s 10.244.2.15 k8s-slave2 <none> <none>
myblog-596b7f9b8b-pfr5z 1/1 Running 0 7m49s 10.244.2.14 k8s-slave2 <none> <none>
myblog-596b7f9b8b-tsmrx 1/1 Running 0 4m13s 10.244.0.12 k8s-master <none> <none>
myblog-596b7f9b8b-zt4bx 1/1 Running 0 7m29s 10.244.1.6 k8s-slave1 <none> <none>
mysql-58d95d459c-tkk5q 1/1 Running 0 23h 10.244.1.4 k8s-slave1 <none> <none>
In summary: by default pods are not scheduled onto the master node first, but scheduling is otherwise spread evenly.
[root@k8s-master myblog]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-596b7f9b8b-8tg9s 1/1 Running 0 115s 10.244.1.7 k8s-slave1 <none> <none>
myblog-596b7f9b8b-gx4pj 1/1 Running 0 3m45s 10.244.2.15 k8s-slave2 <none> <none>
myblog-596b7f9b8b-klppp 1/1 Running 0 47s 10.244.0.13 k8s-master <none> <none>
myblog-596b7f9b8b-pfr5z 1/1 Running 0 9m8s 10.244.2.14 k8s-slave2 <none> <none>
myblog-596b7f9b8b-tsmrx 1/1 Running 0 5m32s 10.244.0.12 k8s-master <none> <none>
myblog-596b7f9b8b-zt4bx 1/1 Running 0 8m48s 10.244.1.6 k8s-slave1 <none> <none>
mysql-58d95d459c-tkk5q 1/1 Running 0 23h 10.244.1.4 k8s-slave1 <none> <none>
https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/
Stateful and stateless workloads
Stateless services: a new IP and hostname each time
Stateful services (StatefulSet): stable hostnames and persistent state
Taints and Tolerations
nodeAffinity, whether hard or soft, steers Pods toward preferred nodes. Taints are the opposite: once a node is marked with a Taint, no Pod will be scheduled onto it unless the Pod is marked as tolerating that taint.
A Taint is an attribute of a Node. Once a Node has a Taint, Kubernetes will not schedule Pods onto it. Correspondingly, Kubernetes gives Pods a Tolerations attribute: if a Pod tolerates the Node's taints, Kubernetes ignores those taints and may (but is not required to) schedule the Pod there.
Scenario one: in a private cloud, a workload uses GPUs for large-scale parallel computation. To guarantee performance, we want those servers dedicated to that workload and ordinary workloads kept off the GPU servers.
Scenario two: a user wants to reserve the Master nodes for Kubernetes system components, or reserve a group of nodes with special resources for certain Pods; taints are ideal here, since Pods are no longer scheduled onto tainted nodes. Tainting a node looks like this:
Set a taint:
$ kubectl taint node [node_name] key=value:[effect]
where [effect] is one of [ NoSchedule | PreferNoSchedule | NoExecute ]:
NoSchedule: Pods will never be scheduled here.
PreferNoSchedule: try to avoid scheduling here.
NoExecute: not only are new Pods not scheduled, existing Pods on the Node are evicted.
Example: kubectl taint node k8s-slave1 smoke=true:NoSchedule
Remove a taint:
Remove a given key with a specific effect:
kubectl taint nodes [node_name] key:[effect]-
Remove all effects for a given key:
kubectl taint nodes node_name key-
Examples:
kubectl taint node k8s-master smoke=true:NoSchedule
kubectl taint node k8s-master smoke:NoExecute-
kubectl taint node k8s-master smoke-
Taint demo:
$ kubectl taint node k8s-master gamble=true:NoSchedule
$ kubectl taint node k8s-slave1 drunk=true:NoSchedule
$ kubectl taint node k8s-slave2 smoke=true:NoSchedule
$ kuebctl -n luffy scale deploy myblog --replicas=3
$ kubectl -n luffy get po -w
List the taints:
Note:
[root@k8s-master week3]
node/k8s-master tainted
[root@k8s-master week3]
node/k8s-slave1 tainted
[root@k8s-master week3]
node/k8s-slave2 tainted
How to list taints (jq needs to be installed first):
wget http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
rpm -ivh epel-release-latest-7.noarch.rpm
yum install -y jq
kubectl get nodes -o json | jq '.items[].spec'
kubectl get nodes -o json | jq '.items[].spec.taints'
[root@k8s-master week3]
[
{
"effect": "NoSchedule",
"key": "gamble",
"value": "true"
}
]
[
{
"effect": "NoSchedule",
"key": "drunk",
"value": "true"
}
]
[
{
"effect": "NoSchedule",
"key": "smoke",
"value": "true"
}
]
$ kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule
[root@k8s-master week3]
deployment.apps/myblog scaled
[root@k8s-master week3]
NAME READY STATUS RESTARTS AGE
myblog-5d9b76df88-b2c8b 0/1 Pending 0 20m
myblog-5d9b76df88-f7tp7 0/1 Pending 0 20m
myblog-6694bccb48-jsmnh 1/1 Running 0 78m
myblog-6694bccb48-tqnk8 1/1 Running 0 79m
mysql-7446f4dc7b-2wqs8 1/1 Running 1 12h
[root@k8s-master week3]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 10m default-scheduler 0/3 nodes are available: 1 node(s) had taint {drunk: true}, that the pod didn't tolerate, 1 node(s) had taint {gamble: true}, that the pod didn't tolerate, 1 node(s) had taint {smoke: true}, that the pod didn't tolerate.
Example of a Pod tolerating taints: myblog/deployment/deploy-myblog-taint.yaml
...
spec:
containers:
- name: demo
image: 172.21.51.143:5000/myblog:v1
tolerations:
- key: "smoke"
operator: "Equal"
value: "true"
effect: "NoSchedule"
- key: "drunk"
operator: "Exists"
$ kubectl apply -f deploy-myblog-taint.yaml
spec:
containers:
- name: demo
image: 172.21.51.143:5000/myblog:v1
tolerations:
- operator: "Exists"
NoExecute
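For the NoExecute effect, a toleration may additionally carry tolerationSeconds, which bounds how long an already-running pod may stay on the node after the taint appears — a sketch using the standard API fields:
tolerations:
- key: "smoke"
  operator: "Equal"
  value: "true"
  effect: "NoExecute"
  tolerationSeconds: 3600   # evicted one hour after the taint is added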
Cordon
$ kubectl cordon k8s-slave2
$ kubectl drain k8s-slave2
Note:
kubectl cordon nodename     # mark the node unschedulable
kubectl uncordon nodename   # mark the node schedulable again
$ kubectl drain --ignore-daemonsets --delete-local-data nodename
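A typical node-maintenance sequence built from these commands (standard kubectl usage):
$ kubectl drain k8s-slave2 --ignore-daemonsets --delete-local-data   # cordon the node and evict its pods
# ... perform maintenance on the node ...
$ kubectl uncordon k8s-slave2                                        # allow scheduling again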
Check the command help:
[root@k8s-master week3]
cordon Mark node as unschedulable
uncordon Mark node as schedulable
[root@k8s-master week3]
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 18h v1.19.8
k8s-slave1 Ready <none> 18h v1.19.8
k8s-slave2 Ready <none> 18h v1.19.8
[root@k8s-master week3]
node/k8s-slave2 cordoned
[root@k8s-master week3]
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 18h v1.19.8
k8s-slave1 Ready <none> 18h v1.19.8
k8s-slave2 Ready,SchedulingDisabled <none> 18h v1.19.8
[root@k8s-master week3]
node/k8s-slave2 uncordoned
[root@k8s-master week3]
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 18h v1.19.8
k8s-slave1 Ready <none> 18h v1.19.8
k8s-slave2 Ready <none> 18h v1.19.8
Check whether the taints are present:
[root@k8s-master week3]
Taints: gamble=true:NoSchedule
Taints: drunk=true:NoSchedule
Taints: smoke=true:NoSchedule
Summary
This section covered k8s scheduling policies:
- The scheduler's flow consists of two steps: predicates (filtering) and priorities (scoring)
- Mechanisms that influence k8s scheduling:
  - cordon: mark a node as unschedulable
  - uncordon: mark a node as schedulable again
  - drain: evict the pods already on the node (cordoning it first)
  - taint: mark a node with a taint
    - NoSchedule: never schedule here
    - PreferNoSchedule: try not to schedule here
    - NoExecute: don't schedule here, and evict the node's existing pods
- Syntax:
  Add a taint:
  kubectl taint node [node_name] key=value:[effect]
  Remove a given key with a specific effect (append a trailing hyphen):
  kubectl taint nodes [node_name] key:[effect]-
  Remove all effects for a given key:
  kubectl taint nodes node_name key-
  List the taints on all nodes:
  kubectl get nodes -o json | jq '.items[].spec.taints'
- Adding, removing, updating, and querying labels
- NodeSelector: once nodes are labeled, those labels can be used at scheduling time
- Node affinity:
  - soft policy: satisfy my condition if you can; if not, ignore it
  - hard policy: my condition must be satisfied, or the pod cannot be scheduled
- Toleration settings:
  - operator Exists (with no key/value) tolerates all taints
  - operator Equal tolerates a specific taint (key=value)
How Kubernetes Cluster Networking Is Implemented
CNI introduction and cluster network plugin selection
The Container Network Interface (CNI) implements Pod network communication and management for a kubernetes cluster. It comprises:
- The CNI plugin, responsible for configuring the container's network, with two basic interfaces:
  - Configure a network: AddNetwork(net NetworkConfig, rt RuntimeConf) (types.Result, error)
  - Tear down a network: DelNetwork(net NetworkConfig, rt RuntimeConf) error
- The IPAM plugin, responsible for allocating IP addresses to containers; the main implementations are host-local and dhcp.
Support for these two plugin types lets k8s networking accommodate all kinds of management models, and a large number of implementations have appeared in the community, flannel and calico being among the most popular.
Once kubernetes is configured with a CNI network plugin, the container network creation flow is:
- kubelet first creates the pause container and its network namespace
- it then invokes the network driver; since CNI is configured, the CNI code path is taken, and the CNI config directory /etc/cni/net.d is read
- the CNI driver invokes the specific CNI plugin per that config, as a binary call; the executables live in /opt/cni/bin
- the CNI plugin configures the pause container's network; the other containers in the pod share the pause container's network
The community's CNI implementations can be browsed at https://github.com/containernetworking/cni
General-purpose plugins: flannel, calico, etc. — simple to deploy and use
Others: choose based on your specific network environment and requirements, for example:
- On public-cloud machines, vendor-customized backends for the network plugin — AWS, Alibaba, and Tencent each have their own flannel backends, and there is also the AWS ECS CNI
- Private-cloud vendors, e.g. VMware NSX-T
- For raw network performance, MacVlan
A Close Look at the Flannel Network Model
flannel has several backend implementations:
Unless configured otherwise, the vxlan backend is used by default, which can be checked as follows:
$ kubectl -n kube-system exec kube-flannel-ds-amd64-cb7hs cat /etc/kube-flannel/net-conf.json
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
Note:
[root@k8s-master week3]
kube-flannel-ds-amd64-4tfxs 1/1 Running 12 15d
kube-flannel-ds-amd64-58d2h 1/1 Running 6 15d
kube-flannel-ds-amd64-sfsj2 1/1 Running 10 15d
[root@k8s-master week2]
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
[root@k8s-master week3]
total 4
-rw-r--r-- 1 root root 292 Jul 18 15:11 10-flannel.conflist
[root@k8s-master week3]
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
[root@k8s-master week3]
bandwidth dhcp flannel host-local loopback portmap sbr tuning
bridge firewall host-device ipvlan macvlan ptp static vlan
Introduction to vxlan and point-to-point communication
VXLAN (Virtual eXtensible Local Area Network) is an overlay technology that builds a virtual layer-2 network on top of a layer-3 network.
It is created on top of the existing IP network (layer 3); vxlan can be deployed anywhere layer-3 reachability exists (the hosts can reach each other over IP). On each endpoint a vtep handles encapsulating and decapsulating the vxlan protocol packets, i.e. wrapping the virtual packet in a header for vtep-to-vtep communication. Multiple vxlan networks can be created on one physical network; each vxlan network can be thought of as a tunnel through which the virtual machines on different nodes connect directly. Each vxlan network is identified by a unique VNI, and different vxlans do not interfere with one another.
- VTEP (VXLAN Tunnel Endpoints): the edge device of a vxlan network, responsible for processing (encapsulating and decapsulating) vxlan packets. A vtep can be a network device (e.g. a switch) or a machine (e.g. a hypervisor host in a virtualization cluster)
- VNI (VXLAN Network Identifier): the identifier of each vxlan; there are 2^24 = 16,777,216 of them. Typically each VNI corresponds to one tenant, so a public cloud built on vxlan can theoretically support tens of millions of tenants
Demo: between the k8s-slave1 and k8s-slave2 machines, use vxlan's point-to-point capability to build a virtual layer-2 network.
On the k8s-slave1 node:
$ ip link add vxlan20 type vxlan id 20 remote 10.0.1.8 dstport 4789 dev ens32
vxlan20: the device name
type: the device type, vxlan
id: the VNI, here 20
remote: the remote endpoint address, 10.0.1.8
dstport: the destination UDP port, 4789 by convention
dev: the NIC on the machine where this command runs
$ ip -d link show vxlan20
$ ip link set vxlan20 up
$ ip addr add 10.0.51.55/24 dev vxlan20
Note:
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
13: vxlan20: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000
link/ether 52:e6:f0:85:05:73 brd ff:ff:ff:ff:ff:ff promiscuity 0
vxlan id 20 remote 10.0.1.8 dev ens32 srcport 0 0 dstport 4789 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
13: vxlan20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ether 52:e6:f0:85:05:73 brd ff:ff:ff:ff:ff:ff promiscuity 0
vxlan id 20 remote 10.0.1.8 dev ens32 srcport 0 0 dstport 4789 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
On the k8s-slave2 node:
[root@k8s-slave2 ~]
[root@k8s-slave2 ~]
[root@k8s-slave2 ~]
[root@k8s-slave2 ~]
12: vxlan20: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/ether e6:5e:54:bd:8d:13 brd ff:ff:ff:ff:ff:ff promiscuity 0
vxlan id 20 remote 10.0.1.6 dev ens32 srcport 0 0 dstport 4789 ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
On the k8s-slave1 node:
$ ping 10.0.137.11
[root@k8s-slave1 ~]
On the k8s-slave2 machine:
[root@k8s-slave2 ~]
[root@k8s-slave1 ~]
PING 10.0.137.11 (10.0.137.11) 56(84) bytes of data.
64 bytes from 10.0.137.11: icmp_seq=1 ttl=64 time=0.777 ms
64 bytes from 10.0.137.11: icmp_seq=2 ttl=64 time=0.580 ms
On the k8s-slave2 machine:
$ tcpdump -i vxlan20 icmp
A tunnel is a logical concept with no corresponding physical entity in the vxlan model. It can be seen as a virtual channel: the two ends of the vxlan conversation (the virtual machines in the figure) believe they are communicating directly and are unaware of the underlying network. Overall, each vxlan network acts like a dedicated communication channel — a tunnel — built for the communicating virtual machines.
How it works:
The virtual machine's packet is wrapped by the vtep with the vxlan and outer headers and sent out; the remote vtep strips the vxlan header and delivers the original packet to the destination virtual machine according to the VNI.
$ route -n
10.0.51.0 0.0.0.0 255.255.255.0 U 0 0 0 vxlan20
10.0.52.0 0.0.0.0 255.255.255.0 U 0 0 0 vxlan20
$ ip -d link show vxlan20
vxlan id 20 remote 172.21.51.55 dev eth0 srcport 0 0 dstport 4789 ...
$ bridge fdb show dev vxlan20
00:00:00:00:00:00 dst 172.21.52.84 via eth0 self permanent
a6:61:05:84:20:c6 dst 172.21.52.84 self
Note:
[root@k8s-slave1 ~]
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
10.0.136.0 0.0.0.0 255.255.255.0 U 0 0 0 vxlan20
10.0.137.0 0.0.0.0 255.255.255.0 U 0 0 0 vxlan20
[root@k8s-slave1 ~]
vxlan id 20 remote 10.0.1.8 dev ens32 srcport 0 0 dstport 4789
[root@k8s-slave1 ~]
00:00:00:00:00:00 dev vxlan20 dst 10.0.1.8 via ens32 self permanent
02:6e:c5:fa:d1:89 dev vxlan20 dst 10.0.1.8 self
Capture packets on the k8s-slave2 machine to inspect the vxlan-encapsulated traffic:
$ tcpdump -i ens32 host 10.0.1.6 -w vxlan.cap
$ ping 10.0.137.11
Note: install the tcpdump command first:
yum install -y tcpdump
[root@k8s-slave2 ~]
tcpdump: listening on ens32, link-type EN10MB (Ethernet), capture size 262144 bytes
[root@k8s-slave2 ~]
total 24
-rw-------. 1 root root 1441 Jul 7 15:41 anaconda-ks.cfg
-rw-r--r-- 1 tcpdump tcpdump 16726 Jul 18 16:23 vxlan.cap
[root@k8s-slave1 ~]
PING 10.0.136.12 (10.0.136.12) 56(84) bytes of data.
64 bytes from 10.0.136.12: icmp_seq=1 ttl=64 time=0.435 ms
64 bytes from 10.0.136.12: icmp_seq=2 ttl=64 time=0.512 ms
64 bytes from 10.0.136.12: icmp_seq=3 ttl=64 time=0.721 ms
Analyze the captured ICMP packets with Wireshark.
Clean up:
$ ip link del vxlan20
Cross-host container network communication
Question: in container-network mode, where should the vxlan device attach?
The basic guarantee: traffic for the destination container must be forwarded through the vtep device!
Demo: use vxlan to implement cross-host container network communication.
To avoid disturbing the existing network, create a new bridge and attach new containers to it for this demo.
On the k8s-slave1 node:
$ docker network ls
$ docker network create --subnet 172.18.1.0/24 network-luffy
$ docker network ls
$ brctl show
$ docker run -d --name vxlan-test --net network-luffy --ip 172.18.1.2 nginx:alpine
$ docker exec vxlan-test ifconfig
Note:
[root@k8s-slave1 ~]
NETWORK ID NAME DRIVER SCOPE
59a5d3a5fbab bridge bridge local
3b24ea493741 host host local
05f4c1d4d620 none null local
[root@k8s-slave1 ~]
6cda332dade866f0990994a924953a5b06efd80dbf058a8b17f5bda0ad94328a
[root@k8s-slave1 ~]
NETWORK ID NAME DRIVER SCOPE
3017c364daf2 bridge bridge local
3b24ea493741 host host local
c58faa633165 network-luffy bridge local
05f4c1d4d620 none null local
[root@k8s-slave1 ~]
bridge name bridge id STP enabled interfaces
br-6cda332dade8 8000.02425dbc8bf5 no
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
[bridge name bridge id STP enabled interfaces
br-6cda332dade8 8000.02425dbc8bf5 no vethabc15e1
[root@k8s-slave1 ~]
eth0 Link encap:Ethernet HWaddr 02:42:AC:12:01:02
inet addr:172.18.1.2 Bcast:172.18.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:13 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:1086 (1.0 KiB) TX bytes:0 (0.0 B)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
On the k8s-slave2 node:
$ docker network create --subnet 172.18.2.0/24 network-luffy
$ docker run -d --name vxlan-test --net network-luffy --ip 172.18.2.2 nginx:alpine
Note:
[root@k8s-slave2 ~]
e954091f19d1fd1ee389a366a4e5222498475a02669888da6931ed36fac5b721
[root@k8s-slave2 ~]
3f701927057a6a4c9d20591e53299af8d9459ebacc6114b2d0620c681dcef004
[root@k8s-slave2 ~]
PING 172.18.2.2 (172.18.2.2): 56 data bytes
64 bytes from 172.18.2.2: seq=0 ttl=64 time=0.096 ms
64 bytes from 172.18.2.2: seq=1 ttl=64 time=0.168 ms
[root@k8s-slave2 ~]
172.18.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br-a05a9e1cbf5c
Now run the ping test:
$ docker exec vxlan-test ping 172.18.2.2
[root@k8s-slave1 ~]
PING 172.18.2.2 (172.18.2.2): 56 data bytes
Analysis: the data reaches the bridge but cannot leave the host. Combining this with the earlier example, the traffic should be forwarded by the vtep device. Recall the bridge's behavior: the bridge forwards data on behalf of the ports attached to it. So if the vxlan device is attached to the bridge as a port, all data the containers send passes through the vxlan port, vxlan forwards it to the vtep endpoint on the other side, and there the bridge delivers it into the container.
On the k8s-slave1 node:
$ ip link del vxlan20
$ ip link add vxlan_docker type vxlan id 100 remote 172.21.52.84 dstport 4789 dev eth0
$ ip link set vxlan_docker up
$ brctl show
br-0fdb78d3b486 8000.02421452871b no vethfffdd2f
$ brctl addif br-0fdb78d3b486 vxlan_docker
Note:
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
bridge name bridge id STP enabled interfaces
br-7aa1c412ebd9 8000.02422da4cc96 no veth30da43b
Note: this ID is the bridge name br-7aa1c412ebd9 from the brctl show output above.
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
bridge name bridge id STP enabled interfaces
br-7aa1c412ebd9 8000.02422da4cc96 no veth30da43b
vxlan_docker
[root@k8s-slave1 ~]
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.18.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br-7aa1c412ebd9
[root@k8s-slave1 ~]
On the k8s-slave2 node:
$ ip link del vxlan20
$ ip link add vxlan_docker type vxlan id 100 remote 172.21.51.55 dstport 4789 dev eth0
$ ip link set vxlan_docker up
$ brctl show
$ brctl addif br-c6660fe2dc53 vxlan_docker
Note:
[root@k8s-slave2 ~]
[root@k8s-slave2 ~]
[root@k8s-slave2 ~]
[root@k8s-slave2 ~]
bridge name bridge id STP enabled interfaces
br-cbd95326f8df 8000.02421006642f no veth7e24c00
Note: this ID is the bridge name br-cbd95326f8df from the brctl show output above.
[root@k8s-slave2 ~]
[root@k8s-slave2 ~]
bridge name bridge id STP enabled interfaces
br-cbd95326f8df 8000.02421006642f no veth7e24c00
vxlan_docker
[root@k8s-slave2 ~]
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
172.18.2.0 0.0.0.0 255.255.255.0 U 0 0 0 br-cbd95326f8df
[root@k8s-slave2 ~]
Run the ping test again:
$ docker exec vxlan-test ping 172.18.2.2
[root@k8s-slave1 ~]
PING 172.18.2.2 (172.18.2.2): 56 data bytes
64 bytes from 172.18.2.2: seq=0 ttl=63 time=0.843 ms
64 bytes from 172.18.2.2: seq=1 ttl=63 time=1.293 ms
By manually creating bridges, we achieved container-to-container communication across hosts.
The brctl command
brctl configures Linux bridges.
Install the package:
yum install -y bridge-utils
| Parameter | Description | Example |
|---|---|---|
| addbr <bridge> | create a bridge | brctl addbr br10 |
| delbr <bridge> | delete a bridge | brctl delbr br10 |
| addif <bridge> <device> | attach a NIC to a bridge | brctl addif br10 eth0 |
| delif <bridge> <device> | detach a NIC from a bridge | brctl delif br10 eth0 |
| show <bridge> | show bridge info | brctl show br10, or simply brctl show |
| stp <bridge> {on|off} | enable/disable STP | brctl stp br10 off/on |
| showstp <bridge> | show bridge STP info | brctl showstp br10 |
| setfd <bridge> <time> | set the bridge forward delay | brctl setfd br10 10 |
| showmacs <bridge> | show MAC info | brctl showmacs br10 |
The ip command
https://wangchujiang.com/linux-command/c/ip.html
Flannel's vxlan implementation in depth
Question: how does the k8s cluster's network differ from the manually built cross-host container network?
- CNI requires every Pod in the cluster to be assigned a unique Pod IP
- Communication within a k8s cluster is not point-to-point vxlan, since every node in the cluster must be able to reach every other
- Cluster nodes can be added dynamically
How flannel assigns a Pod address range to each node:
$ kubectl -n kube-system get po |grep flannel
$ kubectl -n kube-system exec kube-flannel-ds-amd64-cb7hs cat /etc/kube-flannel/net-conf.json
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
[root@k8s-master bin]
NAME READY STATUS RESTARTS AGE IP NODE
myblog-5d9ff54d4b-4rftt 1/1 Running 1 33h 10.244.2.19 k8s-slave2
myblog-5d9ff54d4b-n447p 1/1 Running 1 33h 10.244.1.32 k8s-slave1
$ cat /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.1.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
Note:
[root@k8s-master week3]
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=true
[root@k8s-slave2 ~]
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.2.1/24
FLANNEL_MTU=1500
FLANNEL_IPMASQ=true
Note:
[root@k8s-master 2021]
networking:
dnsDomain: cluster.local
podSubnet: 10.244.0.0/16
serviceSubnet: 10.96.0.0/12
[root@k8s-master 2021]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-65847cf6ff-f5rv2 1/1 Running 1 7h31m 10.244.1.15 k8s-slave1 <none> <none>
myblog-65847cf6ff-tz46d 1/1 Running 1 7h31m 10.244.0.11 k8s-master <none> <none>
mysql-58d95d459c-jj4sx 1/1 Running 0 5h18m 10.244.1.16 k8s-slave1 <none>
[root@k8s-master ~]
PodCIDR: 10.244.1.0/24
PodCIDRs: 10.244.1.0/24
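The Pod CIDR assigned to a node is recorded on the Node object, and the output above can be reproduced with a command like this (an illustrative invocation):
$ kubectl describe node k8s-slave1 | grep -i podcidr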
Where is the vtep device:
$ ip -d link show flannel.1
[root@k8s-master 2021]
6: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 0a:4c:e7:8d:15:4e brd ff:ff:ff:ff:ff:ff promiscuity 0
vxlan id 1 local 10.0.1.5 dev ens32 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
How Pod traffic reaches the vtep device:
$ brctl show cni0
$ route -n
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.2.0 10.244.2.0 255.255.255.0 UG 0 0 0 flannel.1
Note:
[root@k8s-master 2021]
bridge name bridge id STP enabled interfaces
cni0 8000.067713fbf9e2 no veth34f9f400
veth7d8843b1
vethc42219bc
[root@k8s-slave1 ~]
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.2 0.0.0.0 UG 100 0 0 ens32
10.0.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens32
10.244.0.0 10.0.1.5 255.255.255.0 UG 0 0 0 ens32
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.2.0 10.0.1.8 255.255.255.0 UG 0 0 0 ens32
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br-c58faa633165
The route command: display and configure the Linux static routing table.
How does the vtep obtain the destination vtep's IP and MAC when encapsulating?
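flannel pre-populates both lookups: the neighbor (ARP) table maps the remote gateway IP to its vtep MAC, and the forwarding database maps that MAC to the node IP. Both can be inspected with standard iproute2 commands:
$ ip neigh show dev flannel.1      # remote vtep IP -> MAC entries
$ bridge fdb show dev flannel.1    # vtep MAC -> node IP entries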
Walking through the detailed traffic flow of cross-host Pod communication:
$ kubectl -n luffy get po -o wide
myblog-5d9ff54d4b-4rftt 1/1 Running 1 25h 10.244.2.19 k8s-slave2
myblog-5d9ff54d4b-n447p 1/1 Running 1 25h 10.244.1.32 k8s-slave1
$ kubectl -n luffy exec myblog-5d9ff54d4b-n447p -- ping 10.244.2.19 -c 2
PING 10.244.2.19 (10.244.2.19) 56(84) bytes of data.
64 bytes from 10.244.2.19: icmp_seq=1 ttl=62 time=0.480 ms
64 bytes from 10.244.2.19: icmp_seq=2 ttl=62 time=1.44 ms
--- 10.244.2.19 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.480/0.961/1.443/0.482 ms
$ kubectl -n luffy exec myblog-5d9ff54d4b-n447p -- route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.244.1.1 0.0.0.0 UG 0 0 0 eth0
10.244.0.0 10.244.1.1 255.255.0.0 UG 0 0 0 eth0
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
$ brctl show
bridge name bridge id STP enabled interfaces
cni0 8000.6a9a0b341d88 no veth048cc253
veth76f8e4ce
vetha4c972e1
$ route -n
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.136.2 0.0.0.0 UG 100 0 0 eth0
10.0.136.0 0.0.0.0 255.255.255.0 U 0 0 0 vxlan20
10.244.0.0 10.244.0.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.2.0 10.244.2.0 255.255.255.0 UG 0 0 0 flannel.1
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.136.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
$ ip -d link show flannel.1
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 8a:2a:89:4d:b0:31 brd ff:ff:ff:ff:ff:ff promiscuity 0
vxlan id 1 local 172.21.51.68 dev eth0 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
$ bridge fdb show dev flannel.1
4a:4d:9d:3a:c5:f0 dst 172.21.51.68 self permanent
76:e7:98:9f:5b:e9 dst 172.21.51.67 self permanent
$ route -n
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.21.50.140 0.0.0.0 UG 0 0 0 eth0
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.1.0 10.244.1.0 255.255.255.0 UG 0 0 0 flannel.1
10.244.2.0 10.244.2.0 255.255.255.0 UG 0 0 0 flannel.1
169.254.0.0 0.0.0.0 255.255.0.0 U 1002 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.21.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0
Summary: flannel implements direct cross-host pod-to-pod communication.
[root@k8s-master 2021]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-65847cf6ff-8s75f 1/1 Running 0 39s 10.244.2.7 k8s-slave2 <none> <none>
myblog-65847cf6ff-f5rv2 1/1 Running 1 7h59m 10.244.1.15 k8s-slave1 <none> <none>
myblog-65847cf6ff-tz46d 1/1 Running 1 7h59m 10.244.0.11 k8s-master <none> <none>
mysql-58d95d459c-jj4sx 1/1 Running 0 5h45m 10.244.1.16 k8s-slave1 <none> <none>
[root@k8s-master 2021]
PING 10.244.2.7 (10.244.2.7) 56(84) bytes of data.
64 bytes from 10.244.2.7: icmp_seq=1 ttl=62 time=0.514 ms
64 bytes from 10.244.2.7: icmp_seq=2 ttl=62 time=0.455 ms
[root@k8s-master 2021]
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.1.15 netmask 255.255.255.0 broadcast 10.244.1.255
ether de:8b:7c:f3:44:8a txqueuelen 0 (Ethernet)
RX packets 73499 bytes 7631745 (7.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 80288 bytes 9807009 (9.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 38636 bytes 6604367 (6.2 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 38636 bytes 6604367 (6.2 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
[root@k8s-master 2021]
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.244.1.1 0.0.0.0 UG 0 0 0 eth0
10.244.0.0 10.244.1.1 255.255.0.0 UG 0 0 0 eth0
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
[root@k8s-master 2021]
bridge name bridge id STP enabled interfaces
cni0 8000.067713fbf9e2 no veth34f9f400
veth7d8843b1
vethc42219bc
[root@k8s-slave1 ~]
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.2 0.0.0.0 UG 100 0 0 ens32
10.0.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens32
10.244.0.0 10.0.1.5 255.255.255.0 UG 0 0 0 ens32
10.244.1.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.2.0 10.0.1.8 255.255.255.0 UG 0 0 0 ens32
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.1.0 0.0.0.0 255.255.255.0 U 0 0 0 br-c58faa633165
[root@k8s-slave1 ~]
4: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN mode DEFAULT group default
link/ether 06:cd:95:7e:6e:a6 brd ff:ff:ff:ff:ff:ff promiscuity 0
vxlan id 1 local 10.0.1.6 dev ens32 srcport 0 0 dstport 8472 nolearning ageing 300 noudpcsum noudp6zerocsumtx noudp6zerocsumrx addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
[root@k8s-slave1 ~]
0a:4c:e7:8d:15:4e dst 10.0.1.5 self permanent
5a:b9:e4:92:6a:34 dst 10.0.1.8 self permanent
72:cd:f4:b5:34:d5 dst 10.0.1.5 self permanent
[root@k8s-slave2 ~]
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.2 0.0.0.0 UG 100 0 0 ens32
10.0.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens32
10.244.0.0 10.0.1.5 255.255.255.0 UG 0 0 0 ens32
10.244.1.0 10.0.1.6 255.255.255.0 UG 0 0 0 ens32
10.244.2.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.0.0 0.0.0.0 255.255.0.0 U 0 0 0 br-aaad277060d
[root@k8s-slave2 ~]
bridge name bridge id STP enabled interfaces
cni0 8000.1e45de8b7ca7 no veth20885c0b
Summary: flannel implements direct cross-host pod-to-pod communication.
The actual request flow:
- An IP packet in pod-a on node k8s-slave1, destined for 10.244.2.19, is sent via pod-a's own routing table to eth0, then through the veth pair to the host bridge cni0
- Arriving at cni0, the IP packet matches node k8s-slave1's routing table, which says packets bound for 10.244.2.19 should be handed to the flannel.1 interface
- flannel.1, as a VTEP device, encapsulates the packet per its VTEP configuration. The first time, it issues an ARP request, learns that the vtep for 10.244.2.19 is the k8s-slave2 machine with IP 172.21.51.67, obtains the MAC address, and performs the VXLAN encapsulation
- Over the network connection between nodes k8s-slave2 and k8s-slave1, the VXLAN packet arrives at k8s-slave2's eth0 interface
- Via port 8472, the VXLAN packet is handed to the VTEP device flannel.1 for decapsulation
- The decapsulated IP packet matches node k8s-slave2's routing table (10.244.2.0), and the kernel forwards it to cni0
- cni0 forwards the IP packet to pod-b attached to cni0
Using host-gw mode to improve cluster network performance
vxlan mode suits any layer-3-reachable network environment and places very loose requirements on the cluster network, but the extra encapsulation and decapsulation performed by the VTEP devices adds performance overhead.
A network plugin's real purpose is to deliver traffic from the local cni0 bridge to the destination host's cni0 bridge. Many clusters in practice sit in a single layer-2 network, so the hosts themselves can act as gateways for forwarding. In that case no encapsulation is needed at all; traffic is forwarded directly via the routing tables.
Why can't a merely layer-3-reachable network forward traffic through a gateway directly?
Kernel routing rules require the gateway to be on the same subnet as at least one IP of the host.
Since every node in a k8s cluster needs Pod-level connectivity with every other, host-gw mode requires all cluster nodes to be in the same layer-2 network.
Change flannel's network backend:
$ kubectl edit cm kube-flannel-cfg -n kube-system
...
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "host-gw"
}
}
kind: ConfigMap
...
Note:
[root@k8s-master ~]
NAME DATA AGE
coredns 1 26h
extension-apiserver-authentication 6 26h
kube-flannel-cfg 2 6h30m
kube-proxy 2 26h
kubeadm-config 2 26h
kubelet-config-1.19 1 26h
[root@k8s-master ~]
apiVersion: v1
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "host-gw"
}
}
kind: ConfigMap
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","data":{"cni-conf.json":"{\n \"name\": \"cbr0\",\n \"cniVersion\": \"0.3.1\",\n \"plugins\": [\n {\n \"type\": \"flannel\",\n \"delegate\": {\n \"hairpinMode\": true,\n \"isDefaultGateway\": true\n }\n },\n {\n \"type\": \"portmap\",\n \"capabilities\": {\n \"portMappings\": true\n }\n }\n ]\n}\n","net-conf.json":"{\n \"Network\": \"10.244.0.0/16\",\n \"Backend\": {\n \"Type\": \"vxlan\"\n }\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app":"flannel","tier":"node"},"name":"kube-flannel-cfg","namespace":"kube-system"}}
creationTimestamp: "2021-07-18T07:11:15Z"
labels:
app: flannel
tier: node
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
.: {}
f:cni-conf.json: {}
f:metadata:
f:annotations:
.: {}
f:kubectl.kubernetes.io/last-applied-configuration: {}
f:labels:
.: {}
f:app: {}
f:tier: {}
manager: kubectl-client-side-apply
operation: Update
time: "2021-07-18T07:11:15Z"
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:data:
f:net-conf.json: {}
manager: kubectl-edit
operation: Update
time: "2021-07-18T12:14:55Z"
name: kube-flannel-cfg
namespace: kube-system
resourceVersion: "218583"
selfLink: /api/v1/namespaces/kube-system/configmaps/kube-flannel-cfg
uid: a50a1bd5-d261-468d-b498-9b8aa83da681
[root@k8s-master week3]
kube-flannel-ds-9xr8l 1/1 Running 0 88m
kube-flannel-ds-hz2tg 1/1 Running 0 88m
kube-flannel-ds-qj4zh 1/1 Running 0 88m
[root@k8s-master week3]
cni-conf.json
net-conf.json
[root@k8s-master week3]
29 "Network": "10.244.0.0/16",
30 "Backend": {
31 "Type": "host-gw"
[root@k8s-master week3]
kube-flannel-ds-9xr8l 1/1 Running 0 88m
kube-flannel-ds-hz2tg 1/1 Running 0 88m
kube-flannel-ds-qj4zh 1/1 Running 0 88m
[root@k8s-master week3]
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.2 0.0.0.0 UG 100 0 0 ens32
10.0.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens32
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.1.0 10.0.1.6 255.255.255.0 UG 0 0 0 ens32
10.244.2.0 10.0.1.8 255.255.255.0 UG 0 0 0 ens32
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
[root@k8s-master week3]
pod "kube-flannel-ds-9xr8l" deleted
pod "kube-flannel-ds-hz2tg" deleted
pod "kube-flannel-ds-qj4zh" deleted
[root@k8s-master week3]
kube-flannel-ds-bd59v 1/1 Running 0 38s
kube-flannel-ds-p9l5c 1/1 Running 0 49s
kube-flannel-ds-s92r5 1/1 Running 0 47s
[root@k8s-master week3]
I0718 13:52:47.232151 1 main.go:533] Using interface with name ens32 and address 10.0.1.6
I0718 13:52:47.232259 1 main.go:550] Defaulting external address to interface address (10.0.1.6)
W0718 13:52:47.232278 1 client_config.go:608] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0718 13:52:47.434705 1 kube.go:116] Waiting 10m0s for node controller to sync
I0718 13:52:47.434794 1 kube.go:299] Starting kube subnet manager
I0718 13:52:48.435230 1 kube.go:123] Node controller sync successful
I0718 13:52:48.435289 1 main.go:254] Created subnet manager: Kubernetes Subnet Manager - k8s-slave1
I0718 13:52:48.435297 1 main.go:257] Installing signal handlers
I0718 13:52:48.435864 1 main.go:392] Found network config - Backend type: host-gw
[Finding this line means the change has taken effect]
[root@k8s-master week3]
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 10.0.1.2 0.0.0.0 UG 100 0 0 ens32
10.0.1.0 0.0.0.0 255.255.255.0 U 100 0 0 ens32
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.1.0 10.0.1.6 255.255.255.0 UG 0 0 0 ens32
10.244.2.0 10.0.1.8 255.255.255.0 UG 0 0 0 ens32
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
[root@k8s-master week3]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
myblog-65847cf6ff-8s75f 1/1 Running 0 49m 10.244.2.7 k8s-slave2 <none> <none>
myblog-65847cf6ff-f5rv2 1/1 Running 1 8h 10.244.1.15 k8s-slave1 <none> <none>
myblog-65847cf6ff-tz46d 1/1 Running 1 8h 10.244.0.11 k8s-master <none> <none>
mysql-58d95d459c-jj4sx 1/1 Running 0 6h34m 10.244.1.16 k8s-slave1 <none> <none>
[root@k8s-master week3]
PING 10.244.1.15 (10.244.1.15) 56(84) bytes of data.
64 bytes from 10.244.1.15: icmp_seq=1 ttl=63 time=0.386 ms
[root@k8s-master week3]
PING 10.244.2.7 (10.244.2.7) 56(84) bytes of data.
64 bytes from 10.244.2.7: icmp_seq=1 ttl=62 time=3.74 ms
64 bytes from 10.244.2.7: icmp_seq=2 ttl=62 time=0.389 ms
Recreating the Flannel Pods
$ kubectl -n kube-system get po |grep flannel
kube-flannel-ds-amd64-5dgb8 1/1 Running 0 15m
kube-flannel-ds-amd64-c2gdc 1/1 Running 0 14m
kube-flannel-ds-amd64-t2jdd 1/1 Running 0 15m
$ kubectl -n kube-system delete po kube-flannel-ds-amd64-5dgb8 kube-flannel-ds-amd64-c2gdc kube-flannel-ds-amd64-t2jdd
$ kubectl -n kube-system logs -f kube-flannel-ds-amd64-4hjdw
I0704 01:18:11.916374 1 kube.go:126] Waiting 10m0s for node controller to sync
I0704 01:18:11.916579 1 kube.go:309] Starting kube subnet manager
I0704 01:18:12.917339 1 kube.go:133] Node controller sync successful
I0704 01:18:12.917848 1 main.go:247] Installing signal handlers
I0704 01:18:12.918569 1 main.go:386] Found network config - Backend type: host-gw
I0704 01:18:13.017841 1 main.go:317] Wrote subnet file to /run/flannel/subnet.env
View the node routing table:
$ route -n
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 192.168.136.2 0.0.0.0 UG 100 0 0 eth0
10.244.0.0 0.0.0.0 255.255.255.0 U 0 0 0 cni0
10.244.1.0 172.21.51.68 255.255.255.0 UG 0 0 0 eth0
10.244.2.0 172.21.51.55 255.255.255.0 UG 0 0 0 eth0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
192.168.136.0 0.0.0.0 255.255.255.0 U 100 0 0 eth0
- An IP packet in pod-a on node k8s-slave1, destined for 10.244.2.19, is sent via pod-a's own routing table to eth0, then through the veth pair to the host bridge cni0
- Arriving at cni0, the IP packet matches node k8s-slave1's routing table, which says packets bound for 10.244.2.19 should be forwarded via the gateway 172.21.51.55
- The packet arrives at the eth0 NIC of node k8s-slave2 (172.21.51.55) and, per that node's routing rules, is forwarded to the cni0 NIC
- cni0 forwards the IP packet to pod-b attached to cni0
In other words:
- the IP packet in the pod on slave1 is routed to eth0 and passes through the veth pair to the host bridge cni0
- at cni0 the packet matches k8s-slave1's routing table and is forwarded using k8s-slave2 as the gateway
- the packet arrives at the eth0 NIC of node k8s-slave2 (the gateway), whose own routing rules forward it to cni0
- cni0 forwards the IP packet to pod-b attached to cni0
Summary
What did we do?
- Implemented vxlan point-to-point communication
- Built bridges and used vxlan to achieve cross-host container communication
- Explained how the flannel plugin implements pod-to-pod communication across hosts
- Used host-gw (host gateway) to improve cluster network performance — on the condition that all nodes share the same layer-2 network
What to take away:
Kubernetes:
- Understand how flannel works
- Understand how cross-host container communication can be implemented
- Know what flannel is for: a network plugin that implements cross-host pod-to-pod communication; its vxlan backend works anywhere with layer-3 reachability. Be familiar with host-gw, which performs better than vxlan but requires all nodes on the same layer-2 network — a stricter requirement
Beyond Kubernetes:
- Be comfortable using commands such as brctl and ip link
Kubernetes Authentication and Authorization
APIServer security controls
- Authentication: identity verification
  - This stage takes the entire http request as input and verifies the identity of the requesting client; several methods are supported.
  - At startup, the APIServer may be configured with one or more Authentication methods. With multiple methods configured, the APIServer tries them one by one, and as soon as the request passes any one of them, Authentication succeeds.
  - In a cluster bootstrapped with kubeadm, the apiserver's initial configuration supports two authentication methods by default: client certificates and serviceaccounts. Certificate authentication is enabled by setting --client-ca-file (the root certificate) together with --tls-cert-file and --tls-private-key-file.
  - In this stage the apiserver identifies the requesting user (including "user" and "group") from the client certificate or from http header fields (such as a serviceaccount's JWT token); that information is used later in the authorization stage.
- Authorization: which resources you may access
  - This stage takes the attributes of the http request context as input, including user, group, request path (e.g. /api/v1, /healthz, /version) and request verb (e.g. get, list, create).
  - The APIServer compares these attribute values against preconfigured access policies. It supports multiple authorization modes, including Node, RBAC, and Webhook.
  - At startup, the APIServer may be configured with one or more authorization modes; with several configured, a request is authorized as soon as any one mode allows it. In recent kubeadm-bootstrapped clusters the apiserver's default authorization-mode is "Node,RBAC".
- Admission Control: a chain of controllers (gate after gate) that intercept requests, oriented toward cluster safety and management.
  - Why is it needed? Authentication and authorization only see the http request headers and certificates and cannot validate the request body. Admission runs inside the API Server's create/read/update/delete handlers, so it can operate on API resources naturally.
  - Some examples:
  - NamespaceLifecycle: this plugin ensures that a Namespace in the Terminating state accepts no new object-creation requests and rejects requests against nonexistent Namespaces. It also prevents deletion of the system-reserved Namespaces: default, kube-system, kube-public.
  - LimitRanger: if a namespace defines a LimitRange object and a Pod is declared without resource values, defaults from the LimitRange are applied to the Pod:
apiVersion: v1
kind: LimitRange
metadata:
name: mem-limit-range
namespace: demo
spec:
limits:
- default:
memory: 512Mi
defaultRequest:
memory: 256Mi
type: Container
---
apiVersion: v1
kind: Pod
metadata:
name: default-mem-demo
namespace: demo
spec:
containers:
- name: default-mem-demo
image: nginx:alpine
Note:
[root@k8s-master week3]
namespace/demo created
[root@k8s-master week3]
[root@k8s-master week3]
limitrange/mem-limit-range created
[root@k8s-master week3]
[root@k8s-master week3]
pod/default-mem-demo unchanged
[root@k8s-master week3]
NAME READY STATUS RESTARTS AGE
default-mem-demo 1/1 Running 0 7m33s
[root@k8s-master week3]
limits:
memory: 512Mi
requests:
memory: 256Mi
- NodeRestriction: this plugin limits the Node and Pod objects a kubelet may modify — such kubelets may only modify Pod API objects bound to their own Node; later versions may add further restrictions. It is enabled by default when the Node authorization mode is on.
- How is it used? At startup the APIServer takes --enable-admission-plugins and --disable-admission-plugins to specify which Admission Controllers to enable or disable.
- Typical scenarios:
  - Automatically injecting sidecar or initContainer containers
  - Webhook admission, to implement custom business controls
kubectl authentication and authorization
kubectl log verbosity levels:

| Level | Description |
|---|---|
| v=0 | Generally useful; always visible to an operator. |
| v=1 | A reasonable default log level when you don't want very verbose output. |
| v=2 | Useful steady-state information about the service and important log messages that may correlate with significant changes in the system. The recommended default for most systems. |
| v=3 | Extended information about changes. |
| v=4 | Debug-level information. |
| v=6 | Display requested resources. |
| v=7 | Display HTTP request headers. |
| v=8 | Display HTTP request contents. |
| v=9 | Display HTTP request contents without truncating them. |
$ kubectl get nodes -v=7
I0329 20:20:08.633065 3979 loader.go:359] Config loaded from file /root/.kube/config
I0329 20:20:08.633797 3979 round_trippers.go:416] GET https://172.21.51.143:6443/api/v1/nodes?limit=500
Note: a debugging aid — the larger the number, the more detailed the output.
[root@k8s-master week3]
I0720 11:13:44.713735 101669 loader.go:375] Config loaded from file: /root/.kube/config
I0720 11:13:44.783558 101669 round_trippers.go:421] GET https://10.0.1.5:6443/api/v1/nodes?limit=500
I0720 11:13:44.783593 101669 round_trippers.go:428] Request Headers:
I0720 11:13:44.783598 101669 round_trippers.go:432] Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json
I0720 11:13:44.783603 101669 round_trippers.go:432] User-Agent: kubectl/v1.19.8 (linux/amd64) kubernetes/fd5d415
I0720 11:13:44.792951 101669 round_trippers.go:447] Response Status: 200 OK in 9 milliseconds
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 2d16h v1.19.8
k8s-slave1 Ready <none> 2d16h v1.19.8
k8s-slave2 Ready <none> 2d15h v1.19.8
After kubeadm init finishes bringing up the master node, it prints a hint like the following by default:
... ...
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
... ...
This output tells us how to configure the kubeconfig file. After running those commands, kubectl on the master node can access the k8s cluster directly with the information in $HOME/.kube/config — and configured this way, kubectl holds administrator (root) privileges over the whole cluster.
Many k8s beginners have two questions at this point:
- When kubectl accesses the cluster via this kubeconfig, how does Kubernetes' kube-apiserver perform authentication and authorization on requests coming from kubectl?
- Why do requests from kubectl carry the highest, administrator-level privileges?
Inspect the /root/.kube/config file:
[root@k8s-master week3]
total 8
drwxr-x--- 4 root root 35 Jul 17 19:09 cache
-rw------- 1 root root 5560 Jul 17 19:07 config
As mentioned earlier, the apiserver's authentication can verify client requests via tls client certificates, basic auth, tokens, and so on. Judging from the kubeconfig, kubectl clearly uses the tls client certificate method, i.e. a client-side certificate, in its requests.
Base64-decode the certificate:
$ echo xxxxxxxxxxxxxx |base64 -d > kubectl.crt
Note:
[root@k8s-master week3]
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: xxxx
...
client-certificate-data: xxxx
...
client-key-data:
...
[root@k8s-master week3]
[root@k8s-master week3]
-----BEGIN CERTIFICATE-----
[root@k8s-master week3]
...
-----END RSA PRIVATE KEY-----
echo certificate-authority-data: xxx | base64 -d
cat /etc/kubernetes/pki/ca.crt
This shows that in the authentication stage, the apiserver first validates the certificate presented by kubectl against the CA certificate configured via --client-ca-file. The basic check:
$ openssl verify -CAfile /etc/kubernetes/pki/ca.crt kubectl.crt
kubectl.crt: OK
Note:
[root@k8s-master week3]
kubectl.crt: OK
[root@k8s-master week3]
/etc/kubernetes/pki/apiserver-kubelet-client.crt: OK
Besides authenticating the identity, the apiserver also extracts the information needed by the authorization stage. View the certificate contents in text form:
$ openssl x509 -in kubectl.crt -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 4736260165981664452 (0x41ba9386f52b74c4)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Feb 10 07:33:39 2020 GMT
Not After : Feb 9 07:33:40 2021 GMT
Subject: O=system:masters, CN=kubernetes-admin
...
[root@k8s-master week3]
After authentication passes, the CN (Common Name) specified when the certificate was issued, kubernetes-admin, is extracted as the requesting User Name, and the O (Organization) field is extracted as the requesting user's Group, group = system:masters; both are passed on to the authorization module.
Note: at this point the user has passed authentication; what remains is authorization.
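The same fields can be extracted directly with standard openssl flags (output formatting varies slightly across openssl versions):
$ openssl x509 -in kubectl.crt -noout -subject
subject=O = system:masters, CN = kubernetes-admin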
During kubeadm init's initial bootstrap of the cluster, many default RBAC rules are created. In the official k8s RBAC documentation we find a list of default clusterroles:
Understanding:
- A default clusterrole is a cluster-wide role that defines many operable resources and permissions.
- cluster-admin is a default clusterrole carrying administrator permissions; whoever (user or group) is bound to it gains the same permissions.
- default clusterrolebinding: the O (Organization) field extracted from the certificate is used as the requesting user's group, group = system:masters.
The table below means:
- default clusterroles first define a set of cluster-wide roles, with the permissions and resources they may operate on;
- default clusterrolebindings then define which users and groups are bound to which of those roles, thereby gaining the same permissions.
The first of them, the cluster-admin clusterrolebinding, binds the system:masters group — exactly matching the identity passed over from the authentication stage. Follow the cluster-admin clusterrolebinding for the system:masters group and the truth surfaces.
Note: RBAC is a role-based access-control model; it defines which users and which groups may do what.
Let's inspect this binding:
$ kubectl describe clusterrolebinding cluster-admin
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
Role:
Kind: ClusterRole
Name: cluster-admin
Subjects:
Kind Name Namespace
---- ---- ---------
Group system:masters
We can see that a clusterrolebinding named cluster-admin binds the cluster-admin ClusterRole to the system:masters Group, granting every user belonging to the system:masters Group the permissions held by the cluster-admin role.
Now let's look at the concrete permissions of the cluster-admin role:
$ kubectl describe clusterrole cluster-admin
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]
[*] [] [*]
Note:
[root@k8s-master ~]
NAME CREATED AT
admin 2021-07-17T11:06:43Z
cluster-admin 2021-07-17T11:06:43Z
edit 2021-07-17T11:06:43Z
flannel 2021-07-18T07:11:15Z
kubeadm:get-nodes 2021-07-17T11:06:45Z
kubernetes-dashboard 2021-07-17T11:34:01Z
...
[root@k8s-master week3]
Name: cluster-admin
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
*.* [] [] [*]
[*] [] [*]
The second PolicyRule row covers non-resource URLs, e.g. checking cluster health.
RBAC
Role-Based Access Control. It is enabled by adding --authorization-mode=RBAC to the apiserver startup parameters; clusters installed with kubeadm have it enabled by default. See the official introduction.
Verify that it is enabled:
$ ps aux |grep apiserver
RBAC mode introduces four resource types:
- Role: a Role grants access only within a single namespace
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: demo
name: pod-reader
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list"]
Note:
[root@k8s-master week3]
- ClusterRole: a ClusterRole can grant the same permissions as a Role, but cluster-wide.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: secret-reader
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "watch", "list"]
[root@k8s-master week3]
clusterrolebindings rbac.authorization.k8s.io false ClusterRoleBinding
clusterroles rbac.authorization.k8s.io false ClusterRole
rolebindings rbac.authorization.k8s.io true RoleBinding
roles rbac.authorization.k8s.io true Role
- RoleBinding: grants the permissions defined in a role to users and groups. A RoleBinding contains subjects (users, groups, or service accounts) and a reference to the role being granted. Use RoleBinding for authorization within a namespace and ClusterRoleBinding for cluster-wide authorization.
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: read-pods
namespace: demo
subjects:
- kind: User
name: jane
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: pod-reader
apiGroup: rbac.authorization.k8s.io
Note: a RoleBinding may reference either a Role or a ClusterRole. When it references a ClusterRole, the subjects' permissions are still confined to the RoleBinding's namespace; to grant across namespaces you need a ClusterRoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: read-secrets
namespace: development
subjects:
- kind: User
name: dave
apiGroup: rbac.authorization.k8s.io
- kind: ServiceAccount
name: dave
namespace: luffy
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
Consider a scenario: a cluster has several namespaces assigned to different administrators, each namespace needing identical permissions. You can then define a single ClusterRole and bind it per namespace with RoleBindings; otherwise you would have to define a Role in every namespace and do a RoleBinding for each.
- ClusterRoleBinding: grants permissions across namespaces.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: read-secrets-global
subjects:
- kind: Group
name: manager
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: ClusterRole
name: secret-reader
apiGroup: rbac.authorization.k8s.io
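To sanity-check bindings like these without hand-crafting API requests, kubectl auth can-i supports impersonation. A minimal sketch against the example subjects above:
# dave may read secrets in development (RoleBinding referencing the ClusterRole)
$ kubectl auth can-i get secrets --as dave -n development
yes
# the same user is denied outside the bound namespace
$ kubectl auth can-i get secrets --as dave -n default
no
# the manager group is allowed everywhere via the ClusterRoleBinding
$ kubectl auth can-i get secrets --as some-user --as-group manager -n default
yes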
Authentication and authorization for kubelet
Inspect the kubelet process:
$ systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Sun 2020-07-05 19:33:36 EDT; 1 day 12h ago
Docs: https://kubernetes.io/docs/
Main PID: 10622 (kubelet)
Tasks: 24
Memory: 60.5M
CGroup: /system.slice/kubelet.service
└─851 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf
Look at /etc/kubernetes/kubelet.conf and decode the embedded certificate:
$ echo xxxxx |base64 -d >kubelet.crt
$ openssl x509 -in kubelet.crt -text
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 9059794385454520113 (0x7dbadafe23185731)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Feb 10 07:33:39 2020 GMT
Not After : Feb 9 07:33:40 2021 GMT
Subject: O=system:nodes, CN=system:node:master-1
Note:
[root@k8s-master week3]
total 32
-rw------- 1 root root 5560 Jul 17 19:05 admin.conf
-rw------- 1 root root 5600 Jul 17 19:05 controller-manager.conf
-rw------- 1 root root 1928 Jul 17 19:06 kubelet.conf
drwxr-xr-x 2 root root 113 Jul 17 19:05 manifests
drwxr-xr-x 3 root root 4096 Jul 17 19:05 pki
-rw------- 1 root root 5548 Jul 17 19:05 scheduler.conf
[root@k8s-master week3]
[root@k8s-master week3]
[root@k8s-master week3]
kubelet.crt: OK
[root@k8s-master week3]
Certificate:
Data:
Version: 3 (0x2)
Serial Number: 0 (0x0)
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Jul 17 11:05:37 2021 GMT
Not After : Jul 15 11:05:37 2031 GMT
Subject: CN=kubernetes
Subject Public Key Info:
...
-----BEGIN CERTIFICATE-----
xxx
-----END CERTIFICATE-----
Common openssl flags seen above:
-in: input file
-d: decode (for base64/enc operations)
-a/-base64: treat the data as base64
-x509: output a (self-signed) certificate instead of a certificate request, or operate on X.509 certificates
-key: the private key to use
-out: output file
-days: certificate validity in days
-text: print the contents in human-readable text form
We get the content we expected:
Subject: O=system:nodes, CN=system:node:k8s-master
We know that k8s treats O as the group of the request, so if any permissions are bound to this group they must show up in some ClusterRoleBinding. Let's search for ClusterRoleBindings that reference the system:nodes group:
$ kubectl get clusterrolebinding -oyaml|grep -n10 system:nodes
178- resourceVersion: "225"
179- selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/kubeadm%3Anode-autoapprove-certificate-rotation
180- uid: b4303542-d383-4b62-a1e9-02f2cefa2c20
181- roleRef:
182- apiGroup: rbac.authorization.k8s.io
183- kind: ClusterRole
184- name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
185- subjects:
186- - apiGroup: rbac.authorization.k8s.io
187- kind: Group
188: name: system:nodes
189-- apiVersion: rbac.authorization.k8s.io/v1
190- kind: ClusterRoleBinding
191- metadata:
192- creationTimestamp: "2021-06-06T02:39:46Z"
193- managedFields:
194- - apiVersion: rbac.authorization.k8s.io/v1
195- fieldsType: FieldsV1
196- fieldsV1:
197- f:roleRef:
198- f:apiGroup: {}
[root@k8s-master week3]
Name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
Labels: kubernetes.io/bootstrapping=rbac-defaults
Annotations: rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
Resources Non-Resource URLs Resource Names Verbs
--------- ----------------- -------------- -----
certificatesigningrequests.certificates.k8s.io/selfnodeclient [] [] [create]
The result is a bit unexpected: apart from system:certificates.k8s.io:certificatesigningrequests:selfnodeclient, no system-related role bindings turn up, which contradicts our expectation. Digging through the documentation uncovers the following:
| Default ClusterRole | Default ClusterRoleBinding | Description |
|---|---|---|
| system:kube-scheduler | system:kube-scheduler user | Allows access to the resources required by the scheduler component. |
| system:volume-scheduler | system:kube-scheduler user | Allows access to the volume resources required by the kube-scheduler component. |
| system:kube-controller-manager | system:kube-controller-manager user | Allows access to the resources required by the controller manager component. The permissions required by individual controllers are detailed in the controller roles. |
| system:node | None | Allows access to resources required by the kubelet, including read access to all secrets, and write access to all pod status objects. You should use the Node authorizer and NodeRestriction admission plugin instead of the system:node role, and allow granting API access to kubelets based on the Pods scheduled to run on them. The system:node role only exists for compatibility with Kubernetes clusters upgraded from versions prior to v1.8. |
| system:node-proxier | system:kube-proxy user | Allows access to the resources required by the kube-proxy component. |
Roughly, it says: a system:node role used to be defined so that the kubelet could access the resources it needs, including read access to all secrets and write access to all pod status objects. From v1.8 on, the Node authorizer and the NodeRestriction admission plugin are recommended in place of this role.
We are running 1.19; check the authorization configuration:
$ ps axu|grep apiserver
kube-apiserver --authorization-mode=Node,RBAC --enable-admission-plugins=NodeRestriction
The official description of the Node authorizer:
Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets.
In future releases, the node authorizer may add or remove permissions to ensure kubelets have the minimal set of permissions required to operate correctly.
In order to be authorized by the Node authorizer, kubelets must use a credential that identifies them as being in the system:nodes group, with a username of system:node:<nodeName>
Summary: kubelet and kubectl are authorized through two different mechanisms: kubelet goes through the Node authorization mode, while kubectl goes through RBAC.
Service Accounts and calling the K8s API
Authentication:
Authorization:
As mentioned earlier, authentication can be done with certificates or with a ServiceAccount. When building on top of k8s (secondary development), ServiceAccount + RBAC is the usual choice. How did we do it earlier when accessing the dashboard?
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin
namespace: kubernetes-dashboard
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: admin
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
kind: ClusterRole
name: cluster-admin
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: admin
namespace: kubernetes-dashboard
Note:
[root@k8s-master week3]
NAME STATUS AGE
default Active 2d21h
kube-node-lease Active 2d21h
kube-public Active 2d21h
kube-system Active 2d21h
kubernetes-dashboard Active 2d21h
luffy Active 2d21h
[root@k8s-master week3]
NAME SECRETS AGE
admin 1 2d21h
default 1 2d21h
kubernetes-dashboard 1
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2021-07-17T11:34:39Z"
name: admin
namespace: kubernetes-dashboard
resourceVersion: "4976"
selfLink: /api/v1/namespaces/kubernetes-dashboard/serviceaccounts/admin
uid: 8e24c042-5a2c-49bc-9f92-bfdf72eaf6c0
secrets:
- name: admin-token-j6gs8
[root@k8s-master week3]
NAME TYPE DATA AGE
admin-token-j6gs8 kubernetes.io/service-account-token 3 2d21h
default-token-xvd2w kubernetes.io/service-account-token 3 2d21h
kubernetes-dashboard-certs Opaque 0 2d21h
kubernetes-dashboard-csrf Opaque 1 2d21h
kubernetes-dashboard-key-holder Opaque 2 2d21h
kubernetes-dashboard-token-gszzn kubernetes.io/service-account-token 3 2d21h
Let's take a look:
$ kubectl -n kubernetes-dashboard get sa admin -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
creationTimestamp: "2020-04-01T11:59:21Z"
name: admin
namespace: kubernetes-dashboard
resourceVersion: "1988878"
selfLink: /api/v1/namespaces/kubernetes-dashboard/serviceaccounts/admin
uid: 639ecc3e-74d9-11ea-a59b-000c29dfd73f
secrets:
- name: admin-token-lfsrf
Note that the ServiceAccount has a secret named admin-token-lfsrf bound to it by default. Inspect that secret:
$ kubectl -n kubernetes-dashboard describe secret admin-token-lfsrf
Name: admin-token-lfsrf
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: 639ecc3e-74d9-11ea-a59b-000c29dfd73f
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1025 bytes
namespace: 4 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZW1vIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImFkbWluLXRva2VuLWxmc3JmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNjM5ZWNjM2UtNzRkOS0xMWVhLWE1OWItMDAwYzI5ZGZkNzNmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50OmRlbW86YWRtaW4ifQ.ffGCU4L5LxTsMx3NcNixpjT6nLBi-pmstb4I-W61nLOzNaMmYSEIwAaugKMzNR-2VwM14WbuG04dOeO67niJeP6n8-ALkl-vineoYCsUjrzJ09qpM3TNUPatHFqyjcqJ87h4VKZEqk2qCCmLxB6AGbEHpVFkoge40vHs56cIymFGZLe53JZkhu3pwYuS4jpXytV30Ad-HwmQDUu_Xqcifni6tDYPCfKz2CZlcOfwqHeGIHJjDGVBKqhEeo8PhStoofBU6Y4OjObP7HGuTY-Foo4QindNnpp0QU6vSb7kiOiQ4twpayybH8PTf73dtdFt46UF6mGjskWgevgolvmO8A
Note:
[root@k8s-master week3]
Name: admin-token-j6gs8
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: 8e24c042-5a2c-49bc-9f92-bfdf72eaf6c0
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IllJS0pRTmVqWUZqVDdXYnhKbDRxTl9yWHVYdk5QVFNmR2tLOEM0QzU1RDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1qNmdzOCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjhlMjRjMDQyLTVhMmMtNDliYy05ZjkyLWJmZGY3MmVhZjZjMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.WJm16yoTdmN1srZ6M4o__6mt7e_NUm5_Gl8H3oblsohA_RVr5T9ZKQEciNK63b3acZO2gxo0bjX8zbd_mnAQH4LBJ7XaiMJxbFvblbXC3DEN8aawNO_8J8twG6pN3Hanhk8gUFCHmd8Lj8k5Q59BDW1yaIv05u6LTCSQwY1zoFwup-Fk2-LEFcLgzyWTtN3SJG_OTkM1XvaSMTGR-KJi_KTg29nXkcrCPuKAEq9QQzFYeulfZt0QWknF67Bn8OyoKSY1o6m1SrsHHneSeT2Rebww-qjd-9rCwCj7apGkSoyLFByrSTKlgX0nv43yaYuHsPIBP4msBx_iZsaq1-APHw
Now restrict access to pod resources in the luffy namespace only:
$ cat luffy-admin-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: luffy-pods-admin
namespace: luffy
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
namespace: luffy
name: pods-reader-writer
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: pods-reader-writer
namespace: luffy
subjects:
- kind: ServiceAccount
name: luffy-pods-admin
namespace: luffy
roleRef:
kind: Role
name: pods-reader-writer
apiGroup: rbac.authorization.k8s.io
[root@k8s-master week3]
[root@k8s-master week3]
serviceaccount/luffy-pods-admin created
role.rbac.authorization.k8s.io/pods-reader-writer created
rolebinding.rbac.authorization.k8s.io/pods-reader-writer created
[root@k8s-master week3]
NAME SECRETS AGE
default 1 5d
luffy-pods-admin 1 2d2h
[root@k8s-master week3]
NAME TYPE DATA AGE
default-token-pmv6k kubernetes.io/service-account-token 3 5d
luffy-pods-admin-token-ffhfh kubernetes.io/service-account-token 3 2d2h
myblog Opaque 2 4d22h
3 2d21h
[root@k8s-master week3]
Name: luffy-pods-admin-token-ffhfh
Namespace: luffy
Labels: <none>
Annotations: kubernetes.io/service-account.name: luffy-pods-admin
kubernetes.io/service-account.uid: b830e127-ef77-40ba-a242-bf844b6aa42c
Type: kubernetes.io/service-account-token
Data
====
namespace: 5 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IllJS0pRTmVqWUZqVDdXYnhKbDRxTl9yWHVYdk5QVFNmR2tLOEM0QzU1RDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsdWZmeSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsdWZmeS1wb2RzLWFkbWluLXRva2VuLWZmaGZoIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imx1ZmZ5LXBvZHMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiODMwZTEyNy1lZjc3LTQwYmEtYTI0Mi1iZjg0NGI2YWE0MmMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6bHVmZnk6bHVmZnktcG9kcy1hZG1pbiJ9.Oi6phpFhwGCF5IT9ep10j0c_QUWK5vPNCEmdd36xsllm2se8IBnWKbm1R9jl8p4Se5uNnTgOAHEagm3rkAv6uUG8ERyz19M-4WcrEilL7WznqAvNvlpW2-7wSZ1_rFuGbizoXlH5Lwyvkj3odA1y1yBNAl2P2ZyfQtwOMEpPHTaF1LFjlFW478NecfkQgxmElk9FLT6wjcxbN1U85-P4RQZ6r_-PHsvMNtlFl62vvDC4ka8bVw0R2TYR3-zbZQFD4QVCU2EzoGFK6FmnJjOHwXGarNTB3aVeLiQ81NWA-8TNJQFsmZOwYuYPiJ9EjdB-yC5CBAnHZjaohc5qkjNLow
ca.crt: 1066 bytes
Demonstrate the permissions:
$ kubectl -n luffy describe secrets luffy-pods-admin-token-prr25
...
token: eyJhbGciOiJSUzI1NiIsImtpZCI6InBtQUZfRl8ycC03TTBYaUUwTnJVZGpvQWU0cXZ5M2FFbjR2ZjkzZVcxOE0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsdWZmeSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsdWZmeS1hZG1pbi10b2tlbi1wcnIyNSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJsdWZmeS1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImFhZDA0MTU3LTliNzMtNDJhZC1hMGU4LWVmOTZlZDU3Yzg1ZiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpsdWZmeTpsdWZmeS1hZG1pbiJ9.YWckylE5wlKITKrVltXY4VPKvZP9ar5quIT5zq9N-0_FnDkLIBX7xOyFvZA5Wef0wSFSZe3e9FwrO1UbPsmK7cZn74bhH8cNdoH_YVbIVT3-6tIOlCA_Bc8YypGE1gl-ZvLOIPV7WnRQsWpWtZtqfKBSkwLAHgWoxcx_d1bOcyTOdPmsW224xcBxjYwi6iRUtjTJST0LzOcAOCPDZq6-lqYUwnxLO_afxwg71BGX4etE48Iny8TxSEIs1VJRahoabC7hVOs17ujEm5loTDSpfuhae51qSDg8xeYwRHdM42aLUmc-wOvBWauHa5EHbH9rWPAnpaGIwF8QvnLszqp4QQ
...
$ curl -k -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IllJS0pRTmVqWUZqVDdXYnhKbDRxTl9yWHVYdk5QVFNm2tLOEM0QzU1RDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJsdWZmeSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJsdWZmeS1wb2RzLWFkbWluLXRva2VuLWZmaGZoIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imx1ZmZ5LXBvZHMtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJiODMwZTEyNy1lZjc3LTQwYmEtYTI0Mi1iZjg0NGI2YWE0MmMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6bHVmZnk6bHVmZnktcG9kcy1hZG1pbiJ9.Oi6phpFhwGCF5IT9ep10j0c_QUWK5vPNCEmdd36xsllm2se8IBnWKbm1R9jl8p4Se5uNnTgOAHEagm3rkAv6uUG8ERyz19M-4WcrEilL7WznqAvNvlpW2-7wSZ1_rFuGbizoXlH5Lwyvkj3odA1y1yBNAl2P2ZyfQtwOMEpPHTaF1LFjlFW478NecfkQgxmElk9FLT6wjcxbN1U85-P4RQZ6r_-PHsvMNtlFl62vvDC4ka8bVw0R2TYR3-zbZQFD4QVCU2EzoGFK6FmnJjOHwXGarNTB3aVeLiQ81NWA-8TNJQFsmZOwYuYPiJ9EjdB-yC5CBAnHZjaohc5qkjNLow" https://10.0.1.5:6443/api/v1/namespaces/luffy/pods?limit=500
[root@k8s-master week3]
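Because the RoleBinding is scoped to the luffy namespace, the same token should be rejected elsewhere. A sketch, reusing the secret name from above (the apiserver address matches this environment):
$ TOKEN=$(kubectl -n luffy get secret luffy-pods-admin-token-ffhfh -o jsonpath='{.data.token}' | base64 -d)
$ curl -k -H "Authorization: Bearer $TOKEN" https://10.0.1.5:6443/api/v1/namespaces/default/pods
...
"message": "pods is forbidden: User \"system:serviceaccount:luffy:luffy-pods-admin\" cannot list resource \"pods\" in API group \"\" in the namespace \"default\"",
...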
Creating a kubeconfig file for user authentication and authorization
Issue a certificate pair:
$ openssl genrsa -out luffy.key 2048
$ openssl req -new -key luffy.key -out luffy.csr -subj "/O=admin:luffy/CN=luffy-admin"
$ cat extfile.conf
[ v3_ca ]
keyUsage = critical, digitalSignature, keyEncipherment
extendedKeyUsage = clientAuth
$ openssl x509 -req -in luffy.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -sha256 -out luffy.crt -extensions v3_ca -extfile extfile.conf -days 3650
Note:
$ mkdir cret && cd cret/
[root@k8s-master cret]
Generating RSA private key, 2048 bit long modulus
................+++
....................................+++
e is 65537 (0x10001)
[root@k8s-master cret]
[root@k8s-master cret]
[root@k8s-master cret]
Signature ok
subject=/O=admin:luffy/CN=luffy-admin
Getting CA Private Key
[root@k8s-master cret]
total 16
-rw-r--r-- 1 root root 95 Jul 22 20:23 extfile.conf
-rw-r--r-- 1 root root 1074 Jul 22 20:23 luffy.crt
-rw-r--r-- 1 root root 924 Jul 22 20:23 luffy.csr
-rw-r--r-- 1 root root 1679 Jul 22 20:23 luffy.key
[root@k8s-master cret]
Certificate:
Data:
Version: 3 (0x2)
Serial Number:
c6:13:7e:53:21:80:e4:3d
Signature Algorithm: sha256WithRSAEncryption
Issuer: CN=kubernetes
Validity
Not Before: Jul 22 12:23:47 2021 GMT
Not After : Jul 20 12:23:47 2031 GMT
Subject: O=admin:luffy, CN=luffy-admin
Configure the kubeconfig file:
$ kubectl config set-cluster luffy-cluster --certificate-authority=/etc/kubernetes/pki/ca.crt --embed-certs=true --server=https://172.21.51.143:6443 --kubeconfig=luffy.kubeconfig
$ kubectl config set-credentials luffy-admin --client-certificate=luffy.crt --client-key=luffy.key --embed-certs=true --kubeconfig=luffy.kubeconfig
$ kubectl config set-context luffy-context --cluster=luffy-cluster --user=luffy-admin --kubeconfig=luffy.kubeconfig
$ kubectl config use-context luffy-context --kubeconfig=luffy.kubeconfig
Note:
[root@k8s-master cret]
Cluster "luffy-cluster" set.
[root@k8s-master cret]
[root@k8s-master cret]
[root@k8s-master cret]
Context "luffy-context" created.
[root@k8s-master cret]
Switched to context "luffy-context".
[root@k8s-master cret]
luffy.kubeconfig
Verify:
$ export KUBECONFIG=luffy.kubeconfig
$ kubectl get po
Error from server (Forbidden): pods is forbidden: User "luffy-admin" cannot list resource "pods" in API group "" in the namespace "default"
[root@k8s-master cret]
[root@k8s-master cret]
Error from server (Forbidden): pods is forbidden: User "luffy-admin" cannot list resource "pods" in API group "" in the namespace "default"
Grant the luffy user access to the luffy namespace:
$ cat luffy-admin-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
namespace: luffy
name: luffy-admin
rules:
- apiGroups: [""]
resources: ["*"]
verbs: ["*"]
$ cat luffy-admin-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: luffy-admin
namespace: luffy
subjects:
- kind: User
name: luffy-admin
apiGroup: rbac.authorization.k8s.io
roleRef:
kind: Role
name: luffy-admin
apiGroup: rbac.authorization.k8s.io
Note:
[root@k8s-master cret]
[root@k8s-master cret]
[root@k8s-master cret]
role.rbac.authorization.k8s.io/luffy-admin created
[root@k8s-master cret]
[root@k8s-master cret]
rolebinding.rbac.authorization.k8s.io/luffy-admin created
[root@k8s-master cret]
[root@k8s-master cret]
NAME READY STATUS RESTARTS AGE
myblog-6759fcc46f-7jgtf 1/1 Running 23 35h
myblog-6759fcc46f-lpp9t 1/1 Running 13 35h
myblog-6759fcc46f-qckrp 1/1 Running 12 35h
mysql-58d95d459c-jj4sx 1/1 Running 2 4d5h
Dynamic scaling of applications with HPA
Introduction to the HPA controller
When resource usage climbs, Pods can be scaled manually with:
$ kubectl -n luffy scale deployment myblog --replicas=2
But that is a manual step. In a real project we want the system to sense the load and scale automatically. Kubernetes provides a resource object for exactly that: the Horizontal Pod Autoscaler, HPA for short.
Basic principle: the HPA controller monitors the load of all Pods managed by the target controller and decides from the observed changes whether the replica count needs adjusting.
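The calculation behind that decision, as documented for the HPA algorithm, is:
desiredReplicas = ceil( currentReplicas * currentMetricValue / desiredMetricValue )
For example, with 2 replicas, current CPU usage of 200m and a target of 100m, the HPA scales to ceil(2 * 200/100) = 4 replicas, subject to the configured minReplicas/maxReplicas bounds.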
HPA comes in two API versions:
- autoscaling/v1: scaling on the CPU metric only; the stable version
- autoscaling/v2beta1: adds scaling on memory and on user-defined metrics
How does it get the Pods' monitoring data?
- k8s < 1.8: heapster (fully deprecated in 1.11)
- k8s >= 1.8: metrics-server
Question: why was heapster abandoned in favor of metrics-server?
In the heapster era, the apiserver forwarded metric requests to the in-cluster heapster service through the apiserver proxy, and that proxy approach had problems:
- the request path looked like http://kubernetes_master_address/api/v1/namespaces/namespace_name/services/service_name[:port_name]/proxy
- the proxy merely forwards requests; it is meant for troubleshooting, is not very stable, and the backend version is uncontrolled
- heapster's API lacked the complete authentication/authorization and client integration that the apiserver offers
- Pod monitoring data is a core metric (it drives HPA scheduling) and deserves the same standing as Pods themselves: metrics should exist as a resource, e.g. metrics.k8s.io, the so-called Metric API
So starting from version 1.8 the project gradually deprecated heapster and introduced the Metric API concept described above; metrics-server is the official implementation of that concept, fetching metrics from the kubelet and replacing heapster.
Metrics Server exposes the monitoring data through a standard Kubernetes API; for example, the metrics of a single Pod:
https://172.21.51.143:6443/apis/metrics.k8s.io/v1beta1/namespaces/<namespace-name>/pods/<pod-name>
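Once metrics-server is installed (see below), the same data can be pulled through the apiserver with kubectl, which takes care of authentication; for example:
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods"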
Note:
[root@k8s-master week3]
The collection flow today: (diagram)
Metric Server
The official description:
...
Metric server collects metrics from the Summary API, exposed by Kubelet on each node.
Metrics Server registered in the main API server through Kubernetes aggregator, which was introduced in Kubernetes 1.7
...
Installation
Official repository: https://github.com/kubernetes-sigs/metrics-server
Depending on your cluster setup, you may also need to change flags passed to the Metrics Server container. Most useful flags:
- --kubelet-preferred-address-types - The priority of node address types used when determining an address for connecting to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP])
- --kubelet-insecure-tls - Do not verify the CA of serving certificates presented by Kubelets. For testing purposes only.
- --requestheader-client-ca-file - Specify a root certificate bundle for verifying client certificates on incoming requests.
$ wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.4/components.yaml
Modify the args:
...
130 containers:
131 - args:
132 - --cert-dir=/tmp
133 - --secure-port=4443
134 - --kubelet-insecure-tls
135 - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
136 - --kubelet-use-node-status-port
137 image: willdockerhub/metrics-server:v0.4.4
138 imagePullPolicy: IfNotPresent
...
Install:
$ kubectl apply -f components.yaml
$ kubectl -n kube-system get pods
$ kubectl top nodes
Note:
[root@k8s-master week3]
[root@k8s-master week3]
[root@k8s-master week3]
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
[root@k8s-master week3]
metrics-server-7dbbc69d95-j6gkd 0/1 ContainerCreating 0 105s
[root@k8s-master week3]
[root@k8s-master week3]
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
k8s-master 530m 3% 1217Mi 15%
k8s-slave1 248m 1% 770Mi 9%
k8s-slave2 218m 1% 507Mi 6%
[root@k8s-master week3]
NAME CPU(cores) MEMORY(bytes)
myblog-6759fcc46f-7jgtf 4m 72Mi
myblog-6759fcc46f-lpp9t 3m 71Mi
myblog-6759fcc46f-qckrp 2m 71Mi
mysql-58d95d459c-jj4sx 6m 227Mi
Metric collection by the kubelet
Both heapster and metrics-server only relay and aggregate the data; both call the kubelet's API to obtain it. Inside the kubelet the actual collection is done by the embedded cadvisor module, and you can query port 10250 on a node to fetch the monitoring data:
- Kubelet summary metrics: https://127.0.0.1:10250/metrics, exposing node- and pod-level aggregates
- Cadvisor metrics: https://127.0.0.1:10250/metrics/cadvisor, exposing container-level data
Example call:
$ curl -k -H "Authorization: Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6InhXcmtaSG5ZODF1TVJ6dUcycnRLT2c4U3ZncVdoVjlLaVRxNG1wZ0pqVmcifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi1xNXBueiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImViZDg2ODZjLWZkYzAtNDRlZC04NmZlLTY5ZmE0ZTE1YjBmMCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.iEIVMWg2mHPD88GQ2i4uc_60K4o17e39tN0VI_Q_s3TrRS8hmpi0pkEaN88igEKZm95Qf1qcN9J5W5eqOmcK2SN83Dd9dyGAGxuNAdEwi0i73weFHHsjDqokl9_4RGbHT5lRY46BbIGADIphcTeVbCggI6T_V9zBbtl8dcmsd-lD_6c6uC2INtPyIfz1FplynkjEVLapp_45aXZ9IMy76ljNSA8Uc061Uys6PD3IXsUD5JJfdm7lAt0F7rn9SdX1q10F2lIHYCMcCcfEpLr4Vkymxb4IU4RCR8BsMOPIO_yfRVeYZkG4gU2C47KwxpLsJRrTUcUXJktSEPdeYYXf9w" https://localhost:10250/metrics
Note:
[root@k8s-master week3]
Name: admin-token-j6gs8
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: 8e24c042-5a2c-49bc-9f92-bfdf72eaf6c0
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: xxxxxxxxx
[root@k8s-master week3]
[root@k8s-master week3]
Although the kubelet exposes the metric endpoint, the actual collection logic lives in its built-in cAdvisor module. cadvisor used to be a standalone component; since k8s 1.12 its dedicated listening port was removed from k8s and all monitoring data is served through the kubelet's API.
When collecting metrics, cadvisor actually calls the runc/libcontainer library, and libcontainer is a wrapper around cgroup files; in other words cadvisor is itself just a forwarder whose data comes from cgroup files.
The values in the cgroup files are the ultimate source of the monitoring data, for example the files read below.
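A sketch of reading those files directly on a node, assuming the cgroup v1 hierarchy that kubeadm clusters of this era use (the pod UID and container ID path segments are placeholders):
# current memory usage of one container, in bytes
$ cat /sys/fs/cgroup/memory/kubepods/burstable/pod<pod-uid>/<container-id>/memory.usage_in_bytes
75247616
# cumulative CPU time consumed, in nanoseconds
$ cat /sys/fs/cgroup/cpuacct/kubepods/burstable/pod<pod-uid>/<container-id>/cpuacct.usage
103428163165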
Metrics data flow: (diagram)
Question:
Metrics Server is a standalone service that only implements its own API internally; how is that API exposed in the standard kubernetes API format?
kube-aggregator
The kube-aggregator and the Metric-Server implementation
kube-aggregator is an extension mechanism for the apiserver's API: it lets developers write their own service and register it into the k8s API, i.e. an extension API.
Define an APIService object:
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
name: v1beta1.luffy.k8s.io
spec:
group: luffy.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: service-A
namespace: luffy
port: 443
version: v1beta1
versionPriority: 100
k8s then automatically proxies requests for the following URL on our behalf:
proxyPath := "/apis/" + apiService.Spec.Group + "/" + apiService.Spec.Version
That is, https://172.21.51.143:6443/apis/luffy.k8s.io/v1beta1/xxxx is forwarded to our service-A, and service-A only needs to implement https://service-A/apis/luffy.k8s.io/v1beta1/xxxx.
Now look at how metrics-server is wired up:
$ kubectl get apiservice
NAME SERVICE AVAILABLE
v1beta1.metrics.k8s.io kube-system/metrics-server True
$ kubectl get apiservice v1beta1.metrics.k8s.io -oyaml
...
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
port: 443
version: v1beta1
versionPriority: 100
...
$ kubectl -n kube-system get svc metrics-server
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metrics-server ClusterIP 10.110.111.146 <none> 443/TCP 11h
$ curl -k -H "Authorization: Bearer xxxx" https://10.110.111.146
{
"paths": [
"/apis",
"/apis/metrics.k8s.io",
"/apis/metrics.k8s.io/v1beta1",
"/healthz",
"/healthz/healthz",
"/healthz/log",
"/healthz/ping",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/metrics",
"/openapi/v2",
"/version"
]
$ kubectl -n luffy top pods -v=6
$ curl -k -H "Authorization: Bearer xxxx" https://10.110.111.146/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods/myblog-5d9ff54d4b-4rftt
$ curl -k -H "Authorization: Bearer xxxx" https://172.21.51.143:6443/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods/myblog-5d9ff54d4b-4rftt
Note:
[root@k8s-master week3]
v1beta1.metrics.k8s.io kube-system/metrics-server True 59m
[root@k8s-master week3]
...
spec:
group: metrics.k8s.io
groupPriorityMinimum: 100
insecureSkipTLSVerify: true
service:
name: metrics-server
namespace: kube-system
port: 443
version: v1beta1
versionPriority: 100
...
[root@k8s-master week3]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
metrics-server ClusterIP 10.100.57.78 <none> 443/TCP 65m
[root@k8s-master week3]
I0722 22:24:40.311311 101235 loader.go:375] Config loaded from file: /root/.kube/config
I0722 22:24:40.323724 101235 round_trippers.go:444] GET https://10.0.1.5:6443/api?timeout=32s 200 OK in 10 milliseconds
I0722 22:24:40.326095 101235 round_trippers.go:444] GET https://10.0.1.5:6443/apis?timeout=32s 200 OK in 1 milliseconds
I0722 22:24:40.330018 101235 round_trippers.go:444] GET https://10.0.1.5:6443/apis/metrics.k8s.io/v1beta1/namespaces/luffy/pods 200 OK in 2 milliseconds
NAME CPU(cores) MEMORY(bytes)
myblog-6759fcc46f-7jgtf 3m 71Mi
myblog-6759fcc46f-x9p88 2m 71Mi
myblog-6759fcc46f-xqfh4 3m 71Mi
mysql-58d95d459c-jj4sx 3m 227Mi
[root@k8s-master week3]
[root@k8s-master week3]
[root@k8s-master week3]
Name: admin-token-j6gs8
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin
kubernetes.io/service-account.uid: 8e24c042-5a2c-49bc-9f92-bfdf72eaf6c0
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: xxxx
[root@k8s-master week3]
To summarize:
What is metrics-server?
It aggregates the cluster's core monitoring data; put simply, it holds the cluster's and nodes' monitoring data and provides an API for analysis and consumption.
metrics-server mainly supplies data to core k8s components such as kube-scheduler and the HorizontalPodAutoscaler, and to tools such as the kubectl top command and the Dashboard.
Beyond that, the metrics API can be extended with additional monitoring metrics, for instance via the popular k8s-prometheus-adapter.
HPA in practice
Dynamic scaling on CPU and memory
Create the hpa object:
$ cat hpa-myblog.yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
name: hpa-myblog
namespace: luffy
spec:
maxReplicas: 3
minReplicas: 1
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: myblog
metrics:
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: 80
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: 20
$ kubectl -n luffy autoscale deployment myblog --cpu-percent=10 --min=1 --max=3
The target Deployment must configure resources.requests; otherwise no monitoring data can be obtained and the HPA cannot scale it.
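For reference, a minimal requests block on the target Deployment's container; the numbers are illustrative:
    spec:
      containers:
      - name: myblog
        image: myblog   # as deployed earlier
        resources:
          requests:
            cpu: 100m
            memory: 100Mi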
Note:
[root@k8s-master week3]
[root@k8s-master week3]
horizontalpodautoscaler.autoscaling/hpa-myblog created
[root@k8s-master week3]
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hpa-myblog Deployment/myblog 71%/80%, 5%/20% 1 3 3 25s
Load-test to verify:
$ yum -y install httpd-tools
$ kubectl -n luffy get svc myblog
myblog ClusterIP 10.104.245.225 <none> 80/TCP 6d18h
$ kubectl -n luffy scale deploy myblog --replicas=1
$ ab -n 100000 -c 1000 http://10.104.245.225/blog/index/
$ kubectl get hpa
$ kubectl -n luffy get pods
Note:
[root@k8s-master week3]
[root@k8s-master week3]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
myblog ClusterIP 10.105.189.209 <none> 80/TCP 4d6h
[root@k8s-master week3]
deployment.apps/myblog scaled
ab -n 100000 -c 1000 http://10.105.189.209/blog/index/
[root@k8s-master week3]
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
hpa-myblog Deployment/myblog 68%/80%, 95%/20% 1 3 3 8m55s
hpa-myblog Deployment/myblog 68%/80%, 78%/20% 1 3 3 9m8s
hpa-myblog Deployment/myblog 4%/80%, 0%/20% 1 3 3 9m38s
[root@k8s-master week3]
NAME READY STATUS RESTARTS AGE
myblog-6759fcc46f-7jgtf 1/1 Running 24 36h
myblog-6759fcc46f-x9p88 1/1 Running 1 4m17s
myblog-6759fcc46f-xqfh4 1/1 Running 1 4m17s
Once the load drops, scale-down waits a default of 5 minutes, configurable via the following controller-manager flag:
--horizontal-pod-autoscaler-downscale-stabilization
The value for this option is a duration that specifies how long the autoscaler has to wait before another downscale operation can be performed after the current one has completed. The default value is 5 minutes (5m0s).
Scale-down is gradual: the value is the wait between one completed downscale operation and the next, so dropping from 3 replicas to 1 takes roughly 2 * 5min = 10 minutes.
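On a kubeadm cluster the flag can be tuned in the kube-controller-manager static pod manifest; the kubelet restarts the pod automatically when the file changes. A sketch:
# /etc/kubernetes/manifests/kube-controller-manager.yaml
spec:
  containers:
  - command:
    - kube-controller-manager
    - --horizontal-pod-autoscaler-downscale-stabilization=1m0s   # default is 5m0s
    ...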
Dynamic scaling on custom metrics
Besides autoscaling on CPU and memory, we can scale on custom monitoring metrics. For that we use the Prometheus Adapter: Prometheus monitors application load and the cluster's own metrics, and the adapter lets us turn those collected metrics into scaling policies. The metrics are exposed through the APIServer, so HPA resource objects can consume them directly.
Architecture: (diagram)
Recap
HPA, the HorizontalPodAutoscaler (yet another resource type), provides horizontal Pod autoscaling. Autoscaling divides into horizontal scaling (replica count) and vertical scaling (resources per instance); HPA is the former. It operates on the Pods behind an RC, RS, or Deployment, comparing the observed CPU (or other) usage with the user's target to decide whether the instance count should grow or shrink.
Metrics-Server obtains the monitoring data via the kubelet.
That data ultimately comes from cgroup files such as those under /sys/fs/cgroup/memory/.
This section covered the HPA controller, the metrics-server implementation, the kubelet's metric collection, and CPU- and memory-based dynamic scaling.
Attaching Kubernetes to distributed storage
A quick introduction to PV and PVC
Storage in k8s exists so that data survives Pod rebuilds. The simple persistence options are listed below:
- emptyDir
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: webserver
volumeMounts:
- mountPath: /cache
name: cache-volume
- image: k8s.gcr.io/test-redis
name: redis
volumeMounts:
- mountPath: /data
name: cache-volume
volumes:
- name: cache-volume
emptyDir: {}
- containers inside the Pod share the volume's data
- tied to the Pod's lifecycle: when the Pod is destroyed the data is lost
- data survives automatic container rebuilds within the Pod
- hostPath
apiVersion: v1
kind: Pod
metadata:
name: test-pod
spec:
containers:
- image: k8s.gcr.io/test-webserver
name: test-container
volumeMounts:
- mountPath: /test-pod
name: test-volume
volumes:
- name: test-volume
hostPath:
path: /data
type: Directory
Usually used together with nodeSelector.
- nfs storage
...
volumes:
- name: redisdata
nfs:
server: 192.168.31.241
path: /data/redis
readOnly: false
...
Note: those are the three basic approaches.
Volumes come in many types (see https://kubernetes.io/docs/concepts/storage/volumes/#types-of-volumes), each backed by a different storage implementation. To hide the backend details and make storage consumption by Pods simpler and more uniform, k8s introduces two resource types: PV and PVC.
A PersistentVolume (persistent volume, PV) is an abstraction over the underlying storage, tied to the concrete shared-storage technology (Ceph, GlusterFS, NFS, and so on), each integrated through a plugin mechanism. For example, a PV fronting NFS storage:
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
path: /data/k8s
server: 172.21.51.55
- capacity: storage capacity. Currently only storage space is supported (storage=1Gi here); IOPS, throughput and similar settings may be added later.
- accessModes: how the PV may be mounted, describing an application's access rights to the storage:
  - ReadWriteOnce (RWO): read-write, mountable by a single node only
  - ReadOnlyMany (ROX): read-only, mountable by multiple nodes
  - ReadWriteMany (RWX): read-write, mountable by multiple nodes
- persistentVolumeReclaimPolicy: the PV's reclaim policy; currently only NFS and HostPath support reclaiming:
  - Retain: keep the data; an administrator must clean it up manually
  - Recycle: wipe the PV's data, equivalent to running rm -rf /thevolume/*
  - Delete: the backing storage deletes the volume itself; common with cloud-provider storage services such as AWS EBS
Because a PV talks directly to the underlying storage, it provides storage resources to Pods much like a Node provides compute resources (CPU and memory). A PV is therefore not namespaced; it is a cluster-level resource. For a Pod to use a PV, it creates a PVC and mounts that into the Pod.
PVC stands for PersistentVolumeClaim: a user's claim on storage. Once created, it binds one-to-one with a PV. Storage consumers need not care about the backend implementation details; they simply use the PVC.
Note:
A PersistentVolume (PV) is a piece of networked storage in the cluster provisioned by an administrator. It is a cluster resource, just like a node. PVs are volume plugins, like Volumes, but their lifecycle is independent of any individual Pod that uses them.
A PersistentVolumeClaim (PVC) is a user's request for storage, analogous to a Pod: Pods consume node resources, PVCs consume PV resources. Pods request specific levels of resources (CPU and memory); claims request a specific size and access mode (for example read-write-once or read-only-many).
A PVC consumes PV resources; PVC and PV bind one-to-one.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-nfs
namespace: default
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
The Pod then uses it like this:
...
spec:
containers:
- name: nginx
image: nginx:alpine
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumes:
- name: www
persistentVolumeClaim:
claimName: pvc-nfs
...
Managing NFS volumes with PV and PVC in practice
Environment setup
NFS server: 172.21.51.55
$ yum -y install nfs-utils rpcbind
$ mkdir -p /data/k8s && chmod 755 /data/k8s
$ echo '/data/k8s *(insecure,rw,sync,no_root_squash)'>>/etc/exports
$ systemctl enable rpcbind && systemctl start rpcbind
$ systemctl enable nfs && systemctl start nfs
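Before mounting from the nodes, the export can be confirmed with showmount (part of nfs-utils):
$ showmount -e 172.21.51.55
Export list for 172.21.51.55:
/data/k8s *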
Note:
[root@node1 ~]
[root@node1 ~]
[root@node1 ~]
[root@node1 ~]
[root@node1 ~]
NFS clients: the k8s cluster nodes (run on both master and slave nodes)
$ yum -y install nfs-utils rpcbind
$ mkdir /nfsdata
$ mount -t nfs 172.21.51.55:/data/k8s /nfsdata
Note:
$ yum install nfs-utils rpcbind -y
$ mkdir /nfsdata
$ mount -t nfs 10.0.1.3:/data/k8s /nfsdata
[root@k8s-slave1 nfsdata]
[root@node1 k8s]
1.txt
PV and PVC demo
$ cat pv-nfs.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: nfs-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
nfs:
path: /data/k8s/nginx
server: 172.21.51.55
$ kubectl create -f pv-nfs.yaml
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS
nfs-pv 1Gi RWO Retain Available
Note:
[root@k8s-master week3]
[root@k8s-master week3]
persistentvolume/nfs-pv created
[root@k8s-master week3]
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv 1Gi RWX Retain Available 21s
During its lifecycle a PV can be in one of four phases:
- Available: usable, not yet bound to any PVC
- Bound: already bound to a PVC
- Released: the PVC was deleted, but the resource has not been reclaimed by the cluster
- Failed: automatic reclamation of the PV failed
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-nfs
namespace: default
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Gi
$ kubectl create -f pvc.yaml
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound nfs-pv 1Gi RWO 3s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM
nfs-pv 1Gi RWO Retain Bound default/pvc-nfs
$ ls /nfsdata
Note:
[root@k8s-master week3]
[root@k8s-master week3]
persistentvolumeclaim/pvc-nfs created
[root@k8s-master week3]
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv 1Gi RWX Retain Bound default/pvc-nfs 5m50s
[root@k8s-master week3]
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound nfs-pv 1Gi RWX 50s
Create a Pod that mounts the PVC
$ cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-pvc
spec:
replicas: 1
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:alpine
imagePullPolicy: IfNotPresent
ports:
- containerPort: 80
name: web
volumeMounts:
- name: www
mountPath: /usr/share/nginx/html
volumes:
- name: www
persistentVolumeClaim:
claimName: pvc-nfs
$ kubectl create -f deployment.yaml
Note:
[root@k8s-master week3]
[root@k8s-master week3]
deployment.apps/nfs-pvc created
[root@k8s-master week3]
NAME READY STATUS RESTARTS AGE
nfs-pvc-7bf65c788-954z6 1/1 Running 0 4s
[root@k8s-master week3]
/
10.0.1.3:/data/k8s/nginx
/
/usr/share/nginx/html
/usr/share/nginx/html
[root@node1 k8s]
-rw-r--r-- 1 root root 0 Jul 23 11:10 nginx/index.html
[root@k8s-master week3]
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nfs-pvc-7bf65c788-954z6 1/1 Running 0 12m 10.244.0.26 k8s-master <none> <none>
Note: with hostPath the Pod was pinned to a particular machine; with an NFS mount the Pod is no longer tied to any node, so k8s keeps its ability to scale horizontally and to let Pods drift across the cluster.
Dynamic provisioning with StorageClass
Creating PVs and PVCs is manual, with one PV per PVC, which is tedious. StorageClass + provisioner lets a PVC automatically create and bind its PV.
Workflow:
1. The cluster administrator creates a StorageClass in advance.
2. A user creates a PVC that references the StorageClass.
3. The claim tells the system that it needs a persistent volume.
4. The system reads the StorageClass's information.
5. Based on that information, the system automatically creates the PV the PVC needs in the background.
6. The user creates a Pod that uses the PVC.
7. The application in the Pod persists its data through the PVC.
8. The PVC in turn persists the data on the PV.
Note:
A StorageClass is an abstract definition of a class of storage resources. In contrast to static provisioning (the administrator creating PVs by hand), a StorageClass enables dynamic volume provisioning. Like PVs, StorageClass resources are cluster-scoped, not namespaced.
StorageClasses free the administrator from creating PVs by hand over and over: the administrator only creates one StorageClass per class of storage for users' PVCs to reference, and k8s automatically creates the persistent volume and binds it to the PVC.
Before users create PVCs, the administrator must create the StorageClass; only then can new PVs be provisioned dynamically.
The provisioner carries the backend credentials (for example the ceph secrets). The chain is: pod - pvc - storageclass + ceph_provisioner - ceph, or pod - pvc - storageclass + nfs_provisioner - nfs.
Deployment: https://github.com/kubernetes-retired/external-storage
provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
namespace: nfs-provisioner
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner
containers:
- name: nfs-client-provisioner
image: quay.io/external_storage/nfs-client-provisioner:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: luffy.com/nfs
- name: NFS_SERVER
value: 172.21.51.55
- name: NFS_PATH
value: /data/k8s
volumes:
- name: nfs-client-root
nfs:
server: 172.21.51.55
path: /data/k8s
rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
name: nfs-client-provisioner
namespace: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
namespace: nfs-provisioner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
namespace: nfs-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: nfs-provisioner
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: nfs-provisioner
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: nfs-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: nfs-provisioner
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: nfs
provisioner: luffy.com/nfs
pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: test-claim
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 1Mi
storageClassName: nfs
Note:
[root@k8s-master nfs]
[root@k8s-master nfs]
[root@k8s-master nfs]
[root@k8s-master nfs]
namespace/nfs-provisioner created
[root@k8s-master nfs]
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
storageclass.storage.k8s.io/nfs created
[root@k8s-master nfs]
[root@k8s-master nfs]
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs luffy.com/nfs Delete Immediate false 63m
[root@k8s-master nfs]
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound nfs-pv 1Gi RWX 109m
[root@k8s-master nfs]
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv 1Gi RWX Retain Bound default/pvc-nfs 114m
[root@k8s-master nfs]
persistentvolumeclaim/test-claim created
[root@k8s-master nfs]
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
nfs-pv 1Gi RWX Retain Bound default/pvc-nfs 117m
pvc-8900a743-7e0a-42cd-a1db-aa3b5065a2c0 1Mi RWX Delete Bound default/test-claim nfs 110s
[root@k8s-master nfs]
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-nfs Bound nfs-pv 1Gi RWX 114m
test-claim Bound pvc-8900a743-7e0a-42cd-a1db-aa3b5065a2c0 1Mi RWX nfs 3m34s
Attaching Ceph storage in practice
For installing and using ceph see http://docs.ceph.org.cn/start/intro/
Single-node quick install: https://blog.csdn.net/h106140873/article/details/90201379
ceph osd pool create cephfs_data 128
ceph osd pool create cephfs_meta 128
ceph osd lspools
ceph fs new cephfs cephfs_meta cephfs_data
ceph fs ls
client.admin
key: AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
$ mount -t ceph 172.21.51.55:6789:/ /mnt/cephfs -o name=admin,secret=AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
Note:
[root@ceph ~]
[root@ceph ~]
pool 'cephfs_meta' created
[root@ceph ~]
[root@ceph ~]
[root@ceph ~]
name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@ceph ~]
AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==
[root@k8s-master pki]
[root@k8s-master pki]
Dynamic provisioning with StorageClass
As before, creating PVs and PVCs by hand, one PV per PVC, is tedious, so we use StorageClass + provisioner to have PVCs create and bind PVs automatically.
For cephfs, a StorageClass of the following kind can be created:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
monitors: 172.21.51.55:6789
adminId: admin
adminSecretName: ceph-admin-secret
adminSecretNamespace: "kube-system"
claimRoot: /volumes/kubernetes
NFS, ceph-rbd and cephfs all have corresponding provisioners.
Deploy the cephfs-provisioner:
$ cat external-storage-cephfs-provisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: cephfs-provisioner
namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cephfs-provisioner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: cephfs-provisioner
subjects:
- kind: ServiceAccount
name: cephfs-provisioner
namespace: kube-system
roleRef:
kind: ClusterRole
name: cephfs-provisioner
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cephfs-provisioner
namespace: kube-system
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["create", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cephfs-provisioner
namespace: kube-system
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: cephfs-provisioner
subjects:
- kind: ServiceAccount
name: cephfs-provisioner
namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: cephfs-provisioner
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: cephfs-provisioner
strategy:
type: Recreate
template:
metadata:
labels:
app: cephfs-provisioner
spec:
containers:
- name: cephfs-provisioner
image: "quay.io/external_storage/cephfs-provisioner:latest"
env:
- name: PROVISIONER_NAME
value: ceph.com/cephfs
imagePullPolicy: IfNotPresent
command:
- "/usr/local/bin/cephfs-provisioner"
args:
- "-id=cephfs-provisioner-1"
- "-disable-ceph-namespace-isolation=true"
serviceAccount: cephfs-provisioner
Note: mind the order; this one is created last.
[root@k8s-master week3]
[root@k8s-master ceph]
[root@k8s-master ceph]
secret/ceph-admin-secret created
storageclass.storage.k8s.io/dynamic-cephfs created
serviceaccount/cephfs-provisioner created
clusterrole.rbac.authorization.k8s.io/cephfs-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
role.rbac.authorization.k8s.io/cephfs-provisioner created
rolebinding.rbac.authorization.k8s.io/cephfs-provisioner created
deployment.apps/cephfs-provisioner created
[root@k8s-master ceph]
NAME READY STATUS RESTARTS AGE
cephfs-provisioner-7858cc7b6-hgg6k 1/1 Running 0 7h15m
coredns-6d56c8448f-gjgvc 1/1 Running 7 6d11h
coredns-6d56c8448f-sgdvm 1/1 Running 7 6d11h
[root@k8s-master ceph]
persistentvolumeclaim/cephfs-claim created
[root@k8s-master ceph]
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cephfs-claim Bound pvc-fe29d44e-2acd-4c11-928c-e5329041e0a3 2Gi RWO dynamic-cephfs 81s
[root@k8s-master ceph]
On the ceph monitor machine, look up the admin account's key:
$ ceph auth list
$ ceph auth get-key client.admin
AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
Note:
[root@ceph ~]
[root@ceph ~]
AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==
Create the secret
$ echo -n AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==|base64
QVFCUFRzdGdjMDc4TkJBQTc4RDEvS0FCZ2xJWkhLaDcrRzJYOHc9PQ==
$ cat ceph-admin-secret.yaml
apiVersion: v1
data:
key: QVFBTWFQcGdmZ0R2QkJBQUpoVm0zRW5udHFMNW9iZnZRY1lTNEE9PQ==
kind: Secret
metadata:
name: ceph-admin-secret
namespace: kube-system
type: Opaque
Note:
[root@ceph ~]
QVFBTWFQcGdmZ0R2QkJBQUpoVm0zRW5udHFMNW9iZnZRY1lTNEE9PQ==
[root@k8s-master ceph]
apiVersion: v1
data:
key: QVFBTWFQcGdmZ0R2QkJBQUpoVm0zRW5udHFMNW9iZnZRY1lTNEE9PQ==
kind: Secret
metadata:
name: ceph-admin-secret
namespace: kube-system
type: Opaque
[root@k8s-master ceph]
secret/ceph-admin-secret created
Create the StorageClass
$ cat cephfs-storage-class.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
monitors: 172.21.51.55:6789
adminId: admin
adminSecretName: ceph-admin-secret
adminSecretNamespace: "kube-system"
claimRoot: /volumes/kubernetes
Note:
[root@k8s-master ceph]
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: dynamic-cephfs
provisioner: ceph.com/cephfs
parameters:
monitors: 10.0.1.3:6789
adminId: admin
adminSecretName: ceph-admin-secret
adminSecretNamespace: "kube-system"
claimRoot: /volumes/kubernetes
[root@k8s-master ceph]
storageclass.storage.k8s.io/dynamic-cephfs created
[root@k8s-master ceph]
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
dynamic-cephfs ceph.com/cephfs Delete Immediate false 11m
Verifying dynamic PVCs and how they work
Usage: create a PVC that names the StorageClass and a size, and storage is provisioned dynamically.
Create a PVC and check that a PV is generated automatically:
$ cat cephfs-pvc-test.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: cephfs-claim
spec:
accessModes:
- ReadWriteOnce
storageClassName: dynamic-cephfs
resources:
requests:
storage: 2Gi
$ kubectl create -f cephfs-pvc-test.yaml
$ kubectl get pv
pvc-2abe427e-7568-442d-939f-2c273695c3db 2Gi RWO Delete Bound default/cephfs-claim dynamic-cephfs 1s
Note:
[root@k8s-master ceph]
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: cephfs-claim
spec:
accessModes:
- ReadWriteOnce
storageClassName: dynamic-cephfs
resources:
requests:
storage: 2Gi
[root@k8s-master ceph]
persistentvolumeclaim/cephfs-claim created
[root@k8s-master ceph]
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-fe29d44e-2acd-4c11-928c-e5329041e0a3 2Gi RWO Delete Bound default/cephfs-claim dynamic-cephfs 8m42s
Create a Pod that mounts the cephfs volume via the PVC
$ cat test-pvc-cephfs.yaml
apiVersion: v1
kind: Pod
metadata:
name: nginx-pod
labels:
name: nginx-pod
spec:
containers:
- name: nginx-pod
image: nginx:alpine
ports:
- name: web
containerPort: 80
volumeMounts:
- name: cephfs
mountPath: /usr/share/nginx/html
volumes:
- name: cephfs
persistentVolumeClaim:
claimName: cephfs-claim
$ kubectl create -f test-pvc-cephfs.yaml
Note:
[root@k8s-master ceph]
[root@k8s-master ceph]
pod/nginx-pod created
[root@k8s-master ceph]
NAME READY STATUS RESTARTS AGE
nfs-pvc-7bf65c788-954z6 1/1 Running 2 20h
nginx-pod 1/1 Running 0 18s
What we call container persistence should really be understood as persistence of a volume on the host: Pods are built to be destroyed and recreated, so the only way to persist Pod data is to persist a host volume and mount it into the Pod.
For the data to survive rescheduling, the host volume usually lives in distributed storage, with the host mounting the remote backend (NFS, Ceph, OSS); then even if the Pod drifts to another node its data is unaffected.
A Pod's volume mounts on the host usually follow this path format:
/var/lib/kubelet/pods/<Pod的ID>/volumes/kubernetes.io~<Volume类型>/<Volume名字>
Check the nginx-pod's mounted volume:
$ df -TH
/var/lib/kubelet/pods/61ba43c5-d2e9-4274-ac8c-008854e4fa8e/volumes/kubernetes.io~cephfs/pvc-2abe427e-7568-442d-939f-2c273695c3db/
$ findmnt /var/lib/kubelet/pods/61ba43c5-d2e9-4274-ac8c-008854e4fa8e/volumes/kubernetes.io~cephfs/pvc-2abe427e-7568-442d-939f-2c273695c3db/
172.21.51.55:6789:/volumes/kubernetes/kubernetes/kubernetes-dynamic-pvc-ffe3d84d-c433-11ea-b347-6acc3cf3c15f
Note:
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
Managing complex application deployments with Helm 3
Getting to know Helm
- Why Helm?
- What is Helm? The package manager for kubernetes: "think of Helm as apt-get/yum for a Linux system". Beyond packaging, Helm also provides powerful install, delete, upgrade and rollback of applications on kubernetes.
- Helm versions:
  - helm2: client/server architecture; helm talks to k8s through the Tiller server
  - helm3:
    - For security and ease of use, the Tiller server was removed; helm3 authenticates against the APIServer directly using the kubeconfig file.
    - The two-way merge was upgraded to a three-way merge patch strategy (old config, live state, new config). Example: `helm install very_important_app ./very_important_app` sets the app's replica count to 3. Someone then accidentally runs `kubectl edit` or `kubectl scale --replicas=0 deployment/very_important_app`. A teammate notices very_important_app is inexplicably down and runs `helm rollback very_important_app`. In Helm 2 this compares the old config with the new config and generates an update patch; since the accident only changed the live state (the old config was never touched), old and new config are identical (both 3 replicas), so Helm 2 performs no rollback and the replica count stays at 0. Helm 3's three-way merge also takes the live state into account, so the rollback restores the 3 replicas.
    - The local helm serve repository was removed.
    - A name must be given when installing an application (or use --generate-name to generate one).
- Key Helm concepts:
  - chart: the collection of information about an application, including object configuration templates, parameter definitions, dependencies, and documentation
  - Repository: a chart repository, which stores charts and serves a manifest of its chart packages for querying; Helm can manage multiple repositories at once
  - release: when a chart is installed into a kubernetes cluster it becomes a release, a running instance of the chart, i.e. a running application
helm is a package manager where the package is the chart; helm can:
- create a chart from scratch
- interact with repositories: pull, save, update charts
- install and uninstall releases in a kubernetes cluster
- upgrade, roll back, and test releases
Installation and a quick start
Download the latest stable release: https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
More versions: https://github.com/helm/helm/releases
$ wget https://get.helm.sh/helm-v3.2.4-linux-amd64.tar.gz
$ tar -zxf helm-v3.2.4-linux-amd64.tar.gz
$ cp linux-amd64/helm /usr/local/bin/
$ helm version
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
$ helm env
$ helm repo add stable http://mirror.azure.cn/kubernetes/charts/
$ helm repo update
Note:
[root@k8s-master 2021]
[root@k8s-master 2021]
[root@k8s-master 2021]
[root@k8s-master 2021]
version.BuildInfo{Version:"v3.2.4", GitCommit:"0ad800ef43d3b826f31a5ad8dfbb4fe05d143688", GitTreeState:"clean", GoVersion:"go1.13.12"}
[root@k8s-master 2021]
HELM_BIN="helm"
HELM_DEBUG="false"
HELM_KUBEAPISERVER=""
HELM_KUBECONTEXT=""
HELM_KUBETOKEN=""
HELM_NAMESPACE="default"
HELM_PLUGINS="/root/.local/share/helm/plugins"
HELM_REGISTRY_CONFIG="/root/.config/helm/registry.json"
HELM_REPOSITORY_CACHE="/root/.cache/helm/repository"
HELM_REPOSITORY_CONFIG="/root/.config/helm/repositories.yaml"
[root@k8s-master 2021]
[root@k8s-master 2021]
[root@k8s-master 2021]
"stable" has been added to your repositories
[root@k8s-master 2021]
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
Quick-start exercises:
Example 1: install a mysql application with helm
$ helm search repo mysql
$ helm install mysql --set mysqlRootPassword=root,mysqlUser=luffy,mysqlPassword=luffy,mysqlDatabase=my-database --set persistence.storageClass=dynamic-cephfs stable/mysql
$ helm ls
$ kubectl get all
$ helm pull stable/mysql
$ tree mysql
Note: the session below uses wordpress rather than mysql as the demonstration:
$ helm search repo wordpress
[root@k8s-master 2021]
namespace/wordpress created
[root@k8s-master 2021]
-n: the target namespace (default if omitted)
install: install a release
wordpress: the release name (user-chosen)
stable/wordpress: start a chart named wordpress from the stable repo
--set: override chart parameters
--set ingress.hostname=wordpress.luffy.com: specifies an ingress hostname
[root@k8s-master 2021]
[root@k8s-master 2021]
Note: the various resources it creates:
[root@k8s-master 2021]
NAME READY STATUS RESTARTS AGE
wordpress-565c745795-t5mpf 1/1 Running 0 97m
wordpress-mariadb-0 1/1 Running 0 97m
[root@k8s-master 2021]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wordpress ClusterIP 10.99.38.142 <none> 80/TCP,443/TCP 97m
wordpress-mariadb ClusterIP 10.102.129.15 <none> 3306/TCP 97m
[root@k8s-master 2021]
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
wordpress <none> wordpress.luffy.com 80 98m
[root@k8s-master 2021]
NAME: wordpress
LAST DEPLOYED: Sat Jul 24 09:32:50 2021
NAMESPACE: wordpress
STATUS: deployed
REVISION: 1
NOTES:
** Please be patient while the chart is being deployed **
Your WordPress site can be accessed through the following DNS name from within your cluster:
wordpress.wordpress.svc.cluster.local (port 80)
To access your WordPress site from outside the cluster follow the steps below:
1. Get the WordPress URL and associate WordPress hostname to your cluster external IP:
export CLUSTER_IP=$(minikube ip)
echo "WordPress URL: http://wordpress.luffy.com/"
echo "$CLUSTER_IP wordpress.luffy.com" | sudo tee -a /etc/hosts
2. Open a browser and access WordPress using the obtained URL.
3. Login with the following credentials below to see your blog:
echo Username: user
echo Password: $(kubectl get secret --namespace wordpress wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)
Run the command from the output above to obtain the password.
[root@k8s-master 2021]
Then open http://wordpress.luffy.com/ in a browser and log in to wordpress:
username: user
password: the value obtained above
[root@k8s-master 2021]
[root@k8s-master 2021]
[root@k8s-master 2021]
total 104
-rw-r--r-- 1 root root 387 Jul 16 23:51 Chart.lock
drwxr-xr-x 5 root root 52 Jul 24 08:31 charts
-rw-r--r-- 1 root root 881 Jul 16 23:51 Chart.yaml
drwxr-xr-x 2 root root 159 Jul 24 08:31 ci
-rw-r--r-- 1 root root 48803 Jul 16 23:51 README.md
drwxr-xr-x 3 root root 4096 Jul 24 08:31 templates
Example 2: create a new nginx chart and install it
$ helm create nginx
$ helm install nginx ./nginx
$ helm -n luffy install nginx ./nginx --set replicaCount=2 --set image.tag=alpine
$ helm ls
$ helm -n luffy ls
$ kubectl -n luffy get all
Note:
[root@k8s-master helm]
[root@k8s-master helm]
total 8
drwxr-xr-x 2 root root 6 Jul 24 11:16 charts
-rw-r--r-- 1 root root 1096 Jul 24 11:16 Chart.yaml
drwxr-xr-x 3 root root 162 Jul 24 11:16 templates
-rw-r--r-- 1 root root 1798 Jul 24 11:16 values.yaml
[root@k8s-master helm]
NAME: nginx
LAST DEPLOYED: Sat Jul 24 11:18:30 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=nginx" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace default port-forward $POD_NAME 8080:80
[root@k8s-master helm]
NAME: nginx
LAST DEPLOYED: Sat Jul 24 11:34:16 2021
NAMESPACE: luffy
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
export POD_NAME=$(kubectl get pods --namespace luffy -l "app.kubernetes.io/name=nginx,app.kubernetes.io/instance=nginx" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:8080 to use your application"
kubectl --namespace luffy port-forward $POD_NAME 8080:80
[root@k8s-master helm]
NAME READY STATUS RESTARTS AGE
nginx-555d85b485-86pp7 1/1 Running 0 47s
nginx-555d85b485-g42xw 1/1 Running 0 47s
[root@k8s-master helm]
f:image: {}
f:imagePullPolicy: {}
- image: nginx:alpine
imagePullPolicy: IfNotPresent
image: nginx:alpine
imageID: docker-pullable://nginx@sha256:686aac2769fd6e7bab67663fd38750c135b72d993d0bb0a942ab02ef647fc9c3
Chart template syntax and development
Analysis of the nginx chart implementation
Layout:
$ tree nginx/
nginx/
├── charts
├── Chart.yaml
├── templates
│ ├── deployment.yaml
│ ├── _helpers.tpl
│ ├── hpa.yaml
│ ├── ingress.yaml
│ ├── NOTES.txt
│ ├── serviceaccount.yaml
│ ├── service.yaml
│ └── tests
│ └── test-connection.yaml
└── values.yaml
Clearly, the resource manifests all live in templates/ and their data comes from values.yaml; installing a chart merges the templates with the values into manifests Kubernetes can understand, then applies them to the cluster.
$ helm install debug-nginx ./ --dry-run --set replicaCount=2 --debug
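helm template performs a similar local rendering without creating any release, which is convenient for inspecting the merged manifests; a sketch:
$ helm template debug-nginx ./nginx --set replicaCount=2 > rendered.yaml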
Analyzing how the template files are implemented:
- Referencing a named template and passing scope: {{ include "nginx.fullname" . }}
  include pulls the named template from _helpers.tpl and passes it the top-level scope (.).
- Built-in objects: .Values, .Release.Name, .Chart
  Release: describes the release itself and contains several fields:
  - Release.Name: the release name
  - Release.Namespace: the namespace the release is installed into
  - Release.IsUpgrade: true when the current operation is an upgrade or rollback
  - Release.IsInstall: true when the current operation is an install
  - Release.Revision: the release revision number; 1 at install time, incremented by every upgrade or rollback
  - Release.Service: the service rendering the current template; in Helm this is always Helm
  Values: the values passed into the templates from values.yaml and any user-supplied values files
  Chart: the contents of Chart.yaml; any field in that file is accessible, e.g. {{ .Chart.Name }}-{{ .Chart.Version }} renders as mychart-0.1.0
- Named template definition:
{{- define "nginx.fullname" -}}                                        # define the template
{{- if .Values.fullnameOverride }}                                     # if .Values.fullnameOverride is set
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}            # nginx.fullname = that value, truncated to 63 chars with a trailing "-" removed
{{- else }}                                                            # otherwise
{{- $name := default .Chart.Name .Values.nameOverride }}               # define $name as .Values.nameOverride, defaulting to .Chart.Name
{{- if contains $name .Release.Name }}                                 # if the release name already contains $name
{{- .Release.Name | trunc 63 | trimSuffix "-" }}                       # nginx.fullname = .Release.Name, truncated to 63 chars, trailing "-" removed
{{- else }}                                                            # otherwise
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}  # nginx.fullname = "<release name>-<$name>", same truncation
{{- end }}
{{- end }}
{{- end }}
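For reference, this named template is what the other manifests consume; e.g. templates/deployment.yaml in a freshly generated chart contains:
metadata:
  name: {{ include "nginx.fullname" . }}
  labels:
    {{- include "nginx.labels" . | nindent 4 }}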
- Whitespace control: {{- trims spaces and the newline to its left; -}} trims those to its right.
- Example:
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ .Release.Name }}-configmap
data:
myvalue: "Hello World"
drink: {{ .Values.favorite.drink | default "tea" | quote }}
food: {{ .Values.favorite.food | upper | quote }}
{{ if eq .Values.favorite.drink "coffee" }}
mug: true
{{ end }}
Renders to:
apiVersion: v1
kind: ConfigMap
metadata:
name: mychart-1575971172-configmap
data:
myvalue: "Hello World"
drink: "coffee"
food: "PIZZA"
mug: true
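For completeness, values like the following would drive that rendering (a sketch; the favorite block is illustrative and not part of the nginx chart):
favorite:
  drink: coffee
  food: pizza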
- Pipes and functions:
  - trunc truncates a string, with 63 passed as its argument; trimSuffix strips a trailing "-":
    {{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
  - nindent indents every line of its input by the given number of spaces:
    selector:
      matchLabels:
        {{- include "nginx.selectorLabels" . | nindent 6 }}
  - lower lowercases the content; quote wraps it in double quotes:
    value: {{ include "mytpl" . | lower | quote }}
  - Conditionals; every if pairs with one end:
    {{- if .Values.fullnameOverride }}
    ...
    {{- else }}
    ...
    {{- end }}
    Typically used with switches defined in values.yaml to control what gets rendered:
    {{- if not .Values.autoscaling.enabled }}
    replicas: {{ .Values.replicaCount }}
    {{- end }}
  - Variable definition; templates can then reference the variable by name:
    {{- $name := default .Chart.Name .Values.nameOverride }}
  - with scopes the template into a block of values (see the rendering sketch after this list):
    {{- with .Values.nodeSelector }}
    nodeSelector:
      {{- toYaml . | nindent 8 }}
    {{- end }}
    toYaml handles escaping and special characters in values, e.g. "kubernetes.io/role"=master or name="value1,value2"
  - default supplies a fallback value:
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
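A short sketch of how the with/toYaml block above renders, assuming values.yaml carries a nodeSelector entry:
# values.yaml
nodeSelector:
  kubernetes.io/role: master
# rendered into the Deployment's pod spec
      nodeSelector:
        kubernetes.io/role: master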
Helm template
hpa.yaml
{{- if .Values.autoscaling.enabled }}
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "nginx.fullname" . }}
labels:
{{- include "nginx.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "nginx.fullname" . }}
minReplicas: {{ .Values.autoscaling.minReplicas }}
maxReplicas: {{ .Values.autoscaling.maxReplicas }}
metrics:
{{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}
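The whole file renders only when autoscaling is switched on; a sketch of values that would activate it, matching the keys referenced above:
autoscaling:
  enabled: true
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80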
Assigning values when creating a release:
$ helm install nginx-2 ./nginx --set replicaCount=2 --set resources.limits.cpu=200m --set resources.limits.memory=256Mi
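A release installed this way can later be adjusted or reverted with the same mechanism; a minimal sketch:
$ helm upgrade nginx-2 ./nginx --set replicaCount=3
$ helm history nginx-2
$ helm rollback nginx-2 1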
More syntax references:
https://helm.sh/docs/topics/charts/
Hands-on: deploying Harbor as an image and chart registry with Helm
Harbor deployment
Architecture: https://github.com/goharbor/harbor/wiki/Architecture-Overview-of-Harbor
- Core, the core component
  - API Server: receives and handles user requests
  - Config Manager: all system configuration, e.g. authentication, email, and certificate settings
  - Project Manager: project management
  - Quota Manager: quota management
  - Chart Controller: chart management
  - Replication Controller: image replication controller; can sync images with different types of registries
    - Distribution (docker registry)
    - Docker Hub
    - …
  - Scan Manager: scan management; integrates third-party components for image vulnerability scanning
  - Registry Driver: registry driver, currently docker registry
- Job Service: runs asynchronous jobs, such as syncing image information
- Log Collector: unified log collector, gathers logs from each module
- GC Controller
- Chart Museum: chart repository service (third-party)
- Docker Registry: image registry service
- kv-storage: redis cache, used by Job Service to store job metadata
- local/remote storage: storage backend for the images themselves
- SQL Database: PostgreSQL, stores users, projects, and other metadata
Harbor is commonly run as an enterprise-grade image registry, and its actual feature set goes well beyond plain image storage.
With this many components, Helm is the natural way to deploy it:
$ helm repo add harbor https://helm.goharbor.io
$ helm search repo harbor
$ helm pull harbor/harbor
Note:
[root@k8s-master helm]
"harbor" has been added to your repositories
[root@k8s-master helm]
NAME CHART VERSION APP VERSION DESCRIPTION
harbor/harbor 1.7.0 2.3.0 An open source trusted cloud native registry th...
stable/harbor 10.2.2 2.3.0 Harbor is an an open source trusted cloud nativ...
[root@k8s-master helm]
[root@k8s-master helm]
total 48
-rw-r--r-- 1 root root 48691 Jul 24 14:16 harbor-1.7.0.tgz
[root@k8s-master helm]
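The archive then needs unpacking so values.yaml can be edited (the command itself was elided in the capture; this is the likely invocation):
$ tar zxf harbor-1.7.0.tgz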
Create the PV and PVC
$ kubectl create namespace harbor
$ cat harbor-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: harbor-pv
labels:
pv: harbor-pv
spec:
capacity:
storage: 20Gi
accessModes:
- ReadWriteMany
cephfs:
monitors:
- 10.0.1.3:6789
user: admin
secretRef:
name: ceph-admin-secret
namespace: kube-system
readOnly: false
persistentVolumeReclaimPolicy: Retain
$ cat harbor-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: harbor-data-pvc
namespace: harbor
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 20Gi
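Both manifests are then applied; the exact commands were elided in the capture below, but the likely invocation is:
$ kubectl apply -f harbor-pv.yaml
$ kubectl apply -f harbor-pvc.yaml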
Note:
[root@k8s-master helm]
namespace/harbor created
[root@k8s-master helm]
[root@k8s-master helm]
[root@k8s-master helm]
persistentvolumeclaim/harbor-data-pvc created
[root@k8s-master helm]
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
harbor-data-pvc Bound harbor-pv 20Gi RWX 27s
[root@k8s-master helm]
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
harbor-pv 20Gi RWX Retain Bound harbor/harbor-data-pvc 4m31s
pvc-fe29d44e-2acd-4c11-928c-e5329041e0a3 2Gi RWO Delete Bound default/cephfs-claim dynamic-cephfs 7h52m
Edits to values.yaml (the numbers below are line positions in the file):
vim values.yaml
38-39
core: harbor.luffy.com
notary: harbor.luffy.com
120
externalURL: https://harbor.luffy.com
198-242
registry:
existingClaim: "harbor-data-pvc"
storageClass: ""
subPath: "registry"
accessMode: ReadWriteOnce
size: 5Gi
chartmuseum:
existingClaim: "harbor-data-pvc"
storageClass: ""
subPath: "chartmuseum"
accessMode: ReadWriteOnce
size: 5Gi
jobservice:
existingClaim: "harbor-data-pvc"
storageClass: ""
subPath: "jobservice"
accessMode: ReadWriteOnce
size: 1Gi
database:
existingClaim: "harbor-data-pvc"
storageClass: ""
subPath: "database"
accessMode: ReadWriteOnce
size: 1Gi
redis:
existingClaim: "harbor-data-pvc"
storageClass: ""
subPath: "redis"
accessMode: ReadWriteOnce
size: 1Gi
trivy:
existingClaim: "harbor-data-pvc"
storageClass: ""
subPath: "trivy"
accessMode: ReadWriteOnce
size: 5Gi
580
trivy:
enabled: false
643
notary:
enabled: false
706
password: "Harbor12345"
Harbor configuration changes:
- enable Ingress access (see the sketch after this list)
- externalURL: the web entry point, same domain as the Ingress host
- persistence: use the PVC backed by cephfs
- harborAdminPassword: "Harbor12345"; the default admin account is admin/Harbor12345
- enable chartmuseum
- clair and trivy vulnerability-scanning components: left disabled for now
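The Ingress switch itself sits near the top of values.yaml; a sketch of the relevant keys (exact line positions vary by chart version):
expose:
  type: ingress
  tls:
    enabled: true
  ingress:
    hosts:
      core: harbor.luffy.com
      notary: harbor.luffy.com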
Install with helm (uninstall to tear it down):
$ helm install harbor ./harbor -n harbor
$ helm -n harbor uninstall harbor
Note:
[root@k8s-master harbor]
/root/2021/week3/helm/harbor
Edit the values as described above.
[root@k8s-master harbor]
[root@k8s-master harbor]
NAME: harbor
LAST DEPLOYED: Sat Jul 24 14:55:47 2021
NAMESPACE: harbor
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Please wait for several minutes for Harbor deployment to complete.
Then you should be able to visit the Harbor portal at https://harbor.luffy.com
For more details, please visit https://github.com/goharbor/harbor
[root@k8s-master harbor]
NAME READY STATUS RESTARTS AGE
harbor-chartmuseum-9f58c9fdb-bhhfx 0/1 Running 0 3s
harbor-core-64f9f7465-d9njv 0/1 ContainerCreating 0 3s
harbor-database-0 0/1 Init:0/2 0 3s
harbor-jobservice-7d96c7f677-c78xz 0/1 Running 0 3s
harbor-portal-5bfdfcf9f6-n8gl9 0/1 Running 0 3s
harbor-redis-0 0/1 ContainerCreating 0 3s
harbor-registry-7cf4c7b85b-hxmwt 0/2 ContainerCreating 0 3s
[root@k8s-master harbor]
NAME READY STATUS RESTARTS AGE
harbor-chartmuseum-856556776b-2jr6n 1/1 Running 0 55s
harbor-core-5b89d7f4b5-clvm5 1/1 Running 0 55s
harbor-database-0 1/1 Running 0 55s
harbor-jobservice-5f78c76fb9-pdjbc 1/1 Running 0 55s
harbor-portal-5bfdfcf9f6-2jshm 1/1 Running 0 55s
harbor-redis-0 1/1 Running 0 55s
harbor-registry-8449cd8f96-xztcn 2/2 Running 0 55s
Data-permission problems:
- the database data directory cannot be initialized for lack of permissions
- permissions on the redis persistence directory prevent logging in
- permissions on the registry component's image storage directory make image pushes fail
- permissions on the chartmuseum storage directory make chart pushes fail
Fix (mount the cephfs root and chown the component subdirectories):
$ mount -t ceph 172.21.51.55:6789:/ /mnt/cephfs -o name=admin,secret=AQBPTstgc078NBAA78D1/KABglIZHKh7+G2X8w==
$ chown -R 999:999 database
$ chown -R 999:999 redis
$ chown -R 10000:10000 chartmuseum
$ chown -R 10000:10000 registry
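After fixing ownership, the affected pods can be deleted so their controllers recreate them against the corrected directories; a sketch (pod names will differ):
$ kubectl -n harbor delete pod harbor-database-0 harbor-redis-0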
Note:
mount -t ceph 10.0.1.3:6789:/ /mnt/cephfs -o name=admin,secret=AQAMaPpgfgDvBBAAJhVm3EnntqL5obfvQcYS4A==
[root@k8s-master harbor]
redis [ ~ ]$ echo $UID
999
The UID each component runs as can be checked from inside its container, as shown above.
[root@k8s-slave1 cephfs]
/mnt/cephfs
[root@k8s-slave1 cephfs]
[root@k8s-slave1 cephfs]
[root@k8s-slave1 cephfs]
[root@k8s-slave1 cephfs]
Pushing images to the Harbor registry
Configure hosts resolution and docker insecure registries:
$ cat /etc/hosts
...
172.21.51.143 k8s-master harbor.luffy.com
...
$ cat /etc/docker/daemon.json
{
"insecure-registries": [
"172.21.51.143:5000",
"harbor.luffy.com"
],
"registry-mirrors" : [
"https://8xpk5wnt.mirror.aliyuncs.com"
]
}
$ systemctl restart docker
$ docker login harbor.luffy.com
$ docker tag nginx:alpine harbor.luffy.com/library/nginx:alpine
$ docker push harbor.luffy.com/library/nginx:alpine
Note:
[root@k8s-slave1 cephfs]
10.0.1.5 harbor.luffy.com
[root@k8s-slave1 cephfs]
{
"insecure-registries": [
"10.0.1.5:5000",
"harbor.luffy.com"
],
"registry-mirrors" : [
"https://8xpk5wnt.mirror.aliyuncs.com"
]
}
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
Username: admin
Password: Harbor12345
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/
Login Succeeded
[root@k8s-slave1 ~]
[root@k8s-slave1 ~]
The push refers to repository [harbor.luffy.com/luffy/nginx]
7ebe47ef59e5: Pushed
a40efec40891: Pushed
d3a37e5dc9b6: Pushed
2524a71e1218: Pushed
b74fa78b1528: Pushed
72e830a4dff5: Pushed
alpine: digest: sha256:c35699d53f03ff9024ce2c8f6730567f183a15cc27b24453c5d90f0e7542daea size: 1568
Pushing charts to the Harbor registry
helm 3 does not ship with the helm push plugin; it has to be installed manually. Plugin: https://github.com/chartmuseum/helm-push
Install the plugin:
$ helm plugin install https://github.com/chartmuseum/helm-push
Offline installation:
$ mkdir helm-push
$ wget https://github.com/chartmuseum/helm-push/releases/download/v0.8.1/helm-push_0.8.1_linux_amd64.tar.gz
$ tar zxf helm-push_0.8.1_linux_amd64.tar.gz -C helm-push
$ helm plugin install ./helm-push
$ helm plugin uninstall push
Note:
[root@k8s-master helm]
[root@k8s-master helm-push]
[root@k8s-master helm-push]
[root@k8s-master helm-push]
sh: scripts/install_plugin.sh: No such file or directory
Error: plugin install hook for "push" exited with error
[root@k8s-master helm-push]
NAME VERSION DESCRIPTION
push 0.8.1 Push chart package to ChartMuseum
Add the repo
$ helm repo add myharbor https://harbor.luffy.com/chartrepo/luffy
$ kubectl get secret harbor-ingress -n harbor -o jsonpath="{.data.ca\.crt}" | base64 -d >harbor.ca.crt
$ cp harbor.ca.crt /etc/pki/ca-trust/source/anchors
$ update-ca-trust enable; update-ca-trust extract
$ helm repo add luffy https://harbor.luffy.com/chartrepo/luffy --ca-file=harbor.ca.crt --username admin --password Harbor12345
$ helm repo ls
Note:
[root@k8s-master helm-push]
10.0.1.5 harbor.luffy.com
[root@k8s-master helm-push]
Error: looks like "https://harbor.luffy.com/chartrepo/luffy" is not a valid chart repository or cannot be reached: Get https://harbor.luffy.com/chartrepo/luffy/index.yaml: x509: certificate signed by unknown authority
[root@k8s-master helm]
[root@k8s-master helm]
[root@k8s-master helm]
[root@k8s-master helm]
"luffy" has been added to your repositories
[root@k8s-master helm]
NAME URL
stable https://charts.bitnami.com/bitnami
harbor https://helm.goharbor.io
luffy https://harbor.luffy.com/chartrepo/luffy
Push the chart to the repo:
$ helm push harbor luffy --ca-file=harbor.ca.crt -u admin -p Harbor12345
harbor: the name of the local chart directory
luffy: the name of the repo added above
Note: the harbor directory must not contain any extra files, otherwise the push errors out and the chart never reaches the repo
Note:
[root@k8s-master helm]
Pushing harbor-1.7.0.tgz to luffy...
Done.
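Once pushed, the chart can be consumed from the repo like any other; a sketch (harbor-2 is an arbitrary release name):
$ helm repo update
$ helm search repo luffy
$ helm -n harbor install harbor-2 luffy/harbor --ca-file=harbor.ca.crt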
Chapter recap
Advanced Kubernetes topics covered:
- how Kubernetes stores its data in etcd, and the basic etcdctl commands
- the scheduling process (predicates and priorities) and the settings that influence scheduling policy
- how Flannel networking works and where traffic flows, to help locate problems
- authentication and authorization: kubectl, kubelet, RBAC, and how custom programs call the API
- dynamic scaling with HPA, and the monitoring stack built on metrics-server
- PV + PVC
- Helm