k8s Environment Setup
Create three virtual machines; make sure their private network uses the same NIC and sits on the same subnet.
Creating a Cluster with kubeadm
Prerequisites (run on every machine)
Install Docker:
systemctl enable docker --now
hostnamectl set-hostname k8s-master   # on the master
hostnamectl set-hostname k8s-node1    # on node1
hostnamectl set-hostname k8s-node2    # on node2
hostname
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
systemctl stop firewalld
systemctl disable firewalld
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
free -g
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
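Optionally verify that the module and sysctl values took effect (a quick check, not part of the original notes):
lsmod | grep br_netfilter   # the module should be listed
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should print 1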
Install kubelet, kubeadm, and kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet-1.20.9 kubeadm-1.20.9 kubectl-1.20.9 --disableexcludes=kubernetes
systemctl enable kubelet --now
systemctl status kubelet
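A quick sanity check that matching versions landed (optional; these flags exist in the 1.20 tooling):
kubeadm version -o short           # expect v1.20.9
kubectl version --client --short   # expect v1.20.9
kubelet --version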
Bootstrapping the cluster with kubeadm
- Create a file named images.sh to pre-pull the control-plane images:
#!/bin/bash
images=(
kube-apiserver:v1.20.9
kube-proxy:v1.20.9
kube-controller-manager:v1.20.9
kube-scheduler:v1.20.9
coredns:1.7.0
etcd:3.4.13-0
pause:3.2
)
for imageName in "${images[@]}" ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done
- Make it executable and run it:
chmod +x ./images.sh && ./images.sh
Configure /etc/hosts (use your own node IPs):
vi /etc/hosts
172.31.0.4 cluster-endpoint
172.31.0.2 node1
172.31.0.3 node2
ping cluster-endpoint
ping node1
ping node2
Initialize kubeadm (run on the master only):
kubeadm init \
--apiserver-advertise-address=172.31.0.4 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
--kubernetes-version v1.20.9 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=192.168.0.0/16
- When initialization completes, save the output it prints
- Configure the pod network first, then join the worker nodes
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join cluster-endpoint:6443 --token tbi1jv.9qp9rzap3tjhp61y \
--discovery-token-ca-cert-hash sha256:46847d2f47dd4fd8a567eb947d244722bfbea6f780ebb66f1a3d66fa1aa3a82d \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join cluster-endpoint:6443 --token tbi1jv.9qp9rzap3tjhp61y \
--discovery-token-ca-cert-hash sha256:46847d2f47dd4fd8a567eb947d244722bfbea6f780ebb66f1a3d66fa1aa3a82d
Now check the node status on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@i-wgt9mh3x ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady control-plane,master 6m11s v1.20.9
Install the calico pod network add-on:
[root@i-wgt9mh3x ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 197k 100 197k 0 0 16514 0 0:00:12 0:00:12 --:--:-- 22016
[root@i-wgt9mh3x ~]# ls
calico.yaml images.sh
[root@i-wgt9mh3x ~]# kubectl apply -f calico.yaml
configmap/calico-config created
.....
[root@i-wgt9mh3x ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 16m v1.20.9
Run the printed join command on each worker node:
kubeadm join cluster-endpoint:6443 --token tbi1jv.9qp9rzap3tjhp61y \
--discovery-token-ca-cert-hash sha256:46847d2f47dd4fd8a567eb947d244722bfbea6f780ebb66f1a3d66fa1aa3a82d
[root@i-wgt9mh3x ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready control-plane,master 20m v1.20.9
k8s-node1 NotReady <none> 4s v1.20.9
k8s-node2 NotReady <none> 1s v1.20.9
Wait until all pods are in the Running state; once they are, the nodes show Ready and the deployment has succeeded.
If the join token has expired (tokens are valid for 24 hours), generate a fresh join command on the master:
[root@i-wgt9mh3x ~]# kubeadm token create --print-join-command
kubeadm join cluster-endpoint:6443 --token 7flayn.le264xh12n2ry3hj --discovery-token-ca-cert-hash sha256:46847d2f47dd4fd8a567eb947d244722bfbea6f780ebb66f1a3d66fa1aa3a82d
k8s-dashboard
Apply the manifest matching your cluster version:
[root@i-wgt9mh3x ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
By default the dashboard is only reachable inside the cluster (ClusterIP). Change the service type to NodePort to expose a port:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Before the change: type: ClusterIP
After the change: type: NodePort
Open the assigned node port in the security group, then access the dashboard from the public network.
Create the auth file dashboard-svc-account.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
Apply the auth file:
kubectl apply -f dashboard-svc-account.yaml
Get the login token:
[root@i-wgt9mh3x ~]# kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}'
dashboard-admin-token-qt2n5
[root@i-wgt9mh3x ~]# kubectl -n kube-system describe secret dashboard-admin-token-qt2n5 | awk '/^token/{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6InVsemhnZWtOVFByZERYVHk0cEl0Z1BteURYVHRsUXBqOWw1WHNxMjFwcU0ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcXQybjUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZGQxZmMxNjktNzJiMS00ZmNkLWJmMTAtN2FlMTcxYzFiYzI2Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.BAtp47SKRI6RbsCiYlgtcpQQUB2mUNwZiQH44jiMdPCOxro9YCyqGU_-HZhcX_EU8yQX1cVx7__F7HhKpHHRTKmX7mscjuDrdiT1IBuaKo-rcPP91snNwUtyDATGLSJaouZ7h5_H3nfbU05nTysUHLxUJARWwT1jzZL_nVyWQVb9GCr2zw2kl9h5RLb4LsuLgLRKQtfgl8emWKQfckD2wn2pO2V9ugOT9_weCcMYPXmxhSrh9IYS8dh84B-7TJjPplnFXFrbC4qNZM2ijOcgJ1-gCbMvSvpAn8FmjdOMffbhrc_QEtan6wC3nsJxJuB47hLeX-lVFhKV0lr2NcxpRQ
Open the dashboard and log in with the copied token.
Basic k8s Usage
Namespaces
Namespaces isolate resources, but they do not isolate the network.
kubectl create ns dev
kubectl delete ns dev
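To work inside a namespace, pass -n/--namespace to kubectl; a small sketch (the pod name is made up):
kubectl run mypod --image=nginx -n dev   # create a pod in the dev namespace
kubectl get pod -n dev                   # visible here...
kubectl get pod                          # ...but not in the default namespace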
A namespace can also be created from a YAML manifest (note the kind must be Namespace, not NameSpace):
apiVersion: v1
kind: Namespace
metadata:
  name: dev
Applying this manifest with kubectl apply -f gives:
namespace/dev created
[root@i-wgt9mh3x ~]# kubectl get ns
NAME STATUS AGE
default Active 166m
dev Active 5s
kube-node-lease Active 166m
kube-public Active 166m
kube-system Active 166m
kubernetes-dashboard Active 117m
[root@i-wgt9mh3x ~]# kubectl delete ns dev
namespace "dev" deleted
[root@i-wgt9mh3x ~]# kubectl get ns
NAME STATUS AGE
default Active 168m
kube-node-lease Active 168m
kube-public Active 168m
kube-system Active 168m
kubernetes-dashboard Active 120m
- Difference between create and apply: create can only create a resource, while apply can create or update one (if the config file has changed, the resource is updated); see the quick demo below.
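A minimal sketch of the difference (the deployment name and file are made up):
kubectl create deploy demo --image=nginx   # first run: deployment.apps/demo created
kubectl create deploy demo --image=nginx   # second run: error, AlreadyExists
kubectl apply -f demo.yaml                 # creates on the first run, updates on later runs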
Pod
A Pod is the smallest deployable unit in k8s.
[root@k8s-master ~]# kubectl run mynginx --image=nginx
pod/mynginx created
[root@k8s-master ~]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
mynginx 0/1 ContainerCreating 0 22s
mynginx 1/1 Running 0 52s
[root@k8s-master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mynginx 1/1 Running 0 2m11s 192.168.169.131 k8s-node2 <none> <none>
[root@k8s-master ~]# curl 192.168.169.131
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
...
[root@k8s-master ~]# kubectl exec -it mynginx -- /bin/bash
root@mynginx:/# cd /usr/share/nginx/html
root@mynginx:/usr/share/nginx/html# echo "hello nginx..." > index.html
root@mynginx:/usr/share/nginx/html# exit
exit
[root@k8s-master ~]# curl 192.168.169.131
hello nginx...
Related operations in the dashboard console
Create a pod using the YAML editor in the web UI, for example a two-container manifest like the sketch below.
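The notes do not include the manifest used in the UI; a sketch of what the two-container mypod01 could look like (image tags assumed):
apiVersion: v1
kind: Pod
metadata:
  name: mypod01
spec:
  containers:
  - name: nginx     # serves on port 80
    image: nginx
  - name: tomcat    # serves on port 8080
    image: tomcat:8.5.68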
Wait for the images to be pulled.
[root@k8s-master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mypod01 2/2 Running 0 11m 192.168.36.67 k8s-node1 <none> <none>
Accessing nginx from the tomcat container works.
Diagram of pod-internal networking: containers in one pod share the same IP, so they can reach each other over localhost. Container ports within a pod must not conflict, e.g. you cannot run two nginx containers both listening on port 80.
[root@k8s-master ~]# kubectl exec -it mypod01 -- /bin/bash
root@mypod01:/# curl 127.0.0.1
Deployment
Self-healing
Deploy an application:
[root@k8s-master ~]# kubectl create deploy mynginx --image=nginx
If a Deployment-managed pod is deleted, a replacement is pulled up on a healthy node (whereas a bare pod, once deleted, is gone for good).
- Delete one of the Deployment's pods: a new pod is created on a healthy node (self-healing).
Scaling
Scale from the command line:
kubectl scale --replicas=5 deploy mynginx
kubectl get deploy -w
Alternatively, change the replica count (e.g. to 3) by editing the Deployment directly:
kubectl edit deploy mynginx -n default
Pods are then created or terminated to match the new replica count.
Rolling updates
With the Deployment running, changing the image version starts a new container and kills an old one, one at a time, so the application never goes offline.
[root@k8s-master ~]# kubectl set image deploy mynginx nginx=nginx:1.16.1 --record
deployment.apps/mynginx image updated
[root@k8s-master ~]# kubectl get pod -w
NAME READY STATUS RESTARTS AGE
mynginx-5b686ccd46-6jpr9 1/1 Running 0 42m
mynginx-5b686ccd46-f625f 1/1 Running 0 47m
mynginx-5b686ccd46-rdsdq 1/1 Running 0 23m
mynginx-f9cbbdc9f-bxqj2 0/1 Pending 0 0s
mynginx-f9cbbdc9f-bxqj2 0/1 Pending 0 0s
mynginx-f9cbbdc9f-bxqj2 0/1 ContainerCreating 0 0s
mynginx-f9cbbdc9f-bxqj2 0/1 ContainerCreating 0 0s
mynginx-f9cbbdc9f-bxqj2 1/1 Running 0 52s
mynginx-5b686ccd46-rdsdq 1/1 Terminating 0 24m
mynginx-f9cbbdc9f-2n25n 0/1 Pending 0 0s
mynginx-f9cbbdc9f-2n25n 0/1 Pending 0 0s
mynginx-f9cbbdc9f-2n25n 0/1 ContainerCreating 0 0s
mynginx-5b686ccd46-rdsdq 1/1 Terminating 0 24m
mynginx-5b686ccd46-rdsdq 0/1 Terminating 0 24m
mynginx-f9cbbdc9f-2n25n 0/1 ContainerCreating 0 1s
mynginx-f9cbbdc9f-2n25n 1/1 Running 0 4s
mynginx-5b686ccd46-f625f 1/1 Terminating 0 48m
mynginx-f9cbbdc9f-sjvsc 0/1 Pending 0 0s
mynginx-f9cbbdc9f-sjvsc 0/1 Pending 0 0s
mynginx-f9cbbdc9f-sjvsc 0/1 ContainerCreating 0 0s
mynginx-5b686ccd46-f625f 1/1 Terminating 0 48m
mynginx-f9cbbdc9f-sjvsc 0/1 ContainerCreating 0 1s
mynginx-5b686ccd46-f625f 0/1 Terminating 0 48m
mynginx-5b686ccd46-rdsdq 0/1 Terminating 0 24m
mynginx-5b686ccd46-rdsdq 0/1 Terminating 0 24m
mynginx-5b686ccd46-f625f 0/1 Terminating 0 48m
mynginx-5b686ccd46-f625f 0/1 Terminating 0 48m
mynginx-f9cbbdc9f-sjvsc 1/1 Running 0 53s
mynginx-5b686ccd46-6jpr9 1/1 Terminating 0 44m
mynginx-5b686ccd46-6jpr9 1/1 Terminating 0 44m
mynginx-5b686ccd46-6jpr9 0/1 Terminating 0 44m
mynginx-5b686ccd46-6jpr9 0/1 Terminating 0 44m
mynginx-5b686ccd46-6jpr9 0/1 Terminating 0 44m
Rollback
[root@k8s-master ~]# kubectl rollout history deploy mynginx
deployment.apps/mynginx
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deploy mynginx nginx=nginx:1.16.1 --record=true
[root@k8s-master ~]# kubectl rollout undo deploy mynginx --to-revision=1
deployment.apps/mynginx rolled back
[root@k8s-master ~]# kubectl get deploy mynginx -o yaml | grep image
f:imagePullPolicy: {}
f:image: {}
- image: nginx
imagePullPolicy: Always
Other rollout subcommands (from kubectl rollout --help):
Available Commands:
history View rollout history
pause Mark the provided resource as paused
restart Restart a resource
resume Resume a paused resource
status Show the status of the rollout
undo Undo a previous rollout
Service
ClusterIP
Accessible only inside the cluster.
A Service gives a set of pods one stable, load-balanced access IP.
# map port 80 of the mynginx pods to port 8000 on the Service
[root@k8s-master ~]# kubectl expose deploy mynginx --port=8000 --target-port=80
service/mynginx exposed
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8h
mynginx ClusterIP 10.96.189.140 <none> 8000/TCP 2s
Change the index page in each of the three pods (run inside each container respectively):
echo 1111 > /usr/share/nginx/html/index.html
echo 2222 > /usr/share/nginx/html/index.html
echo 3333 > /usr/share/nginx/html/index.html
Access the Service (requests are load-balanced across the pods):
[root@k8s-master ~]# curl 10.96.189.140:8000
1111
[root@k8s-master ~]# curl 10.96.189.140:8000
2222
[root@k8s-master ~]# curl 10.96.189.140:8000
3333
- By default the Service gets a cluster-internal IP (ClusterIP); it is not reachable from outside the cluster
- Inside the cluster, the Service can also be reached by its DNS name, format: service-name.namespace.svc (see the sketch below)
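A sketch of in-cluster DNS access, using a throwaway busybox pod (pod name made up):
kubectl run tmp --rm -it --image=busybox -- sh
/ # wget -qO- mynginx.default.svc:8000    # resolved by cluster DNS; returns 1111/2222/3333 in turn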
NodePort
A NodePort Service opens the same random port (in the 30000-32767 range) on every node.
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
[root@k8s-master ~]# kubectl expose deploy mynginx --port=8000 --target-port=80 --type=NodePort
service/mynginx exposed
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 21h
mynginx NodePort 10.96.29.32 <none> 8000:30096/TCP 5s
[root@k8s-master ~]# curl 172.31.0.4:30096
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
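If you need a fixed port instead of a random one, nodePort can be set explicitly in the Service spec; a sketch reusing the values above:
apiVersion: v1
kind: Service
metadata:
  name: mynginx
spec:
  type: NodePort
  selector:
    app: mynginx        # label set by kubectl create deploy mynginx
  ports:
  - port: 8000
    targetPort: 80
    nodePort: 30096     # must fall in the 30000-32767 range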
Ingress
Install:
wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.47.0/deploy/static/provider/baremetal/deploy.yaml
vi deploy.yaml
# change the controller image to this mirror:
registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/ingress-nginx-controller:v0.46.0
kubectl get pod,svc -n ingress-nginx
[root@k8s-master ~]# kubectl get svc -A | grep ingress
ingress-nginx ingress-nginx-controller NodePort 10.96.81.134 <none> 80:30533/TCP,443:30726/TCP 5m12s
Test access:
http:  http://<node-ip>:30533
https: https://<node-ip>:30726
Usage
Prepare the deployment file ingress-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
Prepare the rule file ingress-test.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.k8s.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.k8s.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
[root@k8s-master ~]# kubectl apply -f ingress-test.yaml
ingress.networking.k8s.io/ingress-host-bar created
[root@k8s-master ~]# kubectl get ingress -A
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
default ingress-host-bar nginx hello.k8s.com,demo.k8s.com 80 3s
Map the test domains to a node IP in your local hosts file.
Test from the host machine: the request does reach nginx running in the pod.
An nginx error page here is returned by the ingress controller itself: the /nginx path matches nothing in the backend nginx, so ingress answers the request.
Test path rewriting (following the official ingress-nginx docs).
Modify the rule file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-host-bar
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.k8s.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.k8s.com"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx(/|$)(.*)"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
Edit ingress-test.yaml and re-apply:
kubectl apply -f ingress-test.yaml
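With the rewrite in place, the /nginx prefix is stripped before the request reaches the backend; a quick check (assumes the ingress NodePort 30533 from above):
curl http://demo.k8s.com:30533/nginx/          # rewritten to /, serves the nginx index page
curl http://demo.k8s.com:30533/nginx/abc.html  # rewritten to /abc.html in the backend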
Create ingress-test01.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-limit-rate
  annotations:
    nginx.ingress.kubernetes.io/limit-rps: "1"
spec:
  ingressClassName: nginx
  rules:
  - host: "haha.k8s.com"
    http:
      paths:
      - pathType: Exact
        path: "/"
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
Map the test domain in the hosts file, then apply the rate-limit rule:
139.198.164.86 haha.k8s.com
kubectl create -f ingress-test01.yaml
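A rough way to watch the limit kick in (assumes the ingress NodePort 30533 from above; ingress-nginx tolerates a small burst before answering 503):
for i in $(seq 1 10); do
  curl -s -o /dev/null -w "%{http_code}\n" http://haha.k8s.com:30533/
done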
- k8s network model diagram
Storage Abstraction
Why use an NFS file server?
If a directory is mounted from a single node (say node1) and the pod is destroyed and recreated on node2, the data saved on node1 is not available there, so the data goes out of sync. A shared NFS server avoids this.
Environment setup
Install the NFS utilities (on all machines):
yum install -y nfs-utils
- On the master (the master acts as the NFS server):
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
mkdir -p /nfs/data
systemctl enable rpcbind --now
systemctl enable nfs-server --now
exportfs -r
Test from a worker node (NFS client):
showmount -e 172.31.0.4
mkdir -p /nfs/data
mount -t nfs 172.31.0.4:/nfs/data /nfs/data
echo "hello nfs server" > /nfs/data/test.txt
[root@k8s-master data]# cat test.txt
hello nfs server222
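To confirm the share really is two-way, write from the client and read back on the server (a quick check, not part of the original notes):
# on the client (node1/node2):
echo "hello from client" >> /nfs/data/test.txt
# on the master (NFS server):
cat /nfs/data/test.txt    # shows the line appended by the client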
Direct NFS mount in a Deployment
Create the deployment file nfs-test.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        nfs:
          server: 172.31.0.4
          path: /nfs/data/nginx-pv
mkdir -p /nfs/data/nginx-pv   # create the shared directory on the NFS server first
kubectl apply -f nfs-test.yaml
echo "hello nginx" > /nfs/data/nginx-pv/index.html
[root@k8s-master nginx-pv]# kubectl get pod -o wide
nginx-pv-demo-5ffb6666b-dqm7q 1/1 Running 0 5m27s 192.168.36.87 k8s-node1 <none> <none>
nginx-pv-demo-5ffb6666b-qmllk 1/1 Running 0 5m27s 192.168.169.150 k8s-node2 <none> <none>
[root@k8s-master nginx-pv]# curl 192.168.36.87
hello nginx
[root@k8s-master nginx-pv]# curl 192.168.169.150
hello nginx
PV & PVC
PV: PersistentVolume, a piece of storage where application data is persisted at a specified location
PVC: PersistentVolumeClaim, a declaration of the persistent-volume spec an application needs
Static PV provisioning
mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 172.31.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 172.31.0.4
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 172.31.0.4
Applying this manifest creates the three PVs:
persistentvolume/pv01-10m created
persistentvolume/pv02-1gi created
persistentvolume/pv03-3gi created
A PVC requests storage and binds to a matching NFS PV:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
Applying the PVC gives:
persistentvolumeclaim/nginx-pvc created
[root@k8s-master ~]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv01-10m 10M RWX Retain Available nfs 19m
pv02-1gi 1Gi RWX Retain Bound default/nginx-pvc nfs 19m
pv03-3gi 3Gi RWX Retain Available nfs 19m
[root@k8s-master ~]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nginx-pvc Bound pv02-1gi 1Gi RWX nfs 11s
Create a Deployment that mounts the PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: nginx-pvc
Apply this Deployment.
The claim is bound to pv02, so the mount ends up in /nfs/data/02.
Write into pv02's directory and verify from both pods:
[root@k8s-master ~]# cd /nfs/data/02
[root@k8s-master 02]# echo 111 > index.html
[root@k8s-master 02]# kubectl get pod -o wide
nginx-deploy-pvc-79fc8558c7-crg8q 1/1 Running 0 7m51s 192.168.36.89 k8s-node1 <none> <none>
nginx-deploy-pvc-79fc8558c7-ld89r 1/1 Running 0 7m51s 192.168.169.152 k8s-node2 <none> <none>
[root@k8s-master 02]# curl 192.168.36.89
111
[root@k8s-master 02]# curl 192.168.169.152
111
PV pool: dynamic provisioning
With dynamic provisioning, a PV of whatever size a pod requests is created on demand for that pod to use; see the sketch below.
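A sketch of what dynamic provisioning looks like from the PVC side, assuming a StorageClass named nfs-storage backed by an installed provisioner (e.g. nfs-subdir-external-provisioner; not set up in these notes):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: auto-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage   # assumed StorageClass with a dynamic provisioner
  resources:
    requests:
      storage: 500Mi              # a matching PV of this size is created automatically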
ConfigMap
A ConfigMap holds a set of configuration data.
1. First create the config file and load it into a ConfigMap:
[root@k8s-master ~]# kubectl create cm redis-conf --from-file=redis.conf
[root@k8s-master ~]# kubectl get cm
NAME         DATA   AGE
redis-conf   1      17s
[root@k8s-master ~]# kubectl describe cm redis-conf
Name: redis-conf
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
redis.conf:
----
appendonly yes
Events: <none>
Next, mount redis.conf from the redis-conf ConfigMap into the /redis-master directory in the container.
Create the pod:
apiVersion: v1
kind: Pod
metadata:
  name: redis
spec:
  containers:
  - name: redis
    image: redis
    command:
      - redis-server
      - "/redis-master/redis.conf"
    ports:
    - containerPort: 6379
    volumeMounts:
    - mountPath: /data
      name: data
    - mountPath: /redis-master
      name: config
  volumes:
  - name: data
    emptyDir: {}
  - name: config
    configMap:
      name: redis-conf
      items:
      - key: redis.conf
        path: redis.conf
Wait for the pod to be created, then check the mounted file:
[root@k8s-master ~]# kubectl exec -it redis -- /bin/bash
root@redis:/data# cat /redis-master/redis.conf
appendonly yes
root@redis:/data# exit
Add another entry (requirepass 123456) to the ConfigMap:
[root@k8s-master ~]# kubectl edit cm redis-conf
[root@k8s-master ~]# kubectl exec -it redis -- /bin/bash
root@redis:/data# cat /redis-master/redis.conf
appendonly yes
requirepass 123456
root@redis:/data# redis-cli
127.0.0.1:6379> config get appendonly
1) "appendonly"
2) "yes"
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) ""
127.0.0.1:6379>
The mounted file has the new value, but redis reads its config only at startup, so requirepass is still empty. Try stopping the container so kubelet restarts it:
[root@k8s-master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
redis 1/1 Running 1 20m 192.168.169.153 k8s-node2 <none> <none>
[root@k8s-node2 data]# docker ps | grep redis
1dcd88b0608b redis "redis-server /redis…" 13 minutes ago Up 13 minutes k8s_redis_redis_default_6b5912eb-d879-4c1b-b171-69530e97ca0d_0
32b2a2524c56 registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/pause:3.2 "/pause" 14 minutes ago Up 14 minutes k8s_POD_redis_default_6b5912eb-d879-4c1b-b171-69530e97ca0d_0
[root@k8s-node2 data]# docker stop 1dcd88b0608b
1dcd88b0608b
[root@k8s-master ~]# kubectl exec -it redis -- /bin/bash
root@redis:/data# redis-cli
127.0.0.1:6379> set aa bb
(error) NOAUTH Authentication required.
127.0.0.1:6379> exit
root@redis:/data# redis-cli -a 123456
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6379> set aa bb
OK
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) "123456"
Secret
Docker registry authentication is just a matter of logging in.
Without logging in, images in a private repository cannot be pulled (and cannot be pushed either).
In k8s, a Secret can carry these registry credentials.
Try pulling from a private repository
Create a pod file that references a private nginx image and try to pull it:
apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: zhaoxingyudocker/nginx-zhaoxingyu:1.0
After applying, check the pod:
[root@k8s-master data]# kubectl get pod
NAME READY STATUS RESTARTS AGE
private-nginx 0/1 ImagePullBackOff 0 35s
[root@k8s-master data]# kubectl describe pod private-nginx
Warning Failed 21s (x4 over 96s) kubelet Error: ImagePullBackOff
Warning Failed 2s (x4 over 96s) kubelet Failed to pull image "zhaoxingyudocker/nginx-zhaoxingyu:1.0": rpc error: code = Unknown desc = Error response from daemon: pull access denied for zhaoxingyudocker/nginx-zhaoxingyu, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
Warning Failed 2s (x4 over 96s) kubelet Error: ErrImagePull
Create a docker-registry Secret holding the repository credentials:
kubectl create secret docker-registry zhaoxingyu-docker \
  --docker-username=zhaoxingyudocker \
  --docker-password=z1549165516 \
  --docker-email=2920991307@qq.com
The generic form:
kubectl create secret docker-registry regcred \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
[root@k8s-master data]# kubectl get secret
NAME TYPE DATA AGE
zhaoxingyu-docker kubernetes.io/dockerconfigjson 1 4m5s
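If needed, you can inspect the stored credentials; the .dockerconfigjson payload is base64-encoded:
kubectl get secret zhaoxingyu-docker -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d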
apiVersion: v1
kind: Pod
metadata:
  name: private-nginx
spec:
  containers:
  - name: private-nginx
    image: zhaoxingyudocker/nginx-zhaoxingyu:1.0
  imagePullSecrets:
  - name: zhaoxingyu-docker
- Delete the previous pod (private-nginx)
- Apply the new yaml
[root@k8s-master data]# kubectl get pod
NAME READY STATUS RESTARTS AGE
private-nginx 0/1 ImagePullBackOff 0 17m
[root@k8s-master data]# kubectl delete pod private-nginx
pod "private-nginx" deleted
Applying the updated yaml:
pod/private-nginx created
[root@k8s-master data]# kubectl get pod
NAME READY STATUS RESTARTS AGE
private-nginx 0/1 ContainerCreating 0 4s
[root@k8s-master data]# kubectl get pod
NAME READY STATUS RESTARTS AGE
private-nginx 1/1 Running 0 14s