1. Using ConfigMap to provide configuration files for Nginx and MySQL
Decouple configuration from the image: store the configuration in a ConfigMap object, then import the ConfigMap into the Pod. Here a ConfigMap is declared and mounted into the Pod as a volume.
Ways to handle configuration changes:
- bake the service's configuration file directly into the image
- ConfigMap: decouple the configuration from the image
- a configuration center such as Apollo
Ways to pass environment variables:
- defined with ENV in the Dockerfile
- supplied from a ConfigMap
- written directly in the YAML manifest
- via configuration files
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
default: |
server {
listen 80;
server_name www.mysite.com;
index index.html;
location / {
root /data/nginx/html;
if (!-e $request_filename) {
rewrite ^/(.*) /index.html last;
}
}
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-8080
image: tomcat
ports:
- containerPort: 8080
volumeMounts:
- name: nginx-config
mountPath: /data
- name: ng-deploy-80
image: nginx
ports:
- containerPort: 80
volumeMounts:
- mountPath: /data/nginx/html
name: nginx-static-dir
- name: nginx-config
mountPath: /etc/nginx/conf.d
volumes:
- name: nginx-static-dir
hostPath:
path: /data/nginx/linux39
- name: nginx-config
configMap:
name: nginx-config
items:
- key: default
path: mysite.conf
---
apiVersion: v1
kind: Service
metadata:
name: ng-deploy-80
spec:
ports:
- name: http
port: 81
targetPort: 80
nodePort: 30019
protocol: TCP
type: NodePort
selector:
app: ng-deploy-80
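To confirm that the ConfigMap content actually reaches the nginx container, the rendered file can be read back and validated. A quick sketch, assuming only the labels and mount paths from the manifests above:
POD=$(kubectl get pod -l app=ng-deploy-80 -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD -c ng-deploy-80 -- cat /etc/nginx/conf.d/mysite.conf
kubectl exec -it $POD -c ng-deploy-80 -- nginx -t    #nginx validates the mounted config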
A ConfigMap can also supply environment variables to a container:
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-config
data:
username: user1
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-80
image: nginx
env:
- name: "test"
value: "value"
- name: MY_USERNAME
valueFrom:
configMapKeyRef:
name: nginx-config
key: username
ports:
- containerPort: 80
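To verify that the variables arrived in the container, a sketch assuming the manifest above has been applied:
POD=$(kubectl get pod -l app=ng-deploy-80 -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it $POD -- printenv test MY_USERNAME    #expect: value / user1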
2. Running a ZooKeeper cluster on PV and PVC
ZooKeeper documentation: https://zookeeper.apache.org/doc/current/zookeeperOver.html
Note: a 3-node ensemble generally performs faster than a 5-node one because of write amplification: a client write is routed to the leader, written there, and then synced to the followers. With 3 nodes that is one write plus 2 copies; with 5 nodes, one write plus 4 copies. Three nodes therefore incur less network and I/O overhead and are faster.
The following walks through building a ZooKeeper image on top of a JDK 8 base image and starting a 3-node ZooKeeper cluster on K8S.
2.1 Pull the JDK base image
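A plausible pull/tag/push sequence, assuming the JDK base is the public elevy/slim_java:8 image that the FROM line in 2.2 references:
docker pull elevy/slim_java:8
docker tag elevy/slim_java:8 harbor.k8s.local/k8s/elevy_slim_java:8
docker push harbor.k8s.local/k8s/elevy_slim_java:8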
2.2 Build the ZooKeeper image
root@master1:~# cat Dockerfile
FROM harbor.k8s.local/k8s/elevy_slim_java:8
ENV ZK_VERSION 3.4.14
ADD repositories /etc/apk/repositories
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
RUN apk add --no-cache --virtual .build-deps \
ca-certificates \
gnupg \
tar \
wget && \
apk add --no-cache \
bash && \
export GNUPGHOME="$(mktemp -d)" && \
gpg -q --batch --import /tmp/KEYS && \
gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
cd /zookeeper && \
cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
rm -rf \
*.txt \
*.xml \
bin/README.txt \
bin/*.cmd \
conf/* \
contrib \
dist-maven \
docs \
lib/*.txt \
lib/cobertura \
lib/jdiff \
recipes \
src \
zookeeper-*.asc \
zookeeper-*.md5 \
zookeeper-*.sha1 && \
apk del .build-deps && \
rm -rf /tmp/* "$GNUPGHOME"
COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /
ENV PATH=/zookeeper/bin:${PATH} \
ZOO_LOG_DIR=/zookeeper/log \
ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
JMXPORT=9010
ENTRYPOINT [ "/entrypoint.sh" ]
CMD [ "zkServer.sh", "start-foreground" ]
EXPOSE 2181 2888 3888 9010
# cat bin/zkReady.sh
/zookeeper/bin/zkServer.sh status | egrep 'Mode: (standalone|leading|following|observing)'
# cat entrypoint.sh
echo ${MYID:-1} > /zookeeper/data/myid
if [ -n "$SERVERS" ]; then
IFS=\, read -a servers <<<"$SERVERS"
for i in "${!servers[@]}"; do
printf "\nserver.%i=%s:2888:3888" "$((1 + $i))" "${servers[$i]}" >> /zookeeper/conf/zoo.cfg
done
fi
cd /zookeeper
exec "$@"
The server entries appended to zoo.cfg take the following form:
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
That is: server(a ZooKeeper keyword).<server ID, generated by the entrypoint.sh script>=<hostname of the ZooKeeper node (must be resolvable; in K8S this is solved by giving every ZooKeeper pod its own Service)>:<port the leader uses to sync data to the followers>:<port used for leader election>
https://zookeeper.apache.org/doc/current/zookeeperStarted.html
# cat conf/log4j.properties
zookeeper.root.logger=INFO, CONSOLE, ROLLINGFILE
zookeeper.console.threshold=INFO
zookeeper.log.dir=/zookeeper/log
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
zookeeper.tracelog.dir=/zookeeper/log
zookeeper.tracelog.file=zookeeper_trace.log
log4j.rootLogger=${zookeeper.root.logger}
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
log4j.appender.ROLLINGFILE.MaxBackupIndex=5
log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}
log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n
# cat conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/wal
autopurge.purgeInterval=1
clientPort=2181
quorumListenOnAllIPs=true
# cat repositories
http://mirrors.aliyun.com/alpine/v3.6/main
http://mirrors.aliyun.com/alpine/v3.6/community
#the Apache signing KEYS file and the ZooKeeper 3.4.14 release tarball come from:
wget https://downloads.apache.org/zookeeper/KEYS
https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/
# cat build-command.sh
TAG=$1
docker build -t harbor.k8s.local/k8s/zookeeper:${TAG} .
sleep 1
docker push harbor.k8s.local/k8s/zookeeper:${TAG}
2.3 Test the ZooKeeper image
Install a Java runtime on the Mac: https://www.oracle.com/java/technologies/downloads/
Launching ZooInspector: https://www.jianshu.com/p/f45af8027d7f
Enter the ZooInspector\build directory and run zookeeper-dev-ZooInspector.jar.
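The image can also be smoke-tested locally before going to K8S. A sketch, where <TAG> stands for the tag produced by build-command.sh:
docker run -d --name zk-test -p 2181:2181 harbor.k8s.local/k8s/zookeeper:<TAG>
docker exec zk-test zkServer.sh status    #a single node should report Mode: standalone
ZooInspector can then be pointed at 127.0.0.1:2181.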
2.4 Run the ZooKeeper service on K8S
2.4.1 Prepare the YAML manifests
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: zookeeper-datadir-pv-1
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.6.87
path: /data/k8sdata/zookeeper-datadir-1
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: zookeeper-datadir-pv-2
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.6.87
path: /data/k8sdata/zookeeper-datadir-2
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: zookeeper-datadir-pv-3
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteOnce
nfs:
server: 192.168.6.87
path: /data/k8sdata/zookeeper-datadir-3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zookeeper-datadir-pvc-1
namespace: zookeeper
spec:
accessModes:
- ReadWriteOnce
volumeName: zookeeper-datadir-pv-1
resources:
requests:
storage: 4Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zookeeper-datadir-pvc-2
namespace: zookeeper
spec:
accessModes:
- ReadWriteOnce
volumeName: zookeeper-datadir-pv-2
resources:
requests:
storage: 4Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: zookeeper-datadir-pvc-3
namespace: zookeeper
spec:
accessModes:
- ReadWriteOnce
volumeName: zookeeper-datadir-pv-3
resources:
requests:
storage: 4Gi
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper
namespace: zookeeper
spec:
ports:
- name: client
port: 2181
selector:
app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper1
namespace: zookeeper
spec:
type: NodePort
ports:
- name: client
port: 2181
nodePort: 42181
- name: followers
port: 2888
- name: election
port: 3888
selector:
app: zookeeper
server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper2
namespace: zookeeper
spec:
type: NodePort
ports:
- name: client
port: 2181
nodePort: 42182
- name: followers
port: 2888
- name: election
port: 3888
selector:
app: zookeeper
server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
name: zookeeper3
namespace: zookeeper
spec:
type: NodePort
ports:
- name: client
port: 2181
nodePort: 42183
- name: followers
port: 2888
- name: election
port: 3888
selector:
app: zookeeper
server-id: "3"
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: zookeeper1
namespace: zookeeper
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
server-id: "1"
spec:
containers:
- name: server
image: harbor.k8s.local/k8s/zookeeper:jksie1ks-20211010-120323
imagePullPolicy: Always
env:
- name: MYID
value: "1"
- name: SERVERS
value: "zookeeper1,zookeeper2,zookeeper3"
- name: JVMFLAGS
value: "-Xmx2G"
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
volumeMounts:
- mountPath: "/zookeeper/data"
name: zookeeper-datadir-pvc-1
volumes:
- name: zookeeper-datadir-pvc-1
persistentVolumeClaim:
claimName: zookeeper-datadir-pvc-1
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: zookeeper2
namespace: zookeeper
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
server-id: "2"
spec:
containers:
- name: server
image: harbor.k8s.local/k8s/zookeeper:jksie1ks-20211010-120323
imagePullPolicy: Always
env:
- name: MYID
value: "2"
- name: SERVERS
value: "zookeeper1,zookeeper2,zookeeper3"
- name: JVMFLAGS
value: "-Xmx2G"
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
volumeMounts:
- mountPath: "/zookeeper/data"
name: zookeeper-datadir-pvc-2
volumes:
- name: zookeeper-datadir-pvc-2
persistentVolumeClaim:
claimName: zookeeper-datadir-pvc-2
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: zookeeper3
namespace: zookeeper
spec:
replicas: 1
selector:
matchLabels:
app: zookeeper
template:
metadata:
labels:
app: zookeeper
server-id: "3"
spec:
containers:
- name: server
image: harbor.k8s.local/k8s/zookeeper:jksie1ks-20211010-120323
imagePullPolicy: Always
env:
- name: MYID
value: "3"
- name: SERVERS
value: "zookeeper1,zookeeper2,zookeeper3"
- name: JVMFLAGS
value: "-Xmx2G"
ports:
- containerPort: 2181
- containerPort: 2888
- containerPort: 3888
volumeMounts:
- mountPath: "/zookeeper/data"
name: zookeeper-datadir-pvc-3
volumes:
- name: zookeeper-datadir-pvc-3
persistentVolumeClaim:
claimName: zookeeper-datadir-pvc-3
2.4.2 Create the PVs
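A sketch of the create step, assuming the PV manifests above are saved as zookeeper-persistentvolume.yaml (hypothetical filename):
kubectl apply -f zookeeper-persistentvolume.yaml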
2.4.3 Verify the PVs
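The listing below comes from:
kubectl get pv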
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
zookeeper-datadir-pv-1 5Gi RWO Retain Available 34s
zookeeper-datadir-pv-2 5Gi RWO Retain Available 34s
zookeeper-datadir-pv-3 5Gi RWO Retain Available 34s
2.4.4 Create the PVCs
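A sketch, assuming the PVC manifests are saved as zookeeper-persistentvolumeclaim.yaml (hypothetical filename); the namespace must exist first:
kubectl create namespace zookeeper
kubectl apply -f zookeeper-persistentvolumeclaim.yaml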
2.4.5 Verify the PVCs
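The listing below comes from:
kubectl get pvc -n zookeeper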
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
zookeeper-datadir-pvc-1 Bound zookeeper-datadir-pv-1 5Gi RWO 28s
zookeeper-datadir-pvc-2 Bound zookeeper-datadir-pv-2 5Gi RWO 28s
zookeeper-datadir-pvc-3 Bound zookeeper-datadir-pv-3 5Gi RWO 28s
2.4.6 Verify the storage volumes in the Dashboard
2.4.7 Run the ZooKeeper cluster
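A sketch of the apply step, assuming the Services and Deployments above are combined into zookeeper.yaml (hypothetical filename):
kubectl apply -f zookeeper.yaml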
service/zookeeper created
service/zookeeper1 created
service/zookeeper2 created
service/zookeeper3 created
deployment.apps/zookeeper1 created
deployment.apps/zookeeper2 created
deployment.apps/zookeeper3 created
2.4.8 Verify the ZooKeeper cluster
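The pod listing and the three status checks below were produced along these lines (pod names vary per run):
kubectl get pods -n zookeeper
kubectl exec -it zookeeper1-7bd866b955-vdqx2 -n zookeeper -- bash
#inside the container:
/zookeeper/bin/zkServer.sh status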
NAME READY STATUS RESTARTS AGE
zookeeper1-7bd866b955-vdqx2 1/1 Running 1 49s
zookeeper2-ffd4c44c8-lwd9s 1/1 Running 0 49s
zookeeper3-6dc789b469-svw2f 1/1 Running 0 49s
bash-4.3
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader
bash-4.3
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower
bash-4.3
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower
2.4.9 Access the ZooKeeper cluster from outside
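Each member is exposed through its own NodePort service (42181/42182/42183), so an external client can target a specific node. A sketch, with <node-ip> as a placeholder for any K8S node address:
zkCli.sh -server <node-ip>:42181
ZooInspector (see 2.3) can connect to the same address.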
3. Run Nginx and Tomcat from custom images, with NFS, for static/dynamic separation
Implement a static/dynamic-separation architecture based on Nginx+Tomcat+NFS in which dynamic requests are forwarded by domain to the Tomcat pods; the web pages served by Nginx+Tomcat+NFS inside the K8S cluster must be reachable through the load balancer VIP.
- (Alternatively, nginx could call the tomcat pods through a service registry, i.e. a microservice architecture.)
3.1 Build the images
3.1.1 Build a system base image with filebeat
#custom CentOS base image
FROM centos:7.9.2009
MAINTAINER Eric.Zhang mail@xx.com
ADD filebeat-7.6.2-x86_64.rpm /tmp
RUN yum install -y /tmp/filebeat-7.6.2-x86_64.rpm vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop && rm -rf /etc/localtime /tmp/filebeat-7.6.2-x86_64.rpm && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && useradd www -u 2020 && useradd nginx -u 2021
docker build -t harbor.k8s.local/k8s/centos-base:7.9.2009 .
docker push harbor.k8s.local/k8s/centos-base:7.9.2009
3.1.2 Build the Nginx base image
#Nginx Base Image
FROM harbor.k8s.local/k8s/centos-base:7.9.2009
MAINTAINER Eric.Zhang mail@xx.com
RUN yum install -y vim wget tree lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
ADD nginx-1.18.0.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.18.0 && ./configure && make && make install && ln -sv /usr/local/nginx/sbin/nginx /usr/sbin/nginx &&rm -rf /usr/local/src/nginx-1.18.0.tar.gz
docker build -t harbor.k8s.local/k8s/nginx-base:v1.18.0 .
sleep 1
docker push harbor.k8s.local/k8s/nginx-base:v1.18.0
3.1.3 Build the Nginx application image
#Nginx 1.18.0
FROM harbor.k8s.local/k8s/nginx-base:v1.18.0
ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD app1.tar.gz /usr/local/nginx/html/
ADD index.html /usr/local/nginx/html/index.html
#mount points for static resources
RUN mkdir -p /usr/local/nginx/html/webapp/static /usr/local/nginx/html/webapp/images
EXPOSE 80 443
CMD ["nginx"]
cat build-command.sh
TAG=$1
docker build -t harbor.k8s.local/k8s/nginx-web1:${TAG} .
echo "镜像构建完成,即将上传到harbor"
sleep 1
docker push harbor.k8s.local/k8s/nginx-web1:${TAG}
echo "镜像上传到harbor完成"
# cat nginx.conf
user nginx nginx;
worker_processes auto;
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;
#pid logs/nginx.pid;
daemon off;
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
#log_format main '$remote_addr - $remote_user [$time_local] "$request" '
# '$status $body_bytes_sent "$http_referer" '
# '"$http_user_agent" "$http_x_forwarded_for"';
#access_log logs/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
upstream tomcat_webserver { #in the initial test build this tomcat upstream was omitted
server tomcat-app1-service.nginx-tomcat.svc.k8s.local:80;
}
server {
listen 80;
server_name localhost;
#charset koi8-r;
#access_log logs/host.access.log main;
location / {
root html;
index index.html index.htm;
}
location /webapp {
root html;
index index.html index.htm;
}
location /myapp {
proxy_pass http://tomcat_webserver;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
#error_page 404 /404.html;
# redirect server error pages to the static page /50x.html
#
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root html;
}
# proxy the PHP scripts to Apache listening on 127.0.0.1:80
#
#location ~ \.php$ {
# proxy_pass http://127.0.0.1;
#}
# pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
#
#location ~ \.php$ {
# root html;
# fastcgi_pass 127.0.0.1:9000;
# fastcgi_index index.php;
# fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
# include fastcgi_params;
#}
# deny access to .htaccess files, if Apache's document root
# concurs with nginx's one
#
#location ~ /\.ht {
# deny all;
#}
}
# another virtual host using mix of IP-, name-, and port-based configuration
#
#server {
# listen 8000;
# listen somename:8080;
# server_name somename alias another.alias;
# location / {
# root html;
# index index.html index.htm;
# }
#}
# HTTPS server
#
#server {
# listen 443 ssl;
# server_name localhost;
# ssl_certificate cert.pem;
# ssl_certificate_key cert.key;
# ssl_session_cache shared:SSL:1m;
# ssl_session_timeout 5m;
# ssl_ciphers HIGH:!aNULL:!MD5;
# ssl_prefer_server_ciphers on;
# location / {
# root html;
# index index.html index.htm;
# }
#}
}
3.1.4 Build the JDK base image
#JDK Base Image
FROM harbor.k8s.local/k8s/centos-base:7.9.2009
MAINTAINER Eric.Zhang mail@xx.com
ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
ADD profile /etc/profile
ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin
docker build -t harbor.k8s.local/k8s/jdk-base:v8.212 .
sleep 1
docker push harbor.k8s.local/k8s/jdk-base:v8.212
3.1.5 Build the Tomcat base image
#Tomcat 8.5.43 base image
FROM harbor.k8s.local/k8s/jdk-base:v8.212
MAINTAINER Eric.Zhang mail@xx.com
RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv
ADD apache-tomcat-8.5.43.tar.gz /apps
RUN useradd tomcat -u 2022 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R tomcat.tomcat /apps /data -R
docker build -t harbor.k8s.local/k8s/tomcat-base:v8.5.43 .
sleep 3
docker push harbor.k8s.local/k8s/tomcat-base:v8.5.43
3.1.6 Build the Tomcat application image
#tomcat web1
FROM harbor.k8s.local/k8s/tomcat-base:v8.5.43
ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
#ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown -R nginx.nginx /data/ /apps/
#ADD filebeat-7.5.1-x86_64.rpm /tmp/
#RUN cd /tmp && yum localinstall -y filebeat-7.5.1-amd64.deb
EXPOSE 8080 8443
CMD ["/apps/tomcat/bin/run_tomcat.sh"]
# cat build-command.sh
TAG=$1
docker build -t harbor.k8s.local/k8s/tomcat-app1:${TAG} .
sleep 3
docker push harbor.k8s.local/k8s/tomcat-app1:${TAG}
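As with the nginx image, the tag comes in as the first argument; the v1 tag used by the Deployment in 3.3 would be built with:
bash build-command.sh v1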
# cat server.xml
<?xml version='1.0' encoding='utf-8'?>
<Server port="8005" shutdown="SHUTDOWN">
<Listener className="org.apache.catalina.startup.VersionLoggerListener" />
<Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
<Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
<Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
<Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
<GlobalNamingResources>
<Resource name="UserDatabase" auth="Container"
type="org.apache.catalina.UserDatabase"
description="User database that can be updated and saved"
factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
pathname="conf/tomcat-users.xml" />
</GlobalNamingResources>
<Service name="Catalina">
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" />
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost">
<Realm className="org.apache.catalina.realm.LockOutRealm">
<Realm className="org.apache.catalina.realm.UserDatabaseRealm"
resourceName="UserDatabase"/>
</Realm>
<Host name="localhost" appBase="/data/tomcat/webapps" unpackWARs="false" autoDeploy="false">
<Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
prefix="localhost_access_log" suffix=".txt"
pattern="%h %l %u %t "%r" %s %b" />
</Host>
</Engine>
</Service>
</Server>
JAVA_OPTS="-server -Xms1g -Xmx1g -Xss512k -Xmn1g -XX:CMSInitiatingOccupancyFraction=65 -XX:+UseFastAccessorMethods -XX:+AggressiveOpts -XX:+UseBiasedLocking -XX:+DisableExplicitGC -XX:MaxTenuringThreshold=10 -XX:NewSize=2048M -XX:MaxNewSize=2048M -XX:NewRatio=2 -XX:PermSize=128m -XX:MaxPermSize=512m -XX:CMSFullGCsBeforeCompaction=5 -XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled"
cygwin=false
darwin=false
os400=false
hpux=false
case "`uname`" in
CYGWIN*) cygwin=true;;
Darwin*) darwin=true;;
OS400*) os400=true;;
HP-UX*) hpux=true;;
esac
PRG="$0"
while [ -h "$PRG" ]; do
ls=`ls -ld "$PRG"`
link=`expr "$ls" : '.*-> \(.*\)$'`
if expr "$link" : '/.*' > /dev/null; then
PRG="$link"
else
PRG=`dirname "$PRG"`/"$link"
fi
done
PRGDIR=`dirname "$PRG"`
[ -z "$CATALINA_HOME" ] && CATALINA_HOME=`cd "$PRGDIR/.." >/dev/null; pwd`
[ -z "$CATALINA_BASE" ] && CATALINA_BASE="$CATALINA_HOME"
CLASSPATH=
if [ -r "$CATALINA_BASE/bin/setenv.sh" ]; then
. "$CATALINA_BASE/bin/setenv.sh"
elif [ -r "$CATALINA_HOME/bin/setenv.sh" ]; then
. "$CATALINA_HOME/bin/setenv.sh"
fi
if $cygwin; then
[ -n "$JAVA_HOME" ] && JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
[ -n "$JRE_HOME" ] && JRE_HOME=`cygpath --unix "$JRE_HOME"`
[ -n "$CATALINA_HOME" ] && CATALINA_HOME=`cygpath --unix "$CATALINA_HOME"`
[ -n "$CATALINA_BASE" ] && CATALINA_BASE=`cygpath --unix "$CATALINA_BASE"`
[ -n "$CLASSPATH" ] && CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
fi
case $CATALINA_HOME in
*:*) echo "Using CATALINA_HOME: $CATALINA_HOME";
echo "Unable to start as CATALINA_HOME contains a colon (:) character";
exit 1;
esac
case $CATALINA_BASE in
*:*) echo "Using CATALINA_BASE: $CATALINA_BASE";
echo "Unable to start as CATALINA_BASE contains a colon (:) character";
exit 1;
esac
if $os400; then
COMMAND='chgjob job('$JOBNAME') runpty(6)'
system $COMMAND
export QIBM_MULTI_THREADED=Y
fi
if $os400; then
. "$CATALINA_HOME"/bin/setclasspath.sh
else
if [ -r "$CATALINA_HOME"/bin/setclasspath.sh ]; then
. "$CATALINA_HOME"/bin/setclasspath.sh
else
echo "Cannot find $CATALINA_HOME/bin/setclasspath.sh"
echo "This file is needed to run this program"
exit 1
fi
fi
if [ ! -z "$CLASSPATH" ] ; then
CLASSPATH="$CLASSPATH":
fi
CLASSPATH="$CLASSPATH""$CATALINA_HOME"/bin/bootstrap.jar
if [ -z "$CATALINA_OUT" ] ; then
CATALINA_OUT="$CATALINA_BASE"/logs/catalina.out
fi
if [ -z "$CATALINA_TMPDIR" ] ; then
# Define the java.io.tmpdir to use for Catalina
CATALINA_TMPDIR="$CATALINA_BASE"/temp
fi
# Add tomcat-juli.jar to classpath
# tomcat-juli.jar can be over-ridden per instance
if [ -r "$CATALINA_BASE/bin/tomcat-juli.jar" ] ; then
CLASSPATH=$CLASSPATH:$CATALINA_BASE/bin/tomcat-juli.jar
else
CLASSPATH=$CLASSPATH:$CATALINA_HOME/bin/tomcat-juli.jar
fi
# Bugzilla 37848: When no TTY is available, don't output to console
have_tty=0
if [ "`tty`" != "not a tty" ]; then
have_tty=1
fi
# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
JAVA_HOME=`cygpath --absolute --windows "$JAVA_HOME"`
JRE_HOME=`cygpath --absolute --windows "$JRE_HOME"`
CATALINA_HOME=`cygpath --absolute --windows "$CATALINA_HOME"`
CATALINA_BASE=`cygpath --absolute --windows "$CATALINA_BASE"`
CATALINA_TMPDIR=`cygpath --absolute --windows "$CATALINA_TMPDIR"`
CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
JAVA_ENDORSED_DIRS=`cygpath --path --windows "$JAVA_ENDORSED_DIRS"`
fi
if [ -z "$JSSE_OPTS" ] ; then
JSSE_OPTS="-Djdk.tls.ephemeralDHKeySize=2048"
fi
JAVA_OPTS="$JAVA_OPTS $JSSE_OPTS"
# Register custom URL handlers
# Do this here so custom URL handles (specifically 'war:...') can be used in the security policy
JAVA_OPTS="$JAVA_OPTS -Djava.protocol.handler.pkgs=org.apache.catalina.webresources"
# Set juli LogManager config file if it is present and an override has not been issued
if [ -z "$LOGGING_CONFIG" ]; then
if [ -r "$CATALINA_BASE"/conf/logging.properties ]; then
LOGGING_CONFIG="-Djava.util.logging.config.file=$CATALINA_BASE/conf/logging.properties"
else
# Bugzilla 45585
LOGGING_CONFIG="-Dnop"
fi
fi
if [ -z "$LOGGING_MANAGER" ]; then
LOGGING_MANAGER="-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"
fi
# Java 9 no longer supports the java.endorsed.dirs
# system property. Only try to use it if
# JAVA_ENDORSED_DIRS was explicitly set
# or CATALINA_HOME/endorsed exists.
ENDORSED_PROP=ignore.endorsed.dirs
if [ -n "$JAVA_ENDORSED_DIRS" ]; then
ENDORSED_PROP=java.endorsed.dirs
fi
if [ -d "$CATALINA_HOME/endorsed" ]; then
ENDORSED_PROP=java.endorsed.dirs
fi
# Uncomment the following line to make the umask available when using the
# org.apache.catalina.security.SecurityListener
#JAVA_OPTS="$JAVA_OPTS -Dorg.apache.catalina.security.SecurityListener.UMASK=`umask`"
if [ -z "$USE_NOHUP" ]; then
if $hpux; then
USE_NOHUP="true"
else
USE_NOHUP="false"
fi
fi
unset _NOHUP
if [ "$USE_NOHUP" = "true" ]; then
_NOHUP=nohup
fi
# Add the JAVA 9 specific start-up parameters required by Tomcat
JDK_JAVA_OPTIONS="$JDK_JAVA_OPTIONS --add-opens=java.base/java.lang=ALL-UNNAMED"
JDK_JAVA_OPTIONS="$JDK_JAVA_OPTIONS --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED"
export JDK_JAVA_OPTIONS
# ----- Execute The Requested Command -----------------------------------------
# Bugzilla 37848: only output this if we have a TTY
if [ $have_tty -eq 1 ]; then
echo "Using CATALINA_BASE: $CATALINA_BASE"
echo "Using CATALINA_HOME: $CATALINA_HOME"
echo "Using CATALINA_TMPDIR: $CATALINA_TMPDIR"
if [ "$1" = "debug" ] ; then
echo "Using JAVA_HOME: $JAVA_HOME"
else
echo "Using JRE_HOME: $JRE_HOME"
fi
echo "Using CLASSPATH: $CLASSPATH"
if [ ! -z "$CATALINA_PID" ]; then
echo "Using CATALINA_PID: $CATALINA_PID"
fi
fi
if [ "$1" = "jpda" ] ; then
if [ -z "$JPDA_TRANSPORT" ]; then
JPDA_TRANSPORT="dt_socket"
fi
if [ -z "$JPDA_ADDRESS" ]; then
JPDA_ADDRESS="localhost:8000"
fi
if [ -z "$JPDA_SUSPEND" ]; then
JPDA_SUSPEND="n"
fi
if [ -z "$JPDA_OPTS" ]; then
JPDA_OPTS="-agentlib:jdwp=transport=$JPDA_TRANSPORT,address=$JPDA_ADDRESS,server=y,suspend=$JPDA_SUSPEND"
fi
CATALINA_OPTS="$JPDA_OPTS $CATALINA_OPTS"
shift
fi
if [ "$1" = "debug" ] ; then
if $os400; then
echo "Debug command not available on OS400"
exit 1
else
shift
if [ "$1" = "-security" ] ; then
if [ $have_tty -eq 1 ]; then
echo "Using Security Manager"
fi
shift
exec "$_RUNJDB" "$LOGGING_CONFIG" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
-D$ENDORSED_PROP="$JAVA_ENDORSED_DIRS" \
-classpath "$CLASSPATH" \
-sourcepath "$CATALINA_HOME"/../../java \
-Djava.security.manager \
-Djava.security.policy=="$CATALINA_BASE"/conf/catalina.policy \
-Dcatalina.base="$CATALINA_BASE" \
-Dcatalina.home="$CATALINA_HOME" \
-Djava.io.tmpdir="$CATALINA_TMPDIR" \
org.apache.catalina.startup.Bootstrap "$@" start
else
exec "$_RUNJDB" "$LOGGING_CONFIG" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
-D$ENDORSED_PROP="$JAVA_ENDORSED_DIRS" \
-classpath "$CLASSPATH" \
-sourcepath "$CATALINA_HOME"/../../java \
-Dcatalina.base="$CATALINA_BASE" \
-Dcatalina.home="$CATALINA_HOME" \
-Djava.io.tmpdir="$CATALINA_TMPDIR" \
org.apache.catalina.startup.Bootstrap "$@" start
fi
fi
elif [ "$1" = "run" ]; then
shift
if [ "$1" = "-security" ] ; then
if [ $have_tty -eq 1 ]; then
echo "Using Security Manager"
fi
shift
eval exec "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
-D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
-classpath "\"$CLASSPATH\"" \
-Djava.security.manager \
-Djava.security.policy=="\"$CATALINA_BASE/conf/catalina.policy\"" \
-Dcatalina.base="\"$CATALINA_BASE\"" \
-Dcatalina.home="\"$CATALINA_HOME\"" \
-Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
org.apache.catalina.startup.Bootstrap "$@" start
else
eval exec "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
-D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
-classpath "\"$CLASSPATH\"" \
-Dcatalina.base="\"$CATALINA_BASE\"" \
-Dcatalina.home="\"$CATALINA_HOME\"" \
-Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
org.apache.catalina.startup.Bootstrap "$@" start
fi
elif [ "$1" = "start" ] ; then
if [ ! -z "$CATALINA_PID" ]; then
if [ -f "$CATALINA_PID" ]; then
if [ -s "$CATALINA_PID" ]; then
echo "Existing PID file found during start."
if [ -r "$CATALINA_PID" ]; then
PID=`cat "$CATALINA_PID"`
ps -p $PID >/dev/null 2>&1
if [ $? -eq 0 ] ; then
echo "Tomcat appears to still be running with PID $PID. Start aborted."
echo "If the following process is not a Tomcat process, remove the PID file and try again:"
ps -f -p $PID
exit 1
else
echo "Removing/clearing stale PID file."
rm -f "$CATALINA_PID" >/dev/null 2>&1
if [ $? != 0 ]; then
if [ -w "$CATALINA_PID" ]; then
cat /dev/null > "$CATALINA_PID"
else
echo "Unable to remove or clear stale PID file. Start aborted."
exit 1
fi
fi
fi
else
echo "Unable to read PID file. Start aborted."
exit 1
fi
else
rm -f "$CATALINA_PID" >/dev/null 2>&1
if [ $? != 0 ]; then
if [ ! -w "$CATALINA_PID" ]; then
echo "Unable to remove or write to empty PID file. Start aborted."
exit 1
fi
fi
fi
fi
fi
shift
touch "$CATALINA_OUT"
if [ "$1" = "-security" ] ; then
if [ $have_tty -eq 1 ]; then
echo "Using Security Manager"
fi
shift
eval $_NOHUP "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
-D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
-classpath "\"$CLASSPATH\"" \
-Djava.security.manager \
-Djava.security.policy=="\"$CATALINA_BASE/conf/catalina.policy\"" \
-Dcatalina.base="\"$CATALINA_BASE\"" \
-Dcatalina.home="\"$CATALINA_HOME\"" \
-Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
org.apache.catalina.startup.Bootstrap "$@" start \
>> "$CATALINA_OUT" 2>&1 "&"
else
eval $_NOHUP "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
-D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
-classpath "\"$CLASSPATH\"" \
-Dcatalina.base="\"$CATALINA_BASE\"" \
-Dcatalina.home="\"$CATALINA_HOME\"" \
-Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
org.apache.catalina.startup.Bootstrap "$@" start \
>> "$CATALINA_OUT" 2>&1 "&"
fi
if [ ! -z "$CATALINA_PID" ]; then
echo $! > "$CATALINA_PID"
fi
echo "Tomcat started."
elif [ "$1" = "stop" ] ; then
shift
SLEEP=5
if [ ! -z "$1" ]; then
echo $1 | grep "[^0-9]" >/dev/null 2>&1
if [ $? -gt 0 ]; then
SLEEP=$1
shift
fi
fi
FORCE=0
if [ "$1" = "-force" ]; then
shift
FORCE=1
fi
if [ ! -z "$CATALINA_PID" ]; then
if [ -f "$CATALINA_PID" ]; then
if [ -s "$CATALINA_PID" ]; then
kill -0 `cat "$CATALINA_PID"` >/dev/null 2>&1
if [ $? -gt 0 ]; then
echo "PID file found but no matching process was found. Stop aborted."
exit 1
fi
else
echo "PID file is empty and has been ignored."
fi
else
echo "\$CATALINA_PID was set but the specified file does not exist. Is Tomcat running? Stop aborted."
exit 1
fi
fi
eval "\"$_RUNJAVA\"" $LOGGING_MANAGER $JAVA_OPTS \
-D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
-classpath "\"$CLASSPATH\"" \
-Dcatalina.base="\"$CATALINA_BASE\"" \
-Dcatalina.home="\"$CATALINA_HOME\"" \
-Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
org.apache.catalina.startup.Bootstrap "$@" stop
# stop failed. Shutdown port disabled? Try a normal kill.
if [ $? != 0 ]; then
if [ ! -z "$CATALINA_PID" ]; then
echo "The stop command failed. Attempting to signal the process to stop through OS signal."
kill -15 `cat "$CATALINA_PID"` >/dev/null 2>&1
fi
fi
if [ ! -z "$CATALINA_PID" ]; then
if [ -f "$CATALINA_PID" ]; then
while [ $SLEEP -ge 0 ]; do
kill -0 `cat "$CATALINA_PID"` >/dev/null 2>&1
if [ $? -gt 0 ]; then
rm -f "$CATALINA_PID" >/dev/null 2>&1
if [ $? != 0 ]; then
if [ -w "$CATALINA_PID" ]; then
cat /dev/null > "$CATALINA_PID"
# If Tomcat has stopped don't try and force a stop with an empty PID file
FORCE=0
else
echo "The PID file could not be removed or cleared."
fi
fi
echo "Tomcat stopped."
break
fi
if [ $SLEEP -gt 0 ]; then
sleep 1
fi
if [ $SLEEP -eq 0 ]; then
echo "Tomcat did not stop in time."
if [ $FORCE -eq 0 ]; then
echo "PID file was not removed."
fi
echo "To aid diagnostics a thread dump has been written to standard out."
kill -3 `cat "$CATALINA_PID"`
fi
SLEEP=`expr $SLEEP - 1 `
done
fi
fi
KILL_SLEEP_INTERVAL=5
if [ $FORCE -eq 1 ]; then
if [ -z "$CATALINA_PID" ]; then
echo "Kill failed: \$CATALINA_PID not set"
else
if [ -f "$CATALINA_PID" ]; then
PID=`cat "$CATALINA_PID"`
echo "Killing Tomcat with the PID: $PID"
kill -9 $PID
while [ $KILL_SLEEP_INTERVAL -ge 0 ]; do
kill -0 `cat "$CATALINA_PID"` >/dev/null 2>&1
if [ $? -gt 0 ]; then
rm -f "$CATALINA_PID" >/dev/null 2>&1
if [ $? != 0 ]; then
if [ -w "$CATALINA_PID" ]; then
cat /dev/null > "$CATALINA_PID"
else
echo "The PID file could not be removed."
fi
fi
echo "The Tomcat process has been killed."
break
fi
if [ $KILL_SLEEP_INTERVAL -gt 0 ]; then
sleep 1
fi
KILL_SLEEP_INTERVAL=`expr $KILL_SLEEP_INTERVAL - 1 `
done
if [ $KILL_SLEEP_INTERVAL -lt 0 ]; then
echo "Tomcat has not been killed completely yet. The process might be waiting on some system call or might be UNINTERRUPTIBLE."
fi
fi
fi
fi
elif [ "$1" = "configtest" ] ; then
eval "\"$_RUNJAVA\"" $LOGGING_MANAGER $JAVA_OPTS \
-D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
-classpath "\"$CLASSPATH\"" \
-Dcatalina.base="\"$CATALINA_BASE\"" \
-Dcatalina.home="\"$CATALINA_HOME\"" \
-Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
org.apache.catalina.startup.Bootstrap configtest
result=$?
if [ $result -ne 0 ]; then
echo "Configuration error detected!"
fi
exit $result
elif [ "$1" = "version" ] ; then
"$_RUNJAVA" \
-classpath "$CATALINA_HOME/lib/catalina.jar" \
org.apache.catalina.util.ServerInfo
else
echo "Usage: catalina.sh ( commands ... )"
echo "commands:"
if $os400; then
echo " debug Start Catalina in a debugger (not available on OS400)"
echo " debug -security Debug Catalina with a security manager (not available on OS400)"
else
echo " debug Start Catalina in a debugger"
echo " debug -security Debug Catalina with a security manager"
fi
echo " jpda start Start Catalina under JPDA debugger"
echo " run Start Catalina in the current window"
echo " run -security Start in the current window with security manager"
echo " start Start Catalina in a separate window"
echo " start -security Start in a separate window with security manager"
echo " stop Stop Catalina, waiting up to 5 seconds for the process to end"
echo " stop n Stop Catalina, waiting up to n seconds for the process to end"
echo " stop -force Stop Catalina, wait up to 5 seconds and then use kill -KILL if still running"
echo " stop n -force Stop Catalina, wait up to n seconds and then use kill -KILL if still running"
echo " configtest Run a basic syntax check on server.xml - check exit code for result"
echo " version What version of tomcat are you running?"
echo "Note: Waiting for the process to end and use of the -force option require that \$CATALINA_PID is defined"
exit 1
fi
# cat run_tomcat.sh    #starts tomcat as user nginx, then tails a file so PID 1 keeps running
su - nginx -c "/apps/tomcat/bin/catalina.sh start"
tail -f /etc/hosts
3.2 Run Nginx on K8S
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
app: nginx-deployment-label
name: nginx-deployment
namespace: nginx-tomcat
spec:
replicas: 1
selector:
matchLabels:
app: nginx-selector
template:
metadata:
labels:
app: nginx-selector
spec:
containers:
- name: nginx-container
image: harbor.k8s.local/k8s/nginx-web1:v3
imagePullPolicy: Always
ports:
- containerPort: 80
protocol: TCP
name: http
- containerPort: 443
protocol: TCP
name: https
env:
- name: "password"
value: "123456"
- name: "age"
value: "20"
resources:
limits:
cpu: 2
memory: 2Gi
requests:
cpu: 500m
memory: 1Gi
volumeMounts:
- name: images
mountPath: /usr/local/nginx/html/webapp/images
readOnly: false
- name: static
mountPath: /usr/local/nginx/html/webapp/static
readOnly: false
volumes:
- name: images
nfs:
server: 192.168.6.87
path: /data/k8sdata/nginx-tomcat/images
- name: static
nfs:
server: 192.168.6.87
path: /data/k8sdata/nginx-tomcat/static
---
kind: Service
apiVersion: v1
metadata:
labels:
app: nginx-service-label
name: nginx-service
namespace: nginx-tomcat
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 80
nodePort: 40002
- name: https
port: 443
protocol: TCP
targetPort: 443
nodePort: 40443
selector:
app: nginx-selector
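Once applied, the static side can be exercised through the NodePort. A sketch, assuming the manifest is saved as nginx.yaml (hypothetical filename) and <node-ip> stands for any K8S node or the LB VIP:
kubectl create namespace nginx-tomcat
kubectl apply -f nginx.yaml
curl -I http://<node-ip>:40002/           #index.html baked into the image
curl -I http://<node-ip>:40002/webapp/    #app1.tar.gz content plus the NFS-backed static/images dirs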
3.3 Run Tomcat on K8S
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
app: tomcat-app1-deployment-label
name: tomcat-app1-deployment
namespace: nginx-tomcat
spec:
replicas: 1
selector:
matchLabels:
app: tomcat-app1-selector
template:
metadata:
labels:
app: tomcat-app1-selector
spec:
containers:
- name: tomcat-app1-container
image: harbor.k8s.local/k8s/tomcat-app1:v1
imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
memory: "512Mi"
requests:
cpu: 500m
memory: "512Mi"
volumeMounts:
- name: images
mountPath: /usr/local/nginx/html/webapp/images
readOnly: false
- name: static
mountPath: /usr/local/nginx/html/webapp/static
readOnly: false
volumes:
- name: images
nfs:
server: 192.168.6.87
path: /data/k8sdata/nginx-tomcat/images
- name: static
nfs:
server: 192.168.6.87
path: /data/k8sdata/nginx-tomcat/static
---
kind: Service
apiVersion: v1
metadata:
labels:
app: tomcat-app1-service-label
name: tomcat-app1-service
namespace: nginx-tomcat
spec:
type: NodePort
ports:
- name: http
port: 80
protocol: TCP
targetPort: 8080
nodePort: 40003
selector:
app: tomcat-app1-selector
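The dynamic path can then be tested both directly and through the nginx proxy. A sketch, assuming the manifest is saved as tomcat-app1.yaml (hypothetical filename):
kubectl apply -f tomcat-app1.yaml
curl http://<node-ip>:40003/myapp/    #tomcat NodePort directly
curl http://<node-ip>:40002/myapp/    #proxied by nginx via the tomcat_webserver upstream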
4. Persisting and sharing data from K8S with Ceph RBD and CephFS
4.1 RBD-backed volumes and dynamic provisioning
For K8S to use RBD images provided by Ceph as storage devices, the RBD pool must be created on Ceph and the K8S nodes must be able to pass Ceph authentication.
When K8S uses Ceph for dynamic volume provisioning, the kube-controller-manager component needs access to Ceph, so the authentication file has to be configured on every node, K8S masters included. RBD is generally used by pods whose data must be fully independent, such as MySQL or ZooKeeper clusters.
4.1.1 Create and initialize the RBD pool
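The commands were run on the ceph deploy node; a sketch, assuming the pool name rbd-k8s-pool1 used throughout this section:
ceph osd pool create rbd-k8s-pool1 32 32
ceph osd pool application enable rbd-k8s-pool1 rbd
rbd pool init -p rbd-k8s-pool1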
4.1.2 Create the image
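A sketch matching the 3 GiB image described below:
rbd create k8s-img1 --size 3072 --pool rbd-k8s-pool1 --image-format 2 --image-feature layering
rbd info k8s-img1 --pool rbd-k8s-pool1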
rbd image 'k8s-img1':
size 3 GiB in 768 objects
order 22 (4 MiB objects)
snapshot_count: 0
id: 4ccb05ae12403
block_name_prefix: rbd_data.4ccb05ae12403
format: 2
features: layering
op_features:
flags:
create_timestamp: Thu Oct 14 23:22:06 2021
access_timestamp: Thu Oct 14 23:22:06 2021
modify_timestamp: Thu Oct 14 23:22:06 2021
4.1.3 Install ceph-common on the clients
4.1.3.1 Import the release key on every master and node
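A sketch, assuming the key is fetched from the same Tsinghua mirror as the apt source below:
wget -q -O- https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc | apt-key add -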
4.1.3.2 Configure the package source and install ceph-common
Configure the source on all masters and nodes; the ceph version must match the ceph cluster's.
# cat /etc/apt/sources.list
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific focal main
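Then refresh the index and install the client; the candidate version can be compared against the cluster first:
apt update
apt-cache madison ceph-common
apt install -y ceph-common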
4.1.4 Create the Ceph user and grant authorization
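The two outputs below correspond to creating the user and then dumping its caps; a sketch of the commands, matching the caps shown:
ceph auth get-or-create client.k8s-rbd mon 'allow r' osd 'allow * pool=rbd-k8s-pool1'
ceph auth get client.k8s-rbd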
[client.k8s-rbd]
key = AQCeXWhhFyrlDRAA0ynWQ6+yEJvfUnWOGWsm0A==
[client.k8s-rbd]
key = AQCeXWhhFyrlDRAA0ynWQ6+yEJvfUnWOGWsm0A==
caps mon = "allow r"
caps osd = "allow * pool=rbd-k8s-pool1"
exported keyring for client.k8s-rbd
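The keyring then has to be exported and copied, together with ceph.conf, into /etc/ceph/ on every master and node; a sketch with <k8s-node> as a placeholder:
ceph auth get client.k8s-rbd -o ceph.client.k8s-rbd.keyring
scp ceph.conf ceph.client.k8s-rbd.keyring root@<k8s-node>:/etc/ceph/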
4.1.5 Configure hostname resolution on all K8S nodes
Append the Ceph hosts to /etc/hosts on every node:
...
192.168.6.64 ceph_mon1.cluster.exp ceph_mon1
192.168.6.65 ceph_mon2.cluster.exp ceph_mon2
192.168.6.66 ceph_mon3.cluster.exp ceph_mon3
192.168.6.67 ceph_node1.cluster.exp ceph_node1
192.168.6.68 ceph_node2.cluster.exp ceph_node2
192.168.6.72 ceph_node3.cluster.exp ceph_node3
192.168.6.71 ceph_node4.cluster.exp ceph_node4
192.168.6.69 ceph_mgr1.cluster.exp ceph_mgr1
192.168.6.70 ceph_mgr2.cluster.exp ceph_mgr2
192.168.6.73 ceph_deploy.cluster.exp ceph_deploy
4.1.6 Mount RBD with a keyring file
RBD volumes provided by Ceph can be attached in two ways: via the keyring file present on the host, or by defining the key from the keyring as a K8S secret, which the pod then uses to mount the RBD.
4.1.6.1 Mount directly with the keyring file - busybox
apiVersion: v1
kind: Pod
metadata:
name: busybox
namespace: default
spec:
containers:
- image: busybox
command:
- sleep
- "3600"
imagePullPolicy: Always
name: busybox
volumeMounts:
- name: rbd-data1
mountPath: /data
volumes:
- name: rbd-data1
rbd:
monitors:
- '172.16.244.11:6789'
- '172.16.244.12:6789'
- '172.16.244.13:6789'
pool: rbd-k8s-pool1
image: k8s-img1
fsType: ext4
readOnly: false
user: k8s-rbd
keyring: /etc/ceph/ceph.client.k8s-rbd.keyring
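A quick check of the busybox mount; a sketch, assuming the manifest above is saved as case1-busybox-keyring.yaml (hypothetical filename):
kubectl apply -f case1-busybox-keyring.yaml
kubectl exec -it busybox -- df -h /data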
4.1.6.2 Mount directly with the keyring file - nginx
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-80
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: rbd-data1
mountPath: /data
volumes:
- name: rbd-data1
rbd:
monitors:
- '172.16.244.11:6789'
- '172.16.244.12:6789'
- '172.16.244.13:6789'
pool: rbd-k8s-pool1
image: k8s-img1
fsType: ext4
readOnly: false
user: k8s-rbd
keyring: /etc/ceph/ceph.client.k8s-rbd.keyring
4.1.6.3 Verify the RBD mapping on the host
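The output below can be reproduced on the node running the pod with:
df -Th | grep rbd
rbd showmapped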
...
/dev/rbd0 ext4 2.9G 9.0M 2.9G 1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-pool1-image-k8s-img1
...
id pool namespace image snap device
0 rbd-k8s-pool1 k8s-img1 - /dev/rbd0
4.1.7 Mount RBD with a secret
Define the key as a secret and mount it into the pod; the K8S nodes then no longer need to keep the keyring file.
4.1.7.1 Create the secret
First create the secret. It carries the key from the authorized Ceph user's keyring, which must be base64-encoded before being placed in the secret.
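A sketch of producing the base64 value from the user created in 4.1.4:
ceph auth print-key client.k8s-rbd | base64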
root@ceph_deploy:~/ansible
AQBSAG1hRfnOABAA6FCEoiq82YqdV4deJIkb1w==
root@ceph_deploy:~/ansible
QVFCU0FHMWhSZm5PQUJBQTZGQ0VvaXE4MllxZFY0ZGVKSWtiMXc9PQ==
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret-k8s-rbd
type: "kubernetes.io/rbd"
data:
key: QVFCU0FHMWhSZm5PQUJBQTZGQ0VvaXE4MllxZFY0ZGVKSWtiMXc9PQ==
4.1.7.2 Create the pod
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-80
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: rbd-data1
mountPath: /data
volumes:
- name: rbd-data1
rbd:
monitors:
- '172.16.244.11:6789'
- '172.16.244.12:6789'
- '172.16.244.13:6789'
pool: rbd-k8s-pool1
image: k8s-img1
fsType: ext4
readOnly: false
user: k8s-rbd
secretRef:
name: ceph-secret-k8s-rbd
4.1.7.3 Verify the mount inside the pod
root@deploy:~/exp
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-deployment-66b4dcbc96-nwkxn:/
overlay 120G 11G 110G 9% /
tmpfs 64M 0 64M 0% /dev
tmpfs 980M 0 980M 0% /sys/fs/cgroup
/dev/rbd0 2.9G 9.0M 2.9G 1% /data
/dev/sda2 120G 11G 110G 9% /etc/hosts
shm 64M 0 64M 0% /dev/shm
tmpfs 980M 12K 980M 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 980M 0 980M 0% /proc/acpi
tmpfs 980M 0 980M 0% /proc/scsi
tmpfs 980M 0 980M 0% /sys/firmware
4.1.7.4 Verify the mount on the host
...
/dev/rbd0 ext4 2.9G 9.0M 2.9G 1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-pool1-image-k8s-img1
...
id pool namespace image snap device
0 rbd-k8s-pool1 k8s-img1 - /dev/rbd0
4.1.8 Dynamic volume provisioning (requires K8S installed from binaries, since the in-tree rbd provisioner runs inside kube-controller-manager, which must be able to execute the rbd client on the host)
4.1.8.1 Create the admin user secret
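Same procedure as for the normal user, only with the admin key; a sketch:
ceph auth print-key client.admin | base64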
root@ceph_deploy:~/ansible
QVFBZVIybGhXb3U1Q0JBQVVIMk1nRXIyZndUbTJ2c3R0UmUyc0E9PQ==
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
key: QVFBZVIybGhXb3U1Q0JBQVVIMk1nRXIyZndUbTJ2c3R0UmUyc0E9PQ==
4.1.8.2 Create the normal user's secret (ceph-secret-k8s-rbd from 4.1.7.1 is reused here)
4.1.8.3 Create the storage class
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: ceph-storage-class-ceph
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/rbd
parameters:
monitors: 172.16.244.11:6789,172.16.244.12:6789,172.16.244.13:6789
adminId: admin
adminSecretName: ceph-secret-admin
adminSecretNamespace: default
pool: rbd-k8s-pool1
userId: k8s-rbd
userSecretName: ceph-secret-k8s-rbd
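Applying and listing the class yields the first output below; a sketch, assuming the manifest is saved as ceph-storage-class.yaml (hypothetical filename). The image listing that follows it comes from the ceph deploy node:
kubectl apply -f ceph-storage-class.yaml
kubectl get storageclass
rbd ls --pool rbd-k8s-pool1    #the kubernetes-dynamic-pvc-* image appears once a PVC has been provisioned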
root@deploy:~/exp
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ceph-storage-class-ceph (default) kubernetes.io/rbd Delete Immediate false 12s
root@ceph_deploy:~/ansible
k8s-img1
kubernetes-dynamic-pvc-d2eaac29-3628-448c-9042-dcb039d2f462
4.1.8.4 Create a PVC backed by the storage class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: mysql-data-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: ceph-storage-class-ceph
resources:
requests:
storage: '5Gi'
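A sketch of applying and checking the claim, assuming it is saved as mysql-data-pvc.yaml (hypothetical filename):
kubectl apply -f mysql-data-pvc.yaml
kubectl get pvc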
root@deploy:~/exp
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
mysql-data-pvc Bound pvc-e84d0d23-6b32-47c9-9028-179bddb7e64f 5Gi RWO ceph-storage-class-ceph 8m58s
4.1.8.5 Run a single-instance MySQL and verify
apiVersion: apps/v1
kind: Deployment
metadata:
name: mysql
spec:
selector:
matchLabels:
app: mysql
strategy:
type: Recreate
template:
metadata:
labels:
app: mysql
spec:
containers:
- image: harbor.k8s.local/k8s/mysql:5.6.46
name: mysql
env:
- name: MYSQL_ROOT_PASSWORD
value: test123456
ports:
- containerPort: 3306
name: mysql
volumeMounts:
- name: mysql-persistent-storage
mountPath: /var/lib/mysql
volumes:
- name: mysql-persistent-storage
persistentVolumeClaim:
claimName: mysql-data-pvc
---
kind: Service
apiVersion: v1
metadata:
labels:
app: mysql-service-label
name: mysql-service
spec:
type: NodePort
ports:
- name: http
port: 3306
protocol: TCP
targetPort: 3306
nodePort: 43306
selector:
app: mysql
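Connectivity can be verified through the NodePort; a sketch, with <node-ip> as a placeholder and a mysql client installed locally:
mysql -uroot -ptest123456 -h <node-ip> -P 43306 -e 'show databases;'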
4.2 Using CephFS
4.2.1 Create the secret
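The pods in 4.2.2 reference ceph-secret-admin; if it was not already created in 4.1.8.1, an imperative sketch (kubectl base64-encodes the literal by itself):
kubectl create secret generic ceph-secret-admin --type=kubernetes.io/rbd --from-literal=key="$(ceph auth print-key client.admin)"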
4.2.2 Create the pods
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 3
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-80
image: nginx
ports:
- containerPort: 80
volumeMounts:
- name: staticdata-cephfs
mountPath: /usr/share/nginx/html/
volumes:
- name: staticdata-cephfs
cephfs:
monitors:
- '172.16.244.11:6789'
- '172.16.244.12:6789'
- '172.16.244.13:6789'
path: /
user: admin
secretRef:
name: ceph-secret-admin
4.2.3 Verify the mount inside the pods
root@deploy:~/exp
NAME READY STATUS RESTARTS AGE
nginx-deployment-668744f975-254h5 1/1 Running 0 11m
nginx-deployment-668744f975-cxwtk 1/1 Running 0 11m
nginx-deployment-668744f975-dktfs 1/1 Running 0 11m
root@deploy:~/exp
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-deployment-668744f975-254h5:/
Filesystem Size Used Avail Use% Mounted on
overlay
...
172.16.244.11:6789,172.16.244.12:6789,172.16.244.13:6789:/ 380G 0 380G 0% /usr/share/nginx/html
...
root@nginx-deployment-668744f975-254h5:/
4.2.4 Verify sharing across the pod replicas
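Because every replica mounts the same CephFS path, a file written in one pod is immediately visible in the others; a sketch using the pod names from 4.2.3:
kubectl exec -it nginx-deployment-668744f975-254h5 -- sh -c 'echo 321312312 > /usr/share/nginx/html/index.html'
kubectl exec -it nginx-deployment-668744f975-cxwtk -- cat /usr/share/nginx/html/index.html    #prints 321312312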
root@deploy:~/exp
root@nginx-deployment-668744f975-cxwtk:/
321312312
4.2.5 Verify on the host
See 4.1.7.4.
5. Health checks on pod traffic based on probes
5.1 Pod status
https://kubernetes.io/zh/docs/concepts/workloads/pods/pod-lifecycle/
Phase one:
- Pending: the Pod is being created but not all of its containers have been created yet (for pods in this state, check whether the storage the pod depends on can be mounted, whether the image can be pulled, whether scheduling works, etc.).
- Failed: a container in the Pod failed to start, leaving the pod in an abnormal state.
- Unknown: the pod's current state cannot be obtained for some reason, usually a communication error with the pod's node.
- Succeeded: all containers in the Pod terminated successfully, i.e. every container has reached terminated.
Phase two:
- Unschedulable: the pod cannot be scheduled; kube-scheduler found no suitable node.
- PodScheduled: the pod is being scheduled; once kube-scheduler has selected a suitable node it updates etcd and assigns the pod to that node.
- Initialized: all init containers in the pod have completed.
- ImagePullBackOff: the pod's node failed to pull the image.
- Running: all containers in the Pod have been created and started.
- Ready: the containers in the pod are ready to serve traffic.
| Status | Explanation |
|---|---|
| Error | an error occurred while the pod was starting |
| NodeLost | the node hosting the Pod is lost |
| Unknown | the node hosting the Pod is lost, or another unknown error occurred |
| Waiting | the Pod is waiting to start |
| Pending | the Pod is waiting to be scheduled |
| Terminating | the Pod is being destroyed |
| CrashLoopBackOff | the Pod keeps starting and stopping |
| InvalidImageName | the node cannot resolve the image name, so the image cannot be pulled |
| ImageInspectError | the image cannot be inspected, usually because it is incomplete |
| ErrImageNeverPull | policy forbids pulling the image, e.g. a private registry without access rights |
| ImagePullBackOff | the image pull failed, but is being retried |
| RegistryUnavailable | the registry is unreachable (network problem or harbor down) |
| ErrImagePull | the image pull errored out (timeout or forcibly aborted) |
| CreateContainerConfigError | the container configuration used by the kubelet cannot be created |
| CreateContainerError | creating the container failed |
| PreStartContainer | the preStart hook failed; Pod hooks are issued by the Kubernetes-managed kubelet and run before the container's process starts or before it terminates, e.g. to check dependent services before startup, or to stop the in-container service gracefully before exit |
| PostStartHookError | the postStart hook failed |
| RunContainerError | the pod failed to run, e.g. no PID 1 daemon was initialized in the container |
| ContainersNotInitialized | the pod has not finished initializing |
| ContainersNotReady | the pod is not ready |
| ContainerCreating | the pod is being created |
| PodInitializing | the pod is initializing |
| DockerDaemonNotReady | the docker service on the node is not running |
| NetworkPluginNotReady | the network plugin has not fully started |
5.2 Pod probes
https://kubernetes.io/zh/docs/concepts/workloads/pods/pod-lifecycle/
5.2.1 Concepts
A probe is a periodic diagnostic performed by the kubelet on a container to keep the Pod in a running state. To perform the diagnostic, the kubelet calls a Handler implemented by the container. There are three types of handlers:
- ExecAction: executes the specified command inside the container; the diagnostic succeeds if the command exits with status code 0.
- TCPSocketAction: performs a TCP check against the container's IP on the specified port; the diagnostic succeeds if the port is open.
- HTTPGetAction: performs an HTTP GET against the container's IP on the specified port and path; the diagnostic succeeds if the response status code is greater than or equal to 200 and less than 400.
Each probe yields one of three results:
- Success: the container passed the diagnostic.
- Failure: the container failed the diagnostic.
- Unknown: the diagnostic itself failed, so no action is taken.
5.2.2 Configuring probes
5.2.2.1 Probe types
Readiness probe: if the readiness probe fails, the endpoints controller removes the Pod's IP address from the endpoints of all Services matching the Pod. The readiness state before the initial delay defaults to Failure; if the container does not provide a readiness probe, the default state is Success. readinessProbe controls whether the pod is added to the Service.
Liveness probe: checks whether the container is running. If the liveness probe fails, the kubelet kills the container and the container is subject to its restart policy. If the container does not provide a liveness probe, the default state is Success. livenessProbe controls whether the pod is restarted.
5.2.2.2 Probe configuration
https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
Probe configuration fields:
- initialDelaySeconds: 120 #how many seconds the kubelet waits before performing the first probe; default 0, minimum 0
- periodSeconds: 60 #how often, in seconds, the kubelet performs the probe; default 10, minimum 1
- timeoutSeconds: 5 #how many seconds before a single probe times out; default 1, minimum 1
- successThreshold: 1 #minimum consecutive successes for the probe to be considered successful after having failed; default 1; must be 1 for liveness probes; minimum 1
- failureThreshold: 3 #number of retries after a success turns into a failure before Kubernetes gives up; giving up on a liveness probe means restarting the container, giving up on a readiness probe means marking the Pod unready; default 3, minimum 1
HTTP probes can configure additional fields on httpGet:
host: #hostname to connect to, defaults to the Pod IP; a "Host" HTTP header can be set instead.
scheme: #scheme used to connect to the host (HTTP or HTTPS); defaults to HTTP.
path: /monitor/index.html #path to access on the HTTP server.
httpHeaders: #custom headers to set in the request; HTTP allows repeated headers.
port: 80 #port number or port name to access on the container; a number must be in the range 1-65535.
5.2.2.3 HTTP probe example
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-80
image: nginx:1.17.5
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /index1.html
port: 80
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
name: ng-deploy-80
spec:
ports:
- name: http
port: 81
targetPort: 80
nodePort: 40012
protocol: TCP
type: NodePort
selector:
app: ng-deploy-80
5.2.2.4 TCP probe example
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
spec:
replicas: 1
selector:
matchLabels:
app: ng-deploy-80
template:
metadata:
labels:
app: ng-deploy-80
spec:
containers:
- name: ng-deploy-80
image: nginx:1.17.5
ports:
- containerPort: 80
livenessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
readinessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
name: ng-deploy-80
spec:
ports:
- name: http
port: 81
targetPort: 80
nodePort: 40012
protocol: TCP
type: NodePort
selector:
app: ng-deploy-80
5.2.2.5 ExecAction probe example
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis-deployment
spec:
replicas: 1
selector:
matchLabels:
app: redis-deploy-6379
template:
metadata:
labels:
app: redis-deploy-6379
spec:
containers:
- name: redis-deploy-6379
image: redis
ports:
- containerPort: 6379
readinessProbe:
exec:
command:
- /usr/local/bin/redis-cli
- quit
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
livenessProbe:
exec:
command:
- /usr/local/bin/redis-cli
- quit
initialDelaySeconds: 5
periodSeconds: 3
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 3
---
apiVersion: v1
kind: Service
metadata:
name: redis-deploy-6379
spec:
ports:
- name: http
port: 6379
targetPort: 6379
nodePort: 40016
protocol: TCP
type: NodePort
selector:
app: redis-deploy-6379
If the port check fails more than the specified three consecutive times, the Pod's status changes accordingly.
5.2.2.6 livenessProbe vs. readinessProbe
- The configuration parameters are identical.
- livenessProbe: consecutive failures restart/recreate the pod; readinessProbe never restarts or recreates the pod.
- livenessProbe: after the configured number of consecutive failures the container is placed in CrashLoopBackOff and becomes unavailable; readinessProbe does not do this.
- readinessProbe: consecutive failures remove the pod from the Service's endpoints; livenessProbe lacks this function, but it will restart the container instead.
- livenessProbe controls whether the pod is restarted; readinessProbe controls whether the pod is added to the Service.
- Recommendation: configure both probes.
5.3 Pod restart policy
K8S automatically restarts a Pod when it becomes abnormal, to restore the service running inside it.
- Always: whenever the container exits, restart it automatically (used with ReplicationController/ReplicaSet/Deployment).
- OnFailure: restart the container only when it fails (stops running with a non-zero exit code).
- Never: never restart the container regardless of its state (used with Job or CronJob).
containers:
- name: magedu-tomcat-app1-container
image: harbor.magedu.local/magedu/tomcat-app1:v1
imagePullPolicy: Always
ports:
- containerPort: 8080
protocol: TCP
name: http
env:
- name: "password"
value: "123456"
- name: "age"
value: "18"
resources:
limits:
cpu: 1
memory: "512Mi"
requests:
cpu: 500m
memory: "512Mi"
restartPolicy: Always
5.4 Image pull policy
https://kubernetes.io/zh/docs/concepts/configuration/overview/
imagePullPolicy: IfNotPresent #pull only if the image is not already present on the node (the default when a fixed tag is specified)
imagePullPolicy: Always #always pull the image from the registry (the default for the :latest tag)
imagePullPolicy: Never #never pull; use only images already present on the node