7. K8S Hands-On Practice

1. Using a ConfigMap to Provide Configuration Files for Nginx and MySQL

Decouple the configuration from the image: store the configuration in a ConfigMap object, then import that ConfigMap into the pod object to bring the configuration in. Here a ConfigMap is declared and mounted into the pod as a volume.

  • Options for handling configuration changes:

    1. Put the service's configuration file directly into the image
    2. ConfigMap: decouple the configuration from the image
    3. A configuration center, e.g. Apollo
  • Options for passing environment variables:

    1. Define them with ENV in the Dockerfile
    2. Supply them from a ConfigMap
    3. Write them directly in the YAML file
    4. Configuration files (example manifests below)
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default: |
    server {
       listen       80;
       server_name  www.mysite.com;
       index        index.html;

       location / {
           root /data/nginx/html;
           if (!-e $request_filename) {
               rewrite ^/(.*) /index.html last;
           }
       }
    }


---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-8080
        image: tomcat
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: nginx-config
          mountPath:  /data
      - name: ng-deploy-80
        image: nginx 
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /data/nginx/html
          name: nginx-static-dir
        - name: nginx-config
          mountPath:  /etc/nginx/conf.d
      volumes:
      - name: nginx-static-dir
        hostPath:
          path: /data/nginx/linux39
      - name: nginx-config
        configMap:
          name: nginx-config
          items:
             - key: default
               path: mysite.conf

---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30019
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80
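
A quick sanity check of the three manifests above (a sketch, not from the original: the file name nginx-configmap-demo.yaml, the <nginx-pod> placeholder and <node-ip> are all assumptions):

# kubectl apply -f nginx-configmap-demo.yaml
# kubectl get pods -l app=ng-deploy-80
# kubectl exec -it <nginx-pod> -c ng-deploy-80 -- cat /etc/nginx/conf.d/mysite.conf   #the ConfigMap key "default" should be rendered as mysite.conf
# curl -H "Host: www.mysite.com" http://<node-ip>:30019/

The next pair of manifests demonstrates the environment-variable approach: a ConfigMap key is injected into the container via configMapKeyRef.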

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  username: user1


---
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx 
        env:
        - name: "test"
          value: "value"
        - name: MY_USERNAME
          valueFrom:
            configMapKeyRef:
              name: nginx-config
              key: username
        ports:
        - containerPort: 80
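
To confirm the variable actually reached the container (a sketch; <nginx-pod> is a placeholder):

# kubectl exec <nginx-pod> -- printenv MY_USERNAME   #should print: user1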

2. Running a ZooKeeper Cluster on PVs and PVCs

ZooKeeper official documentation: https://zookeeper.apache.org/doc/current/zookeeperOver.html

Note: a 3-node cluster is generally faster than a 5-node one because of write amplification: a client write request is first routed to the leader node, which performs the write and then replicates it to the followers. With 3 nodes each write is written once and copied twice; with 5 nodes it is written once and copied four times. A 3-node cluster therefore incurs less network and I/O overhead and is faster.

image-20211013213843216

  • JDK version: JDK 8 LTS or JDK 11 LTS

  • ZooKeeper cluster election rules

    1. Compare transaction IDs (zxid) first
    2. Then compare server IDs
  • ZooKeeper cluster architecture in K8S

[figure: ZooKeeper cluster architecture in K8S (image-20211013224028436)]

The following walks through building a ZooKeeper image on top of a JDK 8 base image and starting a 3-node ZooKeeper cluster on K8S.

2.1 Pull the JDK Image

~# docker pull elevy/slim_java:8
~# docker tag elevy/slim_java:8 harbor.k8s.local/k8s/elevy_slim_java:8
~# docker push harbor.k8s.local/k8s/elevy_slim_java:8

2.2 Build the ZooKeeper Image

root@master1:~# cd /opt/k8s-data/dockerfile/web/apps/zookeeper
# chmod a+x *.sh
# chmod a+x bin/*.sh
# bash build-command.sh jksie1ks-20211010-120323
# docker push harbor.k8s.local/k8s/zookeeper:jksie1ks-20211010-120323 #not needed when using build-command.sh, which already pushes.
# cat Dockerfile
FROM harbor.k8s.local/k8s/elevy_slim_java:8

ENV ZK_VERSION 3.4.14
ADD repositories /etc/apk/repositories 
# Download Zookeeper
COPY zookeeper-3.4.14.tar.gz /tmp/zk.tgz
COPY zookeeper-3.4.14.tar.gz.asc /tmp/zk.tgz.asc
COPY KEYS /tmp/KEYS
RUN apk add --no-cache --virtual .build-deps \
      ca-certificates   \
      gnupg             \
      tar               \
      wget &&           \
    #
    # Install dependencies
    apk add --no-cache  \
      bash &&           \
    #
    #
    # Verify the signature
    export GNUPGHOME="$(mktemp -d)" && \
    gpg -q --batch --import /tmp/KEYS && \
    gpg -q --batch --no-auto-key-retrieve --verify /tmp/zk.tgz.asc /tmp/zk.tgz && \
    #
    # Set up directories
    #
    mkdir -p /zookeeper/data /zookeeper/wal /zookeeper/log && \
    #
    # Install
    tar -x -C /zookeeper --strip-components=1 --no-same-owner -f /tmp/zk.tgz && \
    #
    # Slim down
    cd /zookeeper && \
    cp dist-maven/zookeeper-${ZK_VERSION}.jar . && \
    rm -rf \
      *.txt \
      *.xml \
      bin/README.txt \
      bin/*.cmd \
      conf/* \
      contrib \
      dist-maven \
      docs \
      lib/*.txt \
      lib/cobertura \
      lib/jdiff \
      recipes \
      src \
      zookeeper-*.asc \
      zookeeper-*.md5 \
      zookeeper-*.sha1 && \
    #
    # Clean up
    apk del .build-deps && \
    rm -rf /tmp/* "$GNUPGHOME"

COPY conf /zookeeper/conf/
COPY bin/zkReady.sh /zookeeper/bin/
COPY entrypoint.sh /

ENV PATH=/zookeeper/bin:${PATH} \
    ZOO_LOG_DIR=/zookeeper/log \
    ZOO_LOG4J_PROP="INFO, CONSOLE, ROLLINGFILE" \
    JMXPORT=9010

ENTRYPOINT [ "/entrypoint.sh" ]

CMD [ "zkServer.sh", "start-foreground" ]

EXPOSE 2181 2888 3888 9010

# cat bin/zkReady.sh   #readiness helper: greps zkServer.sh status output for a serving mode
#!/bin/bash

/zookeeper/bin/zkServer.sh status | egrep 'Mode: (standalone|leading|following|observing)'

# cat entrypoint.sh   #injects the cluster node entries into zoo.cfg
#!/bin/bash

echo ${MYID:-1} > /zookeeper/data/myid   #MYID is passed in as an ENV by the Deployment later
#SERVERS is passed in as an ENV by the Deployment later
if [ -n "$SERVERS" ]; then
	IFS=\, read -a servers <<<"$SERVERS"
	for i in "${!servers[@]}"; do 
		printf "\nserver.%i=%s:2888:3888" "$((1 + $i))" "${servers[$i]}" >> /zookeeper/conf/zoo.cfg
	done
fi

cd /zookeeper
exec "$@"


###########
The node entries in the zoo.cfg configuration file look like this:
server.1=zoo1:2888:3888
server.2=zoo2:2888:3888
server.3=zoo3:2888:3888
That is: server(ZooKeeper keyword).<server ID, as generated by the entrypoint.sh script>=<hostname of the ZooKeeper node (must be resolvable; in k8s this is handled by giving each ZooKeeper pod its own Service)>:<port for leader-to-follower data sync>:<port used for cluster elections>
https://zookeeper.apache.org/doc/current/zookeeperStarted.html
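
The loop in entrypoint.sh can be exercised on its own to see exactly what it appends (a standalone sketch with illustrative values):

# SERVERS=zookeeper1,zookeeper2,zookeeper3
# IFS=, read -a servers <<<"$SERVERS"
# for i in "${!servers[@]}"; do printf "server.%i=%s:2888:3888\n" "$((1 + i))" "${servers[$i]}"; done
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888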
# cat conf/log4j.properties
# Define some default values that can be overridden by system properties
zookeeper.root.logger=INFO, CONSOLE, ROLLINGFILE
zookeeper.console.threshold=INFO
zookeeper.log.dir=/zookeeper/log
zookeeper.log.file=zookeeper.log
zookeeper.log.threshold=INFO
zookeeper.tracelog.dir=/zookeeper/log
zookeeper.tracelog.file=zookeeper_trace.log

#
# ZooKeeper Logging Configuration
#

# Format is "<default threshold> (, <appender>)+

# DEFAULT: console appender only
log4j.rootLogger=${zookeeper.root.logger}

# Example with rolling log file
#log4j.rootLogger=DEBUG, CONSOLE, ROLLINGFILE

# Example with rolling log file and tracing
#log4j.rootLogger=TRACE, CONSOLE, ROLLINGFILE, TRACEFILE

#
# Log INFO level and above messages to the console
#
log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.Threshold=${zookeeper.console.threshold}
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n

#
# Add ROLLINGFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.ROLLINGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.ROLLINGFILE.Threshold=${zookeeper.log.threshold}
log4j.appender.ROLLINGFILE.File=${zookeeper.log.dir}/${zookeeper.log.file}

# Max log file size of 10MB
log4j.appender.ROLLINGFILE.MaxFileSize=10MB
# uncomment the next line to limit number of backup files
log4j.appender.ROLLINGFILE.MaxBackupIndex=5

log4j.appender.ROLLINGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.ROLLINGFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L] - %m%n


#
# Add TRACEFILE to rootLogger to get log file output
#    Log DEBUG level and above messages to a log file
log4j.appender.TRACEFILE=org.apache.log4j.FileAppender
log4j.appender.TRACEFILE.Threshold=TRACE
log4j.appender.TRACEFILE.File=${zookeeper.tracelog.dir}/${zookeeper.tracelog.file}

log4j.appender.TRACEFILE.layout=org.apache.log4j.PatternLayout
### Notice we are including log4j's NDC here (%x)
log4j.appender.TRACEFILE.layout.ConversionPattern=%d{ISO8601} [myid:%X{myid}] - %-5p [%t:%C{1}@%L][%x] - %m%n

# cat conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/zookeeper/data
dataLogDir=/zookeeper/wal
#snapCount=100000
autopurge.purgeInterval=1
clientPort=2181
quorumListenOnAllIPs=true

# cat repositories
http://mirrors.aliyun.com/alpine/v3.6/main
http://mirrors.aliyun.com/alpine/v3.6/community

# Fetch the KEYS file
wget https://downloads.apache.org/zookeeper/KEYS

#Get the zookeeper tar.gz and tar.gz.asc files from the official archive
https://archive.apache.org/dist/zookeeper/zookeeper-3.4.14/

# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.k8s.local/k8s/zookeeper:${TAG} .
sleep 1
docker push  harbor.k8s.local/k8s/zookeeper:${TAG}

2.3 Test the ZooKeeper Image

# docker run -it --rm -p 2181:2181 harbor.k8s.local/k8s/zookeeper:jksie1ks-20211010-120323
Install a Java runtime on macOS: https://www.oracle.com/java/technologies/downloads/#jdk17-mac
Starting ZooInspector: https://www.jianshu.com/p/f45af8027d7f
Enter the ZooInspector\build directory and run zookeeper-dev-ZooInspector.jar:
#java -jar zookeeper-dev-ZooInspector.jar
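
Before wiring up ZooInspector, the standalone container can also be smoke-tested with a four-letter-word command (a sketch; ZooKeeper 3.4.x answers these on the client port by default):

# echo ruok | nc 127.0.0.1 2181   #expect: imok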

image-20211014010127163

image-20211014010027778

image-20211014010155729

2.4 Running the ZooKeeper Service on K8S

2.4.1 Prepare the YAML Files

# cat zookeeper-pv.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce 
  nfs:
    server: 192.168.6.87
    path: /data/k8sdata/zookeeper-datadir-1 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.6.87 
    path: /data/k8sdata/zookeeper-datadir-2 

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper-datadir-pv-3
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.6.87  
    path: /data/k8sdata/zookeeper-datadir-3 

# cat zookeeper-pvc.yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-1
  namespace: zookeeper
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-1
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-2
  namespace: zookeeper
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-2
  resources:
    requests:
      storage: 4Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-datadir-pvc-3
  namespace: zookeeper
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: zookeeper-datadir-pv-3
  resources:
    requests:
      storage: 4Gi

# cat zookeeper.yaml
apiVersion: v1  #this Service can be omitted
kind: Service
metadata:
  name: zookeeper
  namespace: zookeeper
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper1
  namespace: zookeeper
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 42181
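      # NodePorts 42181-42183 are outside the default 30000-32767 range; this assumes the apiserver's --service-node-port-range was widened accordingly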
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper2
  namespace: zookeeper
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 42182
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "2"
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper3
  namespace: zookeeper
spec:
  type: NodePort        
  ports:
    - name: client
      port: 2181
      nodePort: 42183
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "3"
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper1
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      containers:
        - name: server
          image: harbor.k8s.local/k8s/zookeeper:jksie1ks-20211010-120323
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-1 
      volumes:   #a pod spec takes a single volumes list, so all volumes are declared together here
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-1
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-1
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper2
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "2"
    spec:
      containers:
        - name: server
          image: harbor.k8s.local/k8s/zookeeper:jksie1ks-20211010-120323 
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "2"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-2 
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-2
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-2
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  name: zookeeper3
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "3"
    spec:
      containers:
        - name: server
          image: harbor.k8s.local/k8s/zookeeper:jksie1ks-20211010-120323
          imagePullPolicy: Always
          env:
            - name: MYID
              value: "3"
            - name: SERVERS
              value: "zookeeper1,zookeeper2,zookeeper3"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
          - mountPath: "/zookeeper/data"
            name: zookeeper-datadir-pvc-3
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
        - name: zookeeper-datadir-pvc-3
          persistentVolumeClaim:
            claimName: zookeeper-datadir-pvc-3

2.4.2 Create the PVs

# kubectl apply -f zookeeper-pv.yaml

2.4.3 Verify the PVs

# kubectl get pv
NAME                     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
zookeeper-datadir-pv-1   5Gi        RWO            Retain           Available                                   34s
zookeeper-datadir-pv-2   5Gi        RWO            Retain           Available                                   34s
zookeeper-datadir-pv-3   5Gi        RWO            Retain           Available                                   34s

2.4.4 Create the PVCs

# kubectl apply -f zookeeper-pvc.yaml

2.4.5 Verify the PVCs

# kubectl get pvc -n zookeeper
NAME                      STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
zookeeper-datadir-pvc-1   Bound    zookeeper-datadir-pv-1   5Gi        RWO                           28s
zookeeper-datadir-pvc-2   Bound    zookeeper-datadir-pv-2   5Gi        RWO                           28s
zookeeper-datadir-pvc-3   Bound    zookeeper-datadir-pv-3   5Gi        RWO                           28s

2.4.6 Verify the Volumes in the Dashboard

image-20211014101005996

2.4.7 Run the ZooKeeper Cluster

# kubectl apply -f zookeeper.yaml
service/zookeeper created
service/zookeeper1 created
service/zookeeper2 created
service/zookeeper3 created
deployment.apps/zookeeper1 created
deployment.apps/zookeeper2 created
deployment.apps/zookeeper3 created

2.4.8 Verify the ZooKeeper Cluster

# kubectl get pod -n zookeeper
NAME                          READY   STATUS    RESTARTS   AGE
zookeeper1-7bd866b955-vdqx2   1/1     Running   1          49s
zookeeper2-ffd4c44c8-lwd9s    1/1     Running   0          49s
zookeeper3-6dc789b469-svw2f   1/1     Running   0          49s

# kubectl exec -it zookeeper3-6dc789b469-svw2f  -n zookeeper bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: leader

# kubectl exec -it zookeeper2-ffd4c44c8-lwd9s -n zookeeper bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower

# kubectl exec -it zookeeper1-7bd866b955-vdqx2 -n zookeeper bash
bash-4.3# /zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
ZooKeeper remote JMX Port set to 9010
ZooKeeper remote JMX authenticate set to false
ZooKeeper remote JMX ssl set to false
ZooKeeper remote JMX log4j set to true
Using config: /zookeeper/bin/../conf/zoo.cfg
Mode: follower

#Delete one of the pods, e.g. the leader, to verify that the remaining two pods automatically elect a new leader, and that the recreated pod rejoins the cluster as a follower.
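
A sketch of that failover test against the pods above (the recreated pod will get a new name):

# kubectl delete pod zookeeper3-6dc789b469-svw2f -n zookeeper
# kubectl get pod -n zookeeper -w   #wait until the replacement pod is Running
# kubectl exec -it <new-zookeeper3-pod> -n zookeeper -- /zookeeper/bin/zkServer.sh status
#one of zookeeper1/zookeeper2 should now report "Mode: leader"; the recreated pod should report "Mode: follower"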

2.4.9 Access the ZooKeeper Cluster from Outside

image-20211014105303268

image-20211014105234086
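
From outside the cluster each instance is reachable on its NodePort, e.g. with a local ZooKeeper client install (a sketch; <node-ip> is any K8S node):

# zkCli.sh -server <node-ip>:42181   #zookeeper1; use 42182/42183 for zookeeper2/zookeeper3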

3. Running Nginx and Tomcat from Custom Images, with NFS-Backed Static/Dynamic Separation

Build a static/dynamic separation architecture based on Nginx + Tomcat + NFS, in which dynamic requests are forwarded by domain name to the Tomcat pods; the goal is that the web pages served by the Nginx + Tomcat + NFS stack inside the K8S cluster are reachable through the load balancer's VIP.

  • Service-to-service calls through a Service object

image-20211014151429681

  • Nginx calling the tomcat pods through a registry (microservice architecture)

image-20211014151331003

3.1 Build the Images

3.1.1 Build a System Base Image with Filebeat

#Custom CentOS base image
FROM centos:7.9.2009
MAINTAINER Eric.Zhang  mail@xx.com

ADD filebeat-7.6.2-x86_64.rpm /tmp
RUN yum install -y /tmp/filebeat-7.6.2-x86_64.rpm vim wget tree  lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop &&  rm -rf /etc/localtime /tmp/filebeat-7.6.2-x86_64.rpm && ln -snf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && useradd  www -u 2020 && useradd nginx -u 2021
# cat build-command.sh
#!/bin/bash
docker build -t  harbor.k8s.local/k8s/centos-base:7.9.2009 .

docker push harbor.k8s.local/k8s/centos-base:7.9.2009

3.1.2 Build the Nginx Base Image

#Nginx Base Image
FROM harbor.k8s.local/k8s/centos-base:7.9.2009

MAINTAINER  Eric.Zhang  mail@xx.com

RUN yum install -y vim wget tree  lrzsz gcc gcc-c++ automake pcre pcre-devel zlib zlib-devel openssl openssl-devel iproute net-tools iotop
ADD nginx-1.18.0.tar.gz /usr/local/src/
RUN cd /usr/local/src/nginx-1.18.0 && ./configure  && make && make install && ln -sv  /usr/local/nginx/sbin/nginx /usr/sbin/nginx  &&rm -rf /usr/local/src/nginx-1.18.0.tar.gz 
# cat build-command.sh
#!/bin/bash
docker build -t harbor.k8s.local/k8s/nginx-base:v1.18.0  .
sleep 1
docker push  harbor.k8s.local/k8s/nginx-base:v1.18.0

3.1.3 Build the Nginx Application Image

#Nginx 1.18.0
FROM harbor.k8s.local/k8s/nginx-base:v1.18.0

ADD nginx.conf /usr/local/nginx/conf/nginx.conf
ADD app1.tar.gz  /usr/local/nginx/html/
ADD index.html  /usr/local/nginx/html/index.html

#Mount paths for static assets
RUN mkdir -p /usr/local/nginx/html/webapp/static /usr/local/nginx/html/webapp/images

EXPOSE 80 443

CMD ["nginx"] 
# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t harbor.k8s.local/k8s/nginx-web1:${TAG} .
echo "Image build finished; pushing to harbor"
sleep 1
docker push harbor.k8s.local/k8s/nginx-web1:${TAG}
echo "Image pushed to harbor"
# cat nginx.conf
user  nginx nginx;
worker_processes  auto;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;
daemon off;

events {
    worker_connections  1024;
}


http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    upstream tomcat_webserver {       #test version; the forwarding to tomcat had initially been removed
        server tomcat-app1-service.nginx-tomcat.svc.k8s.local:80;
    }

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        location /webapp {
            root   html;
            index  index.html index.htm;
        }

        location /myapp {
             proxy_pass  http://tomcat_webserver;
             proxy_set_header   Host    $host;
             proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header X-Real-IP $remote_addr;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}

3.1.4 Build the JDK Base Image

#JDK Base Image
FROM harbor.k8s.local/k8s/centos-base:7.9.2009

MAINTAINER Eric.Zhang  mail@xx.com


ADD jdk-8u212-linux-x64.tar.gz /usr/local/src/
RUN ln -sv /usr/local/src/jdk1.8.0_212 /usr/local/jdk
ADD profile /etc/profile


ENV JAVA_HOME /usr/local/jdk
ENV JRE_HOME $JAVA_HOME/jre
ENV CLASSPATH $JAVA_HOME/lib/:$JRE_HOME/lib/
ENV PATH $PATH:$JAVA_HOME/bin
# cat build-command.sh
#!/bin/bash
docker build -t harbor.k8s.local/k8s/jdk-base:v8.212  .
sleep 1
docker push  harbor.k8s.local/k8s/jdk-base:v8.212

3.1.5 Build the Tomcat Base Image

#Tomcat 8.5.43 base image
FROM harbor.k8s.local/k8s/jdk-base:v8.212

MAINTAINER Eric.Zhang  mail@xx.com

RUN mkdir /apps /data/tomcat/webapps /data/tomcat/logs -pv
ADD apache-tomcat-8.5.43.tar.gz  /apps
RUN useradd tomcat -u 2022 && ln -sv /apps/apache-tomcat-8.5.43 /apps/tomcat && chown -R tomcat.tomcat /apps /data
# cat build-command.sh
#!/bin/bash
docker build -t harbor.k8s.local/k8s/tomcat-base:v8.5.43  .
sleep 3
docker push  harbor.k8s.local/k8s/tomcat-base:v8.5.43

3.1.6 Build the Tomcat Application Image

#tomcat web1
FROM harbor.k8s.local/k8s/tomcat-base:v8.5.43 

ADD catalina.sh /apps/tomcat/bin/catalina.sh
ADD server.xml /apps/tomcat/conf/server.xml
#ADD myapp/* /data/tomcat/webapps/myapp/
ADD app1.tar.gz /data/tomcat/webapps/myapp/
ADD run_tomcat.sh /apps/tomcat/bin/run_tomcat.sh
#ADD filebeat.yml /etc/filebeat/filebeat.yml
RUN chown  -R nginx.nginx /data/ /apps/
#ADD filebeat-7.5.1-x86_64.rpm /tmp/
#RUN cd /tmp && yum localinstall -y filebeat-7.5.1-amd64.deb

EXPOSE 8080 8443

CMD ["/apps/tomcat/bin/run_tomcat.sh"]
# cat build-command.sh
#!/bin/bash
TAG=$1
docker build -t  harbor.k8s.local/k8s/tomcat-app1:${TAG} .
sleep 3
docker push  harbor.k8s.local/k8s/tomcat-app1:${TAG}
# cat server.xml
<?xml version='1.0' encoding='utf-8'?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<!-- Note:  A "Server" is not itself a "Container", so you may not
     define subcomponents such as "Valves" at this level.
     Documentation at /docs/config/server.html
 -->
<Server port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
  <!-- Security listener. Documentation at /docs/config/listeners.html
  <Listener className="org.apache.catalina.security.SecurityListener" />
  -->
  <!--APR library loader. Documentation at /docs/apr.html -->
  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
  <!-- Prevent memory leaks due to use of particular java/javax APIs-->
  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- Global JNDI resources
       Documentation at /docs/jndi-resources-howto.html
  -->
  <GlobalNamingResources>
    <!-- Editable user database that can also be used by
         UserDatabaseRealm to authenticate users
    -->
    <Resource name="UserDatabase" auth="Container"
              type="org.apache.catalina.UserDatabase"
              description="User database that can be updated and saved"
              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
              pathname="conf/tomcat-users.xml" />
  </GlobalNamingResources>

  <!-- A "Service" is a collection of one or more "Connectors" that share
       a single "Container" Note:  A "Service" is not itself a "Container",
       so you may not define subcomponents such as "Valves" at this level.
       Documentation at /docs/config/service.html
   -->
  <Service name="Catalina">

    <!--The connectors can use a shared executor, you can define one or more named thread pools-->
    <!--
    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
        maxThreads="150" minSpareThreads="4"/>
    -->


    <!-- A "Connector" represents an endpoint by which requests are received
         and responses are returned. Documentation at :
         Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
         Java AJP  Connector: /docs/config/ajp.html
         APR (HTTP/AJP) Connector: /docs/apr.html
         Define a non-SSL/TLS HTTP/1.1 Connector on port 8080
    -->
    <Connector port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    <!-- A "Connector" using the shared thread pool-->
    <!--
    <Connector executor="tomcatThreadPool"
               port="8080" protocol="HTTP/1.1"
               connectionTimeout="20000"
               redirectPort="8443" />
    -->
    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443
         This connector uses the NIO implementation that requires the JSSE
         style configuration. When using the APR/native implementation, the
         OpenSSL style configuration is required as described in the APR/native
         documentation -->
    <!--
    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"
               clientAuth="false" sslProtocol="TLS" />
    -->

    <!-- Define an AJP 1.3 Connector on port 8009 -->
    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />


    <!-- An Engine represents the entry point (within Catalina) that processes
         every request.  The Engine implementation for Tomcat stand alone
         analyzes the HTTP headers included with the request, and passes them
         on to the appropriate Host (virtual host).
         Documentation at /docs/config/engine.html -->

    <!-- You should set jvmRoute to support load-balancing via AJP ie :
    <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">
    -->
    <Engine name="Catalina" defaultHost="localhost">

      <!--For clustering, please take a look at documentation at:
          /docs/cluster-howto.html  (simple how to)
          /docs/config/cluster.html (reference documentation) -->
      <!--
      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
      -->

      <!-- Use the LockOutRealm to prevent attempts to guess user passwords
           via a brute-force attack -->
      <Realm className="org.apache.catalina.realm.LockOutRealm">
        <!-- This Realm uses the UserDatabase configured in the global JNDI
             resources under the key "UserDatabase".  Any edits
             that are performed against this UserDatabase are immediately
             available for use by the Realm.  -->
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
      </Realm>

      <Host name="localhost"  appBase="/data/tomcat/webapps"  unpackWARs="false" autoDeploy="false">

        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        -->

        <!-- Access log processes all example.
             Documentation at: /docs/config/valve.html
             Note: The pattern used is equivalent to using pattern="common" -->
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log" suffix=".txt"
               pattern="%h %l %u %t &quot;%r&quot; %s %b" />

      </Host>
    </Engine>
  </Service>
</Server>
# cat catalina.sh
#!/bin/sh

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# -----------------------------------------------------------------------------
# Control Script for the CATALINA Server
#
# Environment Variable Prerequisites
#
#   Do not set the variables in this script. Instead put them into a script
#   setenv.sh in CATALINA_BASE/bin to keep your customizations separate.
#
#   CATALINA_HOME   May point at your Catalina "build" directory.
#
#   CATALINA_BASE   (Optional) Base directory for resolving dynamic portions
#                   of a Catalina installation.  If not present, resolves to
#                   the same directory that CATALINA_HOME points to.
#
#   CATALINA_OUT    (Optional) Full path to a file where stdout and stderr
#                   will be redirected.
#                   Default is $CATALINA_BASE/logs/catalina.out
#
#   CATALINA_OPTS   (Optional) Java runtime options used when the "start",
#                   "run" or "debug" command is executed.
#                   Include here and not in JAVA_OPTS all options, that should
#                   only be used by Tomcat itself, not by the stop process,
#                   the version command etc.
#                   Examples are heap size, GC logging, JMX ports etc.
#
#   CATALINA_TMPDIR (Optional) Directory path location of temporary directory
#                   the JVM should use (java.io.tmpdir).  Defaults to
#                   $CATALINA_BASE/temp.
#
#   JAVA_HOME       Must point at your Java Development Kit installation.
#                   Required to run the with the "debug" argument.
#
#   JRE_HOME        Must point at your Java Runtime installation.
#                   Defaults to JAVA_HOME if empty. If JRE_HOME and JAVA_HOME
#                   are both set, JRE_HOME is used.
#
#   JAVA_OPTS       (Optional) Java runtime options used when any command
#                   is executed.
#                   Include here and not in CATALINA_OPTS all options, that
#                   should be used by Tomcat and also by the stop process,
#                   the version command etc.
#                   Most options should go into CATALINA_OPTS.
#
#   JAVA_ENDORSED_DIRS (Optional) Lists of of colon separated directories
#                   containing some jars in order to allow replacement of APIs
#                   created outside of the JCP (i.e. DOM and SAX from W3C).
#                   It can also be used to update the XML parser implementation.
#                   Note that Java 9 no longer supports this feature.
#                   Defaults to $CATALINA_HOME/endorsed.
#
#   JPDA_TRANSPORT  (Optional) JPDA transport used when the "jpda start"
#                   command is executed. The default is "dt_socket".
#
#   JPDA_ADDRESS    (Optional) Java runtime options used when the "jpda start"
#                   command is executed. The default is localhost:8000.
#
#   JPDA_SUSPEND    (Optional) Java runtime options used when the "jpda start"
#                   command is executed. Specifies whether JVM should suspend
#                   execution immediately after startup. Default is "n".
#
#   JPDA_OPTS       (Optional) Java runtime options used when the "jpda start"
#                   command is executed. If used, JPDA_TRANSPORT, JPDA_ADDRESS,
#                   and JPDA_SUSPEND are ignored. Thus, all required jpda
#                   options MUST be specified. The default is:
#
#                   -agentlib:jdwp=transport=$JPDA_TRANSPORT,
#                       address=$JPDA_ADDRESS,server=y,suspend=$JPDA_SUSPEND
#
#   JSSE_OPTS       (Optional) Java runtime options used to control the TLS
#                   implementation when JSSE is used. Default is:
#                   "-Djdk.tls.ephemeralDHKeySize=2048"
#
#   CATALINA_PID    (Optional) Path of the file which should contains the pid
#                   of the catalina startup java process, when start (fork) is
#                   used
#
#   LOGGING_CONFIG  (Optional) Override Tomcat's logging config file
#                   Example (all one line)
#                   LOGGING_CONFIG="-Djava.util.logging.config.file=$CATALINA_BASE/conf/logging.properties"
#
#   LOGGING_MANAGER (Optional) Override Tomcat's logging manager
#                   Example (all one line)
#                   LOGGING_MANAGER="-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"
#
#   USE_NOHUP       (Optional) If set to the string true the start command will
#                   use nohup so that the Tomcat process will ignore any hangup
#                   signals. Default is "false" unless running on HP-UX in which
#                   case the default is "true"
# -----------------------------------------------------------------------------

JAVA_OPTS="-server -Xms1g -Xmx1g -Xss512k -Xmn1g -XX:CMSInitiatingOccupancyFraction=65  -XX:+UseFastAccessorMethods -XX:+AggressiveOpts -XX:+UseBiasedLocking -XX:+DisableExplicitGC -XX:MaxTenuringThreshold=10 -XX:NewSize=2048M -XX:MaxNewSize=2048M -XX:NewRatio=2 -XX:PermSize=128m -XX:MaxPermSize=512m -XX:CMSFullGCsBeforeCompaction=5 -XX:+ExplicitGCInvokesConcurrent -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:+CMSParallelRemarkEnabled"

# OS specific support.  $var _must_ be set to either true or false.
cygwin=false
darwin=false
os400=false
hpux=false
case "`uname`" in
CYGWIN*) cygwin=true;;
Darwin*) darwin=true;;
OS400*) os400=true;;
HP-UX*) hpux=true;;
esac

# resolve links - $0 may be a softlink
PRG="$0"

while [ -h "$PRG" ]; do
  ls=`ls -ld "$PRG"`
  link=`expr "$ls" : '.*-> \(.*\)$'`
  if expr "$link" : '/.*' > /dev/null; then
    PRG="$link"
  else
    PRG=`dirname "$PRG"`/"$link"
  fi
done

# Get standard environment variables
PRGDIR=`dirname "$PRG"`

# Only set CATALINA_HOME if not already set
[ -z "$CATALINA_HOME" ] && CATALINA_HOME=`cd "$PRGDIR/.." >/dev/null; pwd`

# Copy CATALINA_BASE from CATALINA_HOME if not already set
[ -z "$CATALINA_BASE" ] && CATALINA_BASE="$CATALINA_HOME"

# Ensure that any user defined CLASSPATH variables are not used on startup,
# but allow them to be specified in setenv.sh, in rare case when it is needed.
CLASSPATH=

if [ -r "$CATALINA_BASE/bin/setenv.sh" ]; then
  . "$CATALINA_BASE/bin/setenv.sh"
elif [ -r "$CATALINA_HOME/bin/setenv.sh" ]; then
  . "$CATALINA_HOME/bin/setenv.sh"
fi

# For Cygwin, ensure paths are in UNIX format before anything is touched
if $cygwin; then
  [ -n "$JAVA_HOME" ] && JAVA_HOME=`cygpath --unix "$JAVA_HOME"`
  [ -n "$JRE_HOME" ] && JRE_HOME=`cygpath --unix "$JRE_HOME"`
  [ -n "$CATALINA_HOME" ] && CATALINA_HOME=`cygpath --unix "$CATALINA_HOME"`
  [ -n "$CATALINA_BASE" ] && CATALINA_BASE=`cygpath --unix "$CATALINA_BASE"`
  [ -n "$CLASSPATH" ] && CLASSPATH=`cygpath --path --unix "$CLASSPATH"`
fi

# Ensure that neither CATALINA_HOME nor CATALINA_BASE contains a colon
# as this is used as the separator in the classpath and Java provides no
# mechanism for escaping if the same character appears in the path.
case $CATALINA_HOME in
  *:*) echo "Using CATALINA_HOME:   $CATALINA_HOME";
       echo "Unable to start as CATALINA_HOME contains a colon (:) character";
       exit 1;
esac
case $CATALINA_BASE in
  *:*) echo "Using CATALINA_BASE:   $CATALINA_BASE";
       echo "Unable to start as CATALINA_BASE contains a colon (:) character";
       exit 1;
esac

# For OS400
if $os400; then
  # Set job priority to standard for interactive (interactive - 6) by using
  # the interactive priority - 6, the helper threads that respond to requests
  # will be running at the same priority as interactive jobs.
  COMMAND='chgjob job('$JOBNAME') runpty(6)'
  system $COMMAND

  # Enable multi threading
  export QIBM_MULTI_THREADED=Y
fi

# Get standard Java environment variables
if $os400; then
  # -r will Only work on the os400 if the files are:
  # 1. owned by the user
  # 2. owned by the PRIMARY group of the user
  # this will not work if the user belongs in secondary groups
  . "$CATALINA_HOME"/bin/setclasspath.sh
else
  if [ -r "$CATALINA_HOME"/bin/setclasspath.sh ]; then
    . "$CATALINA_HOME"/bin/setclasspath.sh
  else
    echo "Cannot find $CATALINA_HOME/bin/setclasspath.sh"
    echo "This file is needed to run this program"
    exit 1
  fi
fi

# Add on extra jar files to CLASSPATH
if [ ! -z "$CLASSPATH" ] ; then
  CLASSPATH="$CLASSPATH":
fi
CLASSPATH="$CLASSPATH""$CATALINA_HOME"/bin/bootstrap.jar

if [ -z "$CATALINA_OUT" ] ; then
  CATALINA_OUT="$CATALINA_BASE"/logs/catalina.out
fi

if [ -z "$CATALINA_TMPDIR" ] ; then
  # Define the java.io.tmpdir to use for Catalina
  CATALINA_TMPDIR="$CATALINA_BASE"/temp
fi

# Add tomcat-juli.jar to classpath
# tomcat-juli.jar can be over-ridden per instance
if [ -r "$CATALINA_BASE/bin/tomcat-juli.jar" ] ; then
  CLASSPATH=$CLASSPATH:$CATALINA_BASE/bin/tomcat-juli.jar
else
  CLASSPATH=$CLASSPATH:$CATALINA_HOME/bin/tomcat-juli.jar
fi

# Bugzilla 37848: When no TTY is available, don't output to console
have_tty=0
if [ "`tty`" != "not a tty" ]; then
    have_tty=1
fi

# For Cygwin, switch paths to Windows format before running java
if $cygwin; then
  JAVA_HOME=`cygpath --absolute --windows "$JAVA_HOME"`
  JRE_HOME=`cygpath --absolute --windows "$JRE_HOME"`
  CATALINA_HOME=`cygpath --absolute --windows "$CATALINA_HOME"`
  CATALINA_BASE=`cygpath --absolute --windows "$CATALINA_BASE"`
  CATALINA_TMPDIR=`cygpath --absolute --windows "$CATALINA_TMPDIR"`
  CLASSPATH=`cygpath --path --windows "$CLASSPATH"`
  JAVA_ENDORSED_DIRS=`cygpath --path --windows "$JAVA_ENDORSED_DIRS"`
fi

if [ -z "$JSSE_OPTS" ] ; then
  JSSE_OPTS="-Djdk.tls.ephemeralDHKeySize=2048"
fi
JAVA_OPTS="$JAVA_OPTS $JSSE_OPTS"

# Register custom URL handlers
# Do this here so custom URL handles (specifically 'war:...') can be used in the security policy
JAVA_OPTS="$JAVA_OPTS -Djava.protocol.handler.pkgs=org.apache.catalina.webresources"

# Set juli LogManager config file if it is present and an override has not been issued
if [ -z "$LOGGING_CONFIG" ]; then
  if [ -r "$CATALINA_BASE"/conf/logging.properties ]; then
    LOGGING_CONFIG="-Djava.util.logging.config.file=$CATALINA_BASE/conf/logging.properties"
  else
    # Bugzilla 45585
    LOGGING_CONFIG="-Dnop"
  fi
fi

if [ -z "$LOGGING_MANAGER" ]; then
  LOGGING_MANAGER="-Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager"
fi

# Java 9 no longer supports the java.endorsed.dirs
# system property. Only try to use it if
# JAVA_ENDORSED_DIRS was explicitly set
# or CATALINA_HOME/endorsed exists.
ENDORSED_PROP=ignore.endorsed.dirs
if [ -n "$JAVA_ENDORSED_DIRS" ]; then
    ENDORSED_PROP=java.endorsed.dirs
fi
if [ -d "$CATALINA_HOME/endorsed" ]; then
    ENDORSED_PROP=java.endorsed.dirs
fi

# Uncomment the following line to make the umask available when using the
# org.apache.catalina.security.SecurityListener
#JAVA_OPTS="$JAVA_OPTS -Dorg.apache.catalina.security.SecurityListener.UMASK=`umask`"

if [ -z "$USE_NOHUP" ]; then
    if $hpux; then
        USE_NOHUP="true"
    else
        USE_NOHUP="false"
    fi
fi
unset _NOHUP
if [ "$USE_NOHUP" = "true" ]; then
    _NOHUP=nohup
fi

# Add the JAVA 9 specific start-up parameters required by Tomcat
JDK_JAVA_OPTIONS="$JDK_JAVA_OPTIONS --add-opens=java.base/java.lang=ALL-UNNAMED"
JDK_JAVA_OPTIONS="$JDK_JAVA_OPTIONS --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED"
export JDK_JAVA_OPTIONS

# ----- Execute The Requested Command -----------------------------------------

# Bugzilla 37848: only output this if we have a TTY
if [ $have_tty -eq 1 ]; then
  echo "Using CATALINA_BASE:   $CATALINA_BASE"
  echo "Using CATALINA_HOME:   $CATALINA_HOME"
  echo "Using CATALINA_TMPDIR: $CATALINA_TMPDIR"
  if [ "$1" = "debug" ] ; then
    echo "Using JAVA_HOME:       $JAVA_HOME"
  else
    echo "Using JRE_HOME:        $JRE_HOME"
  fi
  echo "Using CLASSPATH:       $CLASSPATH"
  if [ ! -z "$CATALINA_PID" ]; then
    echo "Using CATALINA_PID:    $CATALINA_PID"
  fi
fi

if [ "$1" = "jpda" ] ; then
  if [ -z "$JPDA_TRANSPORT" ]; then
    JPDA_TRANSPORT="dt_socket"
  fi
  if [ -z "$JPDA_ADDRESS" ]; then
    JPDA_ADDRESS="localhost:8000"
  fi
  if [ -z "$JPDA_SUSPEND" ]; then
    JPDA_SUSPEND="n"
  fi
  if [ -z "$JPDA_OPTS" ]; then
    JPDA_OPTS="-agentlib:jdwp=transport=$JPDA_TRANSPORT,address=$JPDA_ADDRESS,server=y,suspend=$JPDA_SUSPEND"
  fi
  CATALINA_OPTS="$JPDA_OPTS $CATALINA_OPTS"
  shift
fi

if [ "$1" = "debug" ] ; then
  if $os400; then
    echo "Debug command not available on OS400"
    exit 1
  else
    shift
    if [ "$1" = "-security" ] ; then
      if [ $have_tty -eq 1 ]; then
        echo "Using Security Manager"
      fi
      shift
      exec "$_RUNJDB" "$LOGGING_CONFIG" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
        -D$ENDORSED_PROP="$JAVA_ENDORSED_DIRS" \
        -classpath "$CLASSPATH" \
        -sourcepath "$CATALINA_HOME"/../../java \
        -Djava.security.manager \
        -Djava.security.policy=="$CATALINA_BASE"/conf/catalina.policy \
        -Dcatalina.base="$CATALINA_BASE" \
        -Dcatalina.home="$CATALINA_HOME" \
        -Djava.io.tmpdir="$CATALINA_TMPDIR" \
        org.apache.catalina.startup.Bootstrap "$@" start
    else
      exec "$_RUNJDB" "$LOGGING_CONFIG" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
        -D$ENDORSED_PROP="$JAVA_ENDORSED_DIRS" \
        -classpath "$CLASSPATH" \
        -sourcepath "$CATALINA_HOME"/../../java \
        -Dcatalina.base="$CATALINA_BASE" \
        -Dcatalina.home="$CATALINA_HOME" \
        -Djava.io.tmpdir="$CATALINA_TMPDIR" \
        org.apache.catalina.startup.Bootstrap "$@" start
    fi
  fi

elif [ "$1" = "run" ]; then

  shift
  if [ "$1" = "-security" ] ; then
    if [ $have_tty -eq 1 ]; then
      echo "Using Security Manager"
    fi
    shift
    eval exec "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
      -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
      -classpath "\"$CLASSPATH\"" \
      -Djava.security.manager \
      -Djava.security.policy=="\"$CATALINA_BASE/conf/catalina.policy\"" \
      -Dcatalina.base="\"$CATALINA_BASE\"" \
      -Dcatalina.home="\"$CATALINA_HOME\"" \
      -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
      org.apache.catalina.startup.Bootstrap "$@" start
  else
    eval exec "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
      -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
      -classpath "\"$CLASSPATH\"" \
      -Dcatalina.base="\"$CATALINA_BASE\"" \
      -Dcatalina.home="\"$CATALINA_HOME\"" \
      -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
      org.apache.catalina.startup.Bootstrap "$@" start
  fi

elif [ "$1" = "start" ] ; then

  if [ ! -z "$CATALINA_PID" ]; then
    if [ -f "$CATALINA_PID" ]; then
      if [ -s "$CATALINA_PID" ]; then
        echo "Existing PID file found during start."
        if [ -r "$CATALINA_PID" ]; then
          PID=`cat "$CATALINA_PID"`
          ps -p $PID >/dev/null 2>&1
          if [ $? -eq 0 ] ; then
            echo "Tomcat appears to still be running with PID $PID. Start aborted."
            echo "If the following process is not a Tomcat process, remove the PID file and try again:"
            ps -f -p $PID
            exit 1
          else
            echo "Removing/clearing stale PID file."
            rm -f "$CATALINA_PID" >/dev/null 2>&1
            if [ $? != 0 ]; then
              if [ -w "$CATALINA_PID" ]; then
                cat /dev/null > "$CATALINA_PID"
              else
                echo "Unable to remove or clear stale PID file. Start aborted."
                exit 1
              fi
            fi
          fi
        else
          echo "Unable to read PID file. Start aborted."
          exit 1
        fi
      else
        rm -f "$CATALINA_PID" >/dev/null 2>&1
        if [ $? != 0 ]; then
          if [ ! -w "$CATALINA_PID" ]; then
            echo "Unable to remove or write to empty PID file. Start aborted."
            exit 1
          fi
        fi
      fi
    fi
  fi

  shift
  touch "$CATALINA_OUT"
  if [ "$1" = "-security" ] ; then
    if [ $have_tty -eq 1 ]; then
      echo "Using Security Manager"
    fi
    shift
    eval $_NOHUP "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
      -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
      -classpath "\"$CLASSPATH\"" \
      -Djava.security.manager \
      -Djava.security.policy=="\"$CATALINA_BASE/conf/catalina.policy\"" \
      -Dcatalina.base="\"$CATALINA_BASE\"" \
      -Dcatalina.home="\"$CATALINA_HOME\"" \
      -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
      org.apache.catalina.startup.Bootstrap "$@" start \
      >> "$CATALINA_OUT" 2>&1 "&"

  else
    eval $_NOHUP "\"$_RUNJAVA\"" "\"$LOGGING_CONFIG\"" $LOGGING_MANAGER $JAVA_OPTS $CATALINA_OPTS \
      -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
      -classpath "\"$CLASSPATH\"" \
      -Dcatalina.base="\"$CATALINA_BASE\"" \
      -Dcatalina.home="\"$CATALINA_HOME\"" \
      -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
      org.apache.catalina.startup.Bootstrap "$@" start \
      >> "$CATALINA_OUT" 2>&1 "&"

  fi

  if [ ! -z "$CATALINA_PID" ]; then
    echo $! > "$CATALINA_PID"
  fi

  echo "Tomcat started."

elif [ "$1" = "stop" ] ; then

  shift

  SLEEP=5
  if [ ! -z "$1" ]; then
    echo $1 | grep "[^0-9]" >/dev/null 2>&1
    if [ $? -gt 0 ]; then
      SLEEP=$1
      shift
    fi
  fi

  FORCE=0
  if [ "$1" = "-force" ]; then
    shift
    FORCE=1
  fi

  if [ ! -z "$CATALINA_PID" ]; then
    if [ -f "$CATALINA_PID" ]; then
      if [ -s "$CATALINA_PID" ]; then
        kill -0 `cat "$CATALINA_PID"` >/dev/null 2>&1
        if [ $? -gt 0 ]; then
          echo "PID file found but no matching process was found. Stop aborted."
          exit 1
        fi
      else
        echo "PID file is empty and has been ignored."
      fi
    else
      echo "\$CATALINA_PID was set but the specified file does not exist. Is Tomcat running? Stop aborted."
      exit 1
    fi
  fi

  eval "\"$_RUNJAVA\"" $LOGGING_MANAGER $JAVA_OPTS \
    -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
    -classpath "\"$CLASSPATH\"" \
    -Dcatalina.base="\"$CATALINA_BASE\"" \
    -Dcatalina.home="\"$CATALINA_HOME\"" \
    -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
    org.apache.catalina.startup.Bootstrap "$@" stop

  # stop failed. Shutdown port disabled? Try a normal kill.
  if [ $? != 0 ]; then
    if [ ! -z "$CATALINA_PID" ]; then
      echo "The stop command failed. Attempting to signal the process to stop through OS signal."
      kill -15 `cat "$CATALINA_PID"` >/dev/null 2>&1
    fi
  fi

  if [ ! -z "$CATALINA_PID" ]; then
    if [ -f "$CATALINA_PID" ]; then
      while [ $SLEEP -ge 0 ]; do
        kill -0 `cat "$CATALINA_PID"` >/dev/null 2>&1
        if [ $? -gt 0 ]; then
          rm -f "$CATALINA_PID" >/dev/null 2>&1
          if [ $? != 0 ]; then
            if [ -w "$CATALINA_PID" ]; then
              cat /dev/null > "$CATALINA_PID"
              # If Tomcat has stopped don't try and force a stop with an empty PID file
              FORCE=0
            else
              echo "The PID file could not be removed or cleared."
            fi
          fi
          echo "Tomcat stopped."
          break
        fi
        if [ $SLEEP -gt 0 ]; then
          sleep 1
        fi
        if [ $SLEEP -eq 0 ]; then
          echo "Tomcat did not stop in time."
          if [ $FORCE -eq 0 ]; then
            echo "PID file was not removed."
          fi
          echo "To aid diagnostics a thread dump has been written to standard out."
          kill -3 `cat "$CATALINA_PID"`
        fi
        SLEEP=`expr $SLEEP - 1 `
      done
    fi
  fi

  KILL_SLEEP_INTERVAL=5
  if [ $FORCE -eq 1 ]; then
    if [ -z "$CATALINA_PID" ]; then
      echo "Kill failed: \$CATALINA_PID not set"
    else
      if [ -f "$CATALINA_PID" ]; then
        PID=`cat "$CATALINA_PID"`
        echo "Killing Tomcat with the PID: $PID"
        kill -9 $PID
        while [ $KILL_SLEEP_INTERVAL -ge 0 ]; do
            kill -0 `cat "$CATALINA_PID"` >/dev/null 2>&1
            if [ $? -gt 0 ]; then
                rm -f "$CATALINA_PID" >/dev/null 2>&1
                if [ $? != 0 ]; then
                    if [ -w "$CATALINA_PID" ]; then
                        cat /dev/null > "$CATALINA_PID"
                    else
                        echo "The PID file could not be removed."
                    fi
                fi
                echo "The Tomcat process has been killed."
                break
            fi
            if [ $KILL_SLEEP_INTERVAL -gt 0 ]; then
                sleep 1
            fi
            KILL_SLEEP_INTERVAL=`expr $KILL_SLEEP_INTERVAL - 1 `
        done
        if [ $KILL_SLEEP_INTERVAL -lt 0 ]; then
            echo "Tomcat has not been killed completely yet. The process might be waiting on some system call or might be UNINTERRUPTIBLE."
        fi
      fi
    fi
  fi

elif [ "$1" = "configtest" ] ; then

    eval "\"$_RUNJAVA\"" $LOGGING_MANAGER $JAVA_OPTS \
      -D$ENDORSED_PROP="\"$JAVA_ENDORSED_DIRS\"" \
      -classpath "\"$CLASSPATH\"" \
      -Dcatalina.base="\"$CATALINA_BASE\"" \
      -Dcatalina.home="\"$CATALINA_HOME\"" \
      -Djava.io.tmpdir="\"$CATALINA_TMPDIR\"" \
      org.apache.catalina.startup.Bootstrap configtest
    result=$?
    if [ $result -ne 0 ]; then
        echo "Configuration error detected!"
    fi
    exit $result

elif [ "$1" = "version" ] ; then

    "$_RUNJAVA"   \
      -classpath "$CATALINA_HOME/lib/catalina.jar" \
      org.apache.catalina.util.ServerInfo

else

  echo "Usage: catalina.sh ( commands ... )"
  echo "commands:"
  if $os400; then
    echo "  debug             Start Catalina in a debugger (not available on OS400)"
    echo "  debug -security   Debug Catalina with a security manager (not available on OS400)"
  else
    echo "  debug             Start Catalina in a debugger"
    echo "  debug -security   Debug Catalina with a security manager"
  fi
  echo "  jpda start        Start Catalina under JPDA debugger"
  echo "  run               Start Catalina in the current window"
  echo "  run -security     Start in the current window with security manager"
  echo "  start             Start Catalina in a separate window"
  echo "  start -security   Start in a separate window with security manager"
  echo "  stop              Stop Catalina, waiting up to 5 seconds for the process to end"
  echo "  stop n            Stop Catalina, waiting up to n seconds for the process to end"
  echo "  stop -force       Stop Catalina, wait up to 5 seconds and then use kill -KILL if still running"
  echo "  stop n -force     Stop Catalina, wait up to n seconds and then use kill -KILL if still running"
  echo "  configtest        Run a basic syntax check on server.xml - check exit code for result"
  echo "  version           What version of tomcat are you running?"
  echo "Note: Waiting for the process to end and use of the -force option require that \$CATALINA_PID is defined"
  exit 1

fi
# cat run_tomcat.sh
#!/bin/bash
#echo "nameserver 223.6.6.6" > /etc/resolv.conf
#echo "192.168.7.248 k8s-vip.example.com" >> /etc/hosts

#/usr/share/filebeat/bin/filebeat -e -c /etc/filebeat/filebeat.yml -path.home /usr/share/filebeat -path.config /etc/filebeat -path.data /var/lib/filebeat -path.logs /var/log/filebeat &
su - nginx -c "/apps/tomcat/bin/catalina.sh start" #start as the nginx user because it serves requests forwarded from nginx; running as a different user can cause 404s and similar errors.
tail -f /etc/hosts   #keeps PID 1 alive after catalina.sh forks into the background

3.2 Run Nginx on K8S

# cat nginx.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: nginx-deployment-label
  name: nginx-deployment
  namespace: nginx-tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-selector
  template:
    metadata:
      labels:
        app: nginx-selector
    spec:
      containers:
      - name: nginx-container
        image: harbor.k8s.local/k8s/nginx-web1:v3
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "20"
        resources:
          limits:
            cpu: 2
            memory: 2Gi
          requests:
            cpu: 500m
            memory: 1Gi

        volumeMounts:
        - name: images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: images
        nfs:
          server: 192.168.6.87
          path: /data/k8sdata/nginx-tomcat/images
      - name: static
        nfs:
          server: 192.168.6.87
          path: /data/k8sdata/nginx-tomcat/static
      #nodeSelector:
      #  group: magedu

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: nginx-service-label
  name: nginx-service
  namespace: nginx-tomcat
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 40002
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 40443
  selector:
    app: nginx-selector
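
The nginx-web1:v3 image is assumed to carry an nginx config that serves the static NFS-backed content locally and forwards dynamic requests to the tomcat Service from section 3.3 via its cluster DNS name. A minimal sketch of such a config (the upstream name and the /myapp path are placeholders; the real file baked into the image is not shown here):

upstream tomcat_server {
    server tomcat-app1-service.nginx-tomcat.svc.cluster.local:80;
}

server {
    listen 80;
    location /static {
        root /usr/local/nginx/html/webapp;   # /static maps to the NFS-mounted directory
    }
    location /myapp {
        proxy_pass http://tomcat_server;     # dynamic requests go on to tomcat
        proxy_set_header Host $host;
    }
}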

3.3 Running Tomcat on K8S

kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: tomcat-app1-deployment-label
  name: tomcat-app1-deployment
  namespace: nginx-tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-app1-selector
  template:
    metadata:
      labels:
        app: tomcat-app1-selector
    spec:
      containers:
      - name: tomcat-app1-container
        image: harbor.k8s.local/k8s/tomcat-app1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
        resources:
          limits:
            cpu: 1
            memory: "512Mi"
          requests:
            cpu: 500m
            memory: "512Mi"
        volumeMounts:
        - name: images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: images
        nfs:
          server: 192.168.6.87
          path: /data/k8sdata/nginx-tomcat/images
      - name: static
        nfs:
          server: 192.168.6.87
          path: /data/k8sdata/nginx-tomcat/static
#      nodeSelector:
#        project: test
#        app: tomcat
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: tomcat-app1-service-label
  name: tomcat-app1-service
  namespace: nginx-tomcat
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 40003
  selector:
    app: tomcat-app1-selector
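
Assuming the two manifests above are saved as nginx.yaml and tomcat.yaml (the filenames are an assumption) and the nginx-tomcat namespace already exists, a quick deploy-and-verify pass looks like this:

~# kubectl apply -f nginx.yaml -f tomcat.yaml
~# kubectl get pod,svc -n nginx-tomcat -o wide
# static content is served by nginx on nodePort 40002; app requests are proxied on to tomcat-app1-service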

4. Persisting and sharing data on K8S with Ceph RBD and CephFS

4.1 Using RBD with k8s: storage volumes and dynamic provisioning

For k8s to use Ceph RBD images as storage devices, the RBD must be created on the Ceph side first, and the k8s nodes must be able to pass Ceph authentication.

When k8s uses Ceph for dynamic volume storage, the kube-controller-manager component must be able to access Ceph, so the authentication files need to be configured on every node, k8s masters and workers included. RBD is generally used by pods that need fully independent data (e.g. MySQL, a zookeeper cluster).

4.1.1 Create and initialize the RBD pool

# Create the storage pool
~# ceph osd pool create rbd-k8s-pool1 32 32

# Enable RBD on the pool
~# ceph osd pool application enable rbd-k8s-pool1 rbd

# Initialize the pool for RBD
~# rbd pool init -p rbd-k8s-pool1

4.1.2 Create the image

# Create the image
~# rbd create k8s-img1 --size 3G --pool rbd-k8s-pool1 --image-format 2 --image-feature layering

# Verify
~# rbd ls --pool rbd-k8s-pool1
~# rbd --image k8s-img1 --pool rbd-k8s-pool1 info
rbd image 'k8s-img1':
        size 3 GiB in 768 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 4ccb05ae12403
        block_name_prefix: rbd_data.4ccb05ae12403
        format: 2
        features: layering
        op_features:
        flags:
        create_timestamp: Thu Oct 14 23:22:06 2021
        access_timestamp: Thu Oct 14 23:22:06 2021
        modify_timestamp: Thu Oct 14 23:22:06 2021

4.1.3 Install ceph-common on the clients

4.1.3.1 Import the release key on every master and node

# wget -q -O-  'https://download.ceph.com/keys/release.asc' |sudo apt-key add -

4.1.3.2 Configure the package mirror and install ceph-common

Configure the package source on all masters and nodes; the ceph version must match the ceph cluster.

~# cat /etc/apt/sources.list
# Source-package mirrors are commented out by default to speed up apt update; uncomment if needed
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-updates main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-backports main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-security main restricted universe multiverse

# Pre-release source; not recommended
# deb https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
# deb-src https://mirrors.tuna.tsinghua.edu.cn/ubuntu/ focal-proposed main restricted universe multiverse
deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific focal main
deb https://mirrors.tuna.tsinghua.edu.cn/ceph/debian-pacific focal main

~# apt update

~# apt-cache madison ceph-common

~# apt install ceph-common=16.2.6-1focal -y

4.1.4 Create the Ceph user and grant permissions

# Create the user
~# ceph auth get-or-create client.k8s-rbd mon 'allow r' osd 'allow * pool=rbd-k8s-pool1'
[client.k8s-rbd]
        key = AQCeXWhhFyrlDRAA0ynWQ6+yEJvfUnWOGWsm0A==

# Verify the user
~# ceph auth get client.k8s-rbd
[client.k8s-rbd]
        key = AQCeXWhhFyrlDRAA0ynWQ6+yEJvfUnWOGWsm0A==
        caps mon = "allow r"
        caps osd = "allow * pool=rbd-k8s-pool1"
exported keyring for client.k8s-rbd

# Export the user's info to a keyring file
~# ceph auth get client.k8s-rbd -o ceph.client.k8s-rbd.keyring

# Sync the files to all k8s masters and nodes
~# scp /etc/ceph/ceph.conf ceph.client.k8s-rbd.keyring root@192.168.6.79:/etc/ceph/
...

# Verify the user's permissions on each k8s node
~# ceph --user k8s-rbd -s
~# rbd --id k8s-rbd ls --pool=rbd-k8s-pool1 

4.1.5 Configure hostname resolution on all k8s nodes

~# cat /etc/hosts
...
192.168.6.64    ceph_mon1.cluster.exp   ceph_mon1
192.168.6.65    ceph_mon2.cluster.exp   ceph_mon2
192.168.6.66    ceph_mon3.cluster.exp   ceph_mon3
192.168.6.67    ceph_node1.cluster.exp  ceph_node1
192.168.6.68    ceph_node2.cluster.exp  ceph_node2
192.168.6.72    ceph_node3.cluster.exp  ceph_node3
192.168.6.71    ceph_node4.cluster.exp  ceph_node4
192.168.6.69    ceph_mgr1.cluster.exp   ceph_mgr1
192.168.6.70    ceph_mgr2.cluster.exp   ceph_mgr2
192.168.6.73    ceph_deploy.cluster.exp ceph_deploy

4.1.6 Mount the RBD with a keyring file

A Ceph RBD-backed storage volume can be provided to pods in two ways: either mount the RBD through the keyring file on the host, or define the key from the keyring as a k8s secret and have the pod mount the RBD through that secret.

4.1.6.1 Mount directly via the keyring file (busybox)

apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
      - sleep
      - "3600"
    imagePullPolicy: Always
    name: busybox
    #restartPolicy: Always
    volumeMounts:
    - name: rbd-data1
      mountPath: /data
  volumes:
    - name: rbd-data1
      rbd:
        monitors:
        - '172.16.244.11:6789'
        - '172.16.244.12:6789'
        - '172.16.244.13:6789'
        pool: rbd-k8s-pool1
        image: k8s-img1
        fsType: ext4
        readOnly: false
        user: k8s-rbd
        keyring: /etc/ceph/ceph.client.k8s-rbd.keyring
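
A quick way to verify the mount (the manifest filename is an assumption):

~# kubectl apply -f case1-busybox-keyring.yaml
~# kubectl exec -it busybox -- df -h /data
~# kubectl exec -it busybox -- sh -c 'echo rbd-test > /data/test.txt && cat /data/test.txt'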

4.1.6.2 Mount directly via the keyring file (nginx)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80

        volumeMounts:
        - name: rbd-data1
          mountPath: /data
      volumes:
        - name: rbd-data1
          rbd:
            monitors:
            - '172.16.244.11:6789'
            - '172.16.244.12:6789'
            - '172.16.244.13:6789'
            pool: rbd-k8s-pool1
            image: k8s-img1
            fsType: ext4
            readOnly: false
            user: k8s-rbd
            keyring: /etc/ceph/ceph.client.k8s-rbd.keyring

4.1.6.3 Verify the RBD on the host

# root@node1:~# df -Th  # the rbd is actually mounted on the host first and then mapped into the container
...
/dev/rbd0      ext4      2.9G  9.0M  2.9G   1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-pool1-image-k8s-img1
...

# root@node1:~# rbd showmapped
id  pool           namespace  image     snap  device   
0   rbd-k8s-pool1             k8s-img1  -     /dev/rbd0

4.1.7 Mount the RBD with a secret

With the key defined as a secret and mounted into the pod, the k8s nodes no longer need to keep the keyring file.

4.1.7.1 Create the secret

First create the secret. It mainly holds the key from the authorized Ceph user's keyring file; base64-encode the key and put it into the secret.

# Encode the key
root@ceph_deploy:~/ansible# ceph auth print-key client.k8s-rbd
AQBSAG1hRfnOABAA6FCEoiq82YqdV4deJIkb1w==

root@ceph_deploy:~/ansible# ceph auth print-key client.k8s-rbd | base64
QVFCU0FHMWhSZm5PQUJBQTZGQ0VvaXE4MllxZFY0ZGVKSWtiMXc9PQ==

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-k8s-rbd
type: "kubernetes.io/rbd"
data:
  key: QVFCU0FHMWhSZm5PQUJBQTZGQ0VvaXE4MllxZFY0ZGVKSWtiMXc9PQ==
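
Apply the secret and confirm that its key field holds the base64 string produced above (the filename is an assumption):

~# kubectl apply -f ceph-secret-k8s-rbd.yaml
~# kubectl get secret ceph-secret-k8s-rbd -o yaml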

4.1.7.2 Create the pod

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80

        volumeMounts:
        - name: rbd-data1
          mountPath: /data
      volumes:
        - name: rbd-data1
          rbd:
            monitors:
            - '172.16.244.11:6789'
            - '172.16.244.12:6789'
            - '172.16.244.13:6789'
            pool: rbd-k8s-pool1
            image: k8s-img1
            fsType: ext4
            readOnly: false
            user: k8s-rbd
            secretRef:
              name: ceph-secret-k8s-rbd 

4.1.7.3 Verify the mount inside the pod

root@deploy:~/exp# kubectl exec -it nginx-deployment-66b4dcbc96-nwkxn bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-deployment-66b4dcbc96-nwkxn:/# df -h
overlay         120G   11G  110G   9% /
tmpfs            64M     0   64M   0% /dev
tmpfs           980M     0  980M   0% /sys/fs/cgroup
/dev/rbd0       2.9G  9.0M  2.9G   1% /data
/dev/sda2       120G   11G  110G   9% /etc/hosts
shm              64M     0   64M   0% /dev/shm
tmpfs           980M   12K  980M   1% /run/secrets/kubernetes.io/serviceaccount
tmpfs           980M     0  980M   0% /proc/acpi
tmpfs           980M     0  980M   0% /proc/scsi
tmpfs           980M     0  980M   0% /sys/firmware

4.1.7.4 Verify the mount on the host

# root@node1:~# df -Th  # the rbd is actually mounted on the host first and then mapped into the container
...
/dev/rbd0      ext4      2.9G  9.0M  2.9G   1% /var/lib/kubelet/plugins/kubernetes.io/rbd/mounts/rbd-k8s-pool1-image-k8s-img1
...

# root@node1:~# rbd showmapped
id  pool           namespace  image     snap  device   
0   rbd-k8s-pool1             k8s-img1  -     /dev/rbd0

4.1.8 Dynamic storage provisioning (requires k8s installed from binaries)

4.1.8.1 Create the secret for the admin user

root@ceph_deploy:~/ansible# ceph auth print-key client.admin | base64
QVFBZVIybGhXb3U1Q0JBQVVIMk1nRXIyZndUbTJ2c3R0UmUyc0E9PQ==

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
type: "kubernetes.io/rbd"
data:
  key: QVFBZVIybGhXb3U1Q0JBQVVIMk1nRXIyZndUbTJ2c3R0UmUyc0E9PQ== 

4.1.8.2 Create the secret for the regular user

  • Same as 4.1.7.1

4.1.8.3 Create the StorageClass

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class-ceph
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # set as the default storage class
provisioner: kubernetes.io/rbd
parameters:
  monitors: 172.16.244.11:6789,172.16.244.12:6789,172.16.244.13:6789
  adminId: admin   # the ceph admin account
  adminSecretName: ceph-secret-admin
  adminSecretNamespace: default 
  pool: rbd-k8s-pool1
  userId: k8s-rbd
  userSecretName: ceph-secret-k8s-rbd

root@deploy:~/exp# kubectl get sc
NAME                                PROVISIONER         RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
ceph-storage-class-ceph (default)   kubernetes.io/rbd   Delete          Immediate           false                  12s

root@ceph_deploy:~/ansible# rbd ls --pool rbd-k8s-pool1
k8s-img1
kubernetes-dynamic-pvc-d2eaac29-3628-448c-9042-dcb039d2f462

4.1.8.4 Create a PVC based on the StorageClass

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-storage-class-ceph 
  resources:
    requests:
      storage: '5Gi'

root@deploy:~/exp# kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
mysql-data-pvc   Bound    pvc-e84d0d23-6b32-47c9-9028-179bddb7e64f   5Gi        RWO            ceph-storage-class-ceph   8m58s

4.1.8.5 Run a standalone MySQL and verify

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: harbor.k8s.local/k8s/mysql:5.6.46 
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: test123456
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-data-pvc 


---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: mysql-service-label 
  name: mysql-service
spec:
  type: NodePort
  ports:
  - name: http
    port: 3306
    protocol: TCP
    targetPort: 3306
    nodePort: 43306
  selector:
    app: mysql
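
To verify, check that the pod is running and that MySQL answers on the nodePort (the node IP below is an assumption; any node IP works):

~# kubectl get pod -l app=mysql
~# mysql -uroot -ptest123456 -h 192.168.6.79 -P 43306 -e 'show databases;'
# inside the pod, /var/lib/mysql sits on the dynamically provisioned rbd device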

4.2 Using CephFS

4.2.1 Create the secret

  • Same as 4.1.7.1

4.2.2 Create the pod

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx
        ports:
        - containerPort: 80

        volumeMounts:
        - name: staticdata-cephfs 
          mountPath: /usr/share/nginx/html/ 
      volumes:
        - name: staticdata-cephfs
          cephfs:
            monitors:
            - '172.16.244.11:6789'
            - '172.16.244.12:6789'
            - '172.16.244.13:6789'
            path: /
            user: admin   # a regular user works too; admin is used here
            secretRef:
              name: ceph-secret-admin

4.2.3 Verify the mount inside the pod

root@deploy:~/exp# kubectl get pod
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-668744f975-254h5   1/1     Running   0          11m
nginx-deployment-668744f975-cxwtk   1/1     Running   0          11m
nginx-deployment-668744f975-dktfs   1/1     Running   0          11m


root@deploy:~/exp# kubectl exec -it  nginx-deployment-668744f975-254h5  bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-deployment-668744f975-254h5:/# df -h 
Filesystem                                                  Size  Used Avail Use% Mounted on
overlay  
...
172.16.244.11:6789,172.16.244.12:6789,172.16.244.13:6789:/  380G     0  380G   0% /usr/share/nginx/html
...

root@nginx-deployment-668744f975-254h5:/# echo "321312312"> /usr/share/nginx/html/index.html

4.2.4 Verify across multiple pod replicas

root@deploy:~/exp# kubectl exec -it nginx-deployment-668744f975-cxwtk bash

root@nginx-deployment-668744f975-cxwtk:/# cat /usr/share/nginx/html/index.html 
321312312

4.2.5 Verify on the host

See 4.1.7.4.

5. Implementing probe-based health checks for pods

5.1 Pod status

https://kubernetes.io/zh/docs/concepts/workloads/pods/pod-lifecycle/

Phase 1:

  • Pending: the pod is being created but not all of its containers have been created yet (for a pod in this state, check whether the storage it depends on can be mounted, whether the image can be pulled, whether scheduling works, etc.).
  • Failed: a container in the pod failed to start, so the pod cannot work properly.
  • Unknown: the pod's current state cannot be obtained for some reason, usually a communication error with the node hosting the pod.
  • Succeeded: all containers in the pod terminated successfully, i.e. every container is in the terminated state.

Phase 2:

  • Unschedulable: the pod cannot be scheduled; kube-scheduler found no suitable node.
  • PodScheduled: the pod is being scheduled; when kube-scheduler starts, the pod has not yet been assigned to a node, and once a suitable node is chosen the scheduler updates etcd and binds the pod to that node.
  • Initialized: all of the pod's init containers have completed.
  • ImagePullBackOff: the node hosting the pod failed to pull the image.
  • Running: all containers in the pod have been created and started.
  • Ready: the containers in the pod are ready to serve requests.
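
When a pod is stuck in one of the states above, the usual first steps are to inspect its events and logs:

~# kubectl get pod -o wide -n <namespace>              # which node, which state
~# kubectl describe pod <pod-name> -n <namespace>      # scheduling, image-pull and mount events
~# kubectl logs <pod-name> -n <namespace> --previous   # logs of the previously crashed container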

The full list of status values and their meaning:

  • Error: an error occurred while the pod was starting.
  • NodeLost: the node hosting the pod has lost contact.
  • Unknown: the node hosting the pod is unreachable, or some other unknown problem occurred.
  • Waiting: the pod is waiting to start.
  • Pending: the pod is waiting to be scheduled.
  • Terminating: the pod is being destroyed.
  • CrashLoopBackOff: the pod keeps starting and exiting.
  • InvalidImageName: the node cannot resolve the image name, so the image cannot be pulled.
  • ImageInspectError: the image cannot be inspected, usually because it is incomplete.
  • ErrImageNeverPull: the pull policy forbids pulling the image, e.g. the registry permissions are private.
  • ImagePullBackOff: the image pull failed but is being retried.
  • RegistryUnavailable: the image registry is unreachable (network problem or Harbor down).
  • ErrImagePull: the image pull errored out, timed out, or was forcibly aborted.
  • CreateContainerConfigError: the container configuration used by kubelet cannot be created.
  • CreateContainerError: the container could not be created.
  • PreStartContainer: the preStart hook failed. Pod hooks are invoked by the kubelet before the process in a container starts or before it terminates; e.g. after the container is created you can check whether its dependencies are up before starting the service, or stop the service via a command before the container exits.
  • PostStartHookError: the postStart hook failed.
  • RunContainerError: the pod failed to run, e.g. no PID 1 daemon was started inside the container.
  • ContainersNotInitialized: the pod has not finished initializing.
  • ContainersNotReady: the pod is not ready.
  • ContainerCreating: the pod is being created.
  • PodInitializing: the pod is initializing.
  • DockerDaemonNotReady: the docker service on the node is not running.
  • NetworkPluginNotReady: the network plugin has not fully started.

5.2 Pod probes

https://kubernetes.io/zh/docs/concepts/workloads/pods/pod-lifecycle/

5.2.1 Concepts

A probe is a periodic diagnostic performed by the kubelet on a container to keep the pod in a running state. To run the diagnostic, the kubelet calls a handler implemented by the container. There are three types of handler:

  • ExecAction: run a specified command inside the container; the diagnostic succeeds if the command exits with status 0.

  • TCPSocketAction: perform a TCP check against the container's IP on a specified port; the diagnostic succeeds if the port is open.

  • HTTPGetAction: perform an HTTP GET against the container's IP on a specified port and path; the diagnostic succeeds if the response status code is >= 200 and < 400.

Each probe yields one of three results:

  • Success: the container passed the diagnostic.
  • Failure: the container failed the diagnostic.
  • Unknown: the diagnostic itself failed, so no action is taken.

5.2.2 Configuring probes

5.2.2.1 Probe types

  • readinessProbe (readiness probe):

If the readiness probe fails, the endpoints controller removes the pod's IP address from the endpoints of every Service that matches the pod. Before the initial delay the readiness state defaults to Failure; if the container provides no readiness probe, the default state is Success. readinessProbe controls whether the pod is added to the Service.

  • livenessProbe (liveness probe):

Checks whether the container is running. If the liveness probe fails, the kubelet kills the container, and the container is then subject to its restart policy. If the container provides no liveness probe, the default state is Success. livenessProbe controls whether the pod is restarted.

5.2.2.2 Probe configuration

https://kubernetes.io/zh/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/

Probe configuration fields:

  • initialDelaySeconds: 120 # initial delay; how many seconds the kubelet waits before the first probe. Default 0, minimum 0.

  • periodSeconds: 60 # probe interval; how often (in seconds) the kubelet runs the probe. Default 10, minimum 1.

  • timeoutSeconds: 5 # per-probe timeout; how many seconds to wait before a probe times out. Default 1, minimum 1.

  • successThreshold: 1 # how many consecutive successes are required after a failure for the probe to be considered successful again. Default 1; for liveness probes this must be 1; minimum 1.

  • failureThreshold: 3 # how many consecutive failures are required after a success for the probe to be considered failed; on giving up, a liveness probe restarts the container, while a readiness probe marks the pod as not ready. Default 3, minimum 1.

HTTP probes support extra fields under httpGet:

  • host: # hostname to connect to; defaults to the pod IP. You can also set a "Host" HTTP header instead.

  • scheme: HTTP # scheme used to connect to the host (HTTP or HTTPS); defaults to HTTP.

  • path: /monitor/index.html # path of the HTTP endpoint to access.

  • httpHeaders: # custom HTTP headers for the request; duplicate header fields are allowed.

  • port: 80 # port number or port name of the container to access; a number must be between 1 and 65535.
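
A minimal sketch that puts the httpGet fields together (all values are placeholders, not taken from the examples below):

        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /monitor/index.html
            port: 80
            httpHeaders:
            - name: Host
              value: www.mysite.com
          initialDelaySeconds: 5
          periodSeconds: 10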

5.2.2.3 HTTP probe example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
    #matchExpressions:
    #  - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.17.5 
        ports:
        - containerPort: 80
        #readinessProbe:
        livenessProbe:
          httpGet:
            #path: /monitor/monitor.html
            path: /index1.html
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3


---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80 
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80

5.2.2.4 TCP probe example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: ng-deploy-80
    #matchExpressions:
    #  - {key: app, operator: In, values: [ng-deploy-80,ng-rs-81]}
  template:
    metadata:
      labels:
        app: ng-deploy-80
    spec:
      containers:
      - name: ng-deploy-80
        image: nginx:1.17.5 
        ports:
        - containerPort: 80
        livenessProbe:    # TCP probe; in practice both probes are usually configured
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3

        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      
---
apiVersion: v1
kind: Service
metadata:
  name: ng-deploy-80 
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 40012
    protocol: TCP
  type: NodePort
  selector:
    app: ng-deploy-80

5.2.2.5 ExecAction probe example

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-deployment
spec:
  replicas: 1
  selector:
    matchLabels: #rs or deployment
      app: redis-deploy-6379
    #matchExpressions:
    #  - {key: app, operator: In, values: [redis-deploy-6379,ng-rs-81]}
  template:
    metadata:
      labels:
        app: redis-deploy-6379
    spec:
      containers:
      - name: redis-deploy-6379
        image: redis
        ports:
        - containerPort: 6379
        readinessProbe:
          exec:
            command:
            - /usr/local/bin/redis-cli
            - quit
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3

        livenessProbe:
          exec:
            command:
            - /usr/local/bin/redis-cli
            - quit
          initialDelaySeconds: 5
          periodSeconds: 3
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 3
      
---
apiVersion: v1
kind: Service
metadata:
  name: redis-deploy-6379 
spec:
  ports:
  - name: http
    port: 6379
    targetPort: 6379
    nodePort: 40016
    protocol: TCP
  type: NodePort
  selector:
    app: redis-deploy-6379

If the port check fails more than the specified three consecutive times, the pod status looks as follows:

[figure: pod status after repeated probe failures]

5.2.2.6 livenessProbe vs. readinessProbe

  • The configuration parameters are identical.

  • livenessProbe: consecutive probe failures restart/recreate the pod; readinessProbe never restarts or recreates it.

  • livenessProbe: after the specified number of consecutive failures the container is put into CrashLoopBackOff and becomes unavailable; readinessProbe does not do this.

  • readinessProbe: consecutive probe failures remove the pod from the Service's endpoints; livenessProbe does not do that, but it will take the container down.

  • livenessProbe controls whether the pod is restarted; readinessProbe controls whether the pod is added to the Service.

  • Recommendation: configure both probes.

5.3 Pod restart policy

When a pod runs into problems, k8s restarts it automatically to recover the service inside.

  • restartPolicy:
  • Always: when a container fails, k8s restarts it automatically; used with ReplicationController/ReplicaSet/Deployment.

  • OnFailure: when a container fails (stops running with a non-zero exit code), k8s restarts it automatically.

  • Never: the container is never restarted regardless of its state; used with Job or CronJob.

containers:
- name: magedu-tomcat-app1-container
  image: harbor.magedu.local/magedu/tomcat-app1:v1
  #command: ["/apps/tomcat/bin/run_tomcat.sh"]
  #imagePullPolicy: IfNotPresent
  imagePullPolicy: Always
  ports:
  - containerPort: 8080
    protocol: TCP
    name: http
  env:
  - name: "password"
    value: "123456"
  - name: "age"
    value: "18"
  resources:
    limits:
      cpu: 1
      memory: "512Mi"
    requests:
      cpu: 500m
      memory: "512Mi"
restartPolicy: Always # note: restartPolicy is a pod-level field (a sibling of containers), not a per-container field

5.4 Image pull policy

https://kubernetes.io/zh/docs/concepts/configuration/overview/

imagePullPolicy: IfNotPresent # pull from the registry only when the node does not have the image; otherwise use the local copy.
imagePullPolicy: Always # re-pull the image every time the pod is recreated.
imagePullPolicy: Never # never pull from the registry; use only local images.
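
In a manifest the policy is set per container; a minimal sketch (the container name and image are placeholders):

containers:
- name: app
  image: harbor.k8s.local/k8s/tomcat-app1:v1
  imagePullPolicy: IfNotPresent # pull only when the node does not already have the image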