1. Canary Release #

1.1 Prepare the New Version's Service #

cp deployment-user-v1.yaml deployment-user-v2.yaml
apiVersion: apps/v1  # API version
kind: Deployment     # resource type
metadata:
+  name: user-v2     # resource name
spec:
  selector:
    matchLabels:
+      app: user-v2 # tells the Deployment which Pods to control and manage; matchLabels matches against the Pods' labels
  replicas: 3 # number of Pod replicas
  template:
    metadata:
      labels:
+        app: user-v2 # the Pods' label
    spec:   # spec of the Pods created in this group
      containers:
      - name: nginx # container name
+        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v2
        ports:
        - containerPort: 80 # port exposed by the container

service-user-v2.yaml

apiVersion: v1
kind: Service
metadata:
+  name: service-user-v2
spec:
  selector:
+    app: user-v2
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
kubectl apply -f deployment-user-v2.yaml -f service-user-v2.yaml

1.3 Split Traffic by Request Header #

vi ingress-gray.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: user-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/canary: "true"
+    nginx.ingress.kubernetes.io/canary-by-header: "name"
+    nginx.ingress.kubernetes.io/canary-by-header-value: "vip"
spec:
  rules:
  - http:
      paths: 
       - backend:
          serviceName: service-user-v2
          servicePort: 80
  backend:
     serviceName: service-user-v2
     servicePort: 80
kubectl apply -f ingress-gray.yaml
curl --header "name: vip" http://172.31.178.169:31234/user

1.4 Split Traffic by Weight #

vi ingress-gray.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: user-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    nginx.ingress.kubernetes.io/canary: "true"
+    nginx.ingress.kubernetes.io/canary-weight: "50"
spec:
  rules:
  - http:
      paths: 
       - backend:
          serviceName: service-user-v2
          servicePort: 80
  backend:
     serviceName: service-user-v2
     servicePort: 80
kubectl apply -f ingress-gray.yaml
for ((i=1; i<=10; i++)); do curl http://172.31.178.169:31234/user; done

1.5 Priority #

When several canary annotations are set on the same Ingress, ingress-nginx evaluates them in a fixed order of priority: canary-by-header takes precedence over canary-by-cookie, which takes precedence over canary-weight.
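A hedged sketch (not from the original walkthrough) combining both rules on one canary Ingress: requests carrying the header name: vip always hit v2, while all other traffic is split 10/90 by weight.

annotations:
  nginx.ingress.kubernetes.io/canary: "true"
  nginx.ingress.kubernetes.io/canary-by-header: "name"
  nginx.ingress.kubernetes.io/canary-by-header-value: "vip"
  nginx.ingress.kubernetes.io/canary-weight: "10"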

2. Rolling Update #

2.1 Release Flow and Strategy #

2.2 Configuration File #

deployment-user-v1.yaml

apiVersion: apps/v1  # API version
kind: Deployment     # resource type
metadata:
  name: user-v1     # resource name
spec:
  minReadySeconds: 1
+ strategy:
+   type: RollingUpdate
+   rollingUpdate:
+     maxSurge: 1
+     maxUnavailable: 0
+ selector:
+   matchLabels:
+     app: user-v1 # tells the Deployment which Pods to control and manage; matchLabels matches against the Pods' labels
  replicas: 10 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # the Pods' label
    spec:   # spec of the Pods created in this group
      containers:
      - name: nginx # container name
+       image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
        ports:
        - containerPort: 80 # port exposed by the container
| Parameter | Meaning |
| --- | --- |
| minReadySeconds | Delay, in seconds (default 0), before a new container is considered ready to receive traffic. If unset, Kubernetes treats the container as usable as soon as it starts; setting it postpones the traffic cut-over. |
| strategy.type = RollingUpdate | The ReplicaSet release type. RollingUpdate declares a rolling release and is also the default. |
| strategy.rollingUpdate.maxSurge | Maximum number of extra Pods, as a number or a percentage. With maxSurge: 1 and replicas: 10, at most 10 + 1 Pods exist during the release (the extra Pod is the newly created one, not yet available during the transition). Must not be 0 when maxUnavailable is 0. |
| strategy.rollingUpdate.maxUnavailable | Maximum number of unavailable Pods during the upgrade, as a number or a percentage. Must not be 0 when maxSurge is 0. |
kubectl apply -f ./deployment-user-v1.yaml
deployment.apps/user-v1 configured
kubectl rollout status deployment/user-v1
Waiting for deployment "user-v1" rollout to finish: 3 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 3 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
Waiting for deployment "user-v1" rollout to finish: 4 of 10 updated replicas are available...
deployment "user-v1" successfully rolled out

3. Service Availability Probes #

3.1 What Is a Health Check? #

3.2 What Is a Service Probe? #

3.2.1 Liveness Probe (LivenessProbe) #

3.2.2 Readiness Probe (ReadinessProbe) #

3.2.3 Startup Probe (StartupProbe) #

| Probe | When It Fires | What It Checks | Reaction to a Failed Check |
| --- | --- | --- | --- |
| Startup probe | While the Pod is running | Whether the service has started successfully | The container is killed and restarted |
| Liveness probe | While the Pod is running | Whether the service has crashed and needs a restart | The container is killed and restarted |
| Readiness probe | While the Pod is running | Whether the service may receive traffic | The Pod is removed from traffic scheduling; it is not killed or restarted |
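The examples below demonstrate exec, TCP, and HTTP probes as liveness/readiness probes; since the startup probe gets no example of its own, here is a minimal sketch (hypothetical file startup-probe.yaml, nginx image assumed, not from the original):

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe
spec:
  containers:
  - name: startup-probe
    image: nginx
    ports:
    - containerPort: 80
    startupProbe:
      httpGet:
        path: /
        port: 80
      failureThreshold: 30 # tolerate up to 30 * 10s = 300s of startup time
      periodSeconds: 10    # liveness/readiness checks begin only after this probe succeeds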

3.3 Probe Mechanisms #

3.3.1 ExecAction #

vi shell-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: registry.aliyuncs.com/google_containers/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
kubectl apply -f shell-probe.yaml
kubectl get pods | grep liveness-exec
kubectl describe pods liveness-exec

Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  2m44s                default-scheduler  Successfully assigned default/liveness-exec to node1
  Normal   Pulled     2m41s                kubelet            Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 1.669600584s
  Normal   Pulled     86s                  kubelet            Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 605.008964ms
  Warning  Unhealthy  41s (x6 over 2m6s)   kubelet            Liveness probe failed: cat: can't open '/tmp/healthy': No such file or directory
  Normal   Killing    41s (x2 over 116s)   kubelet            Container liveness failed liveness probe, will be restarted
  Normal   Created    11s (x3 over 2m41s)  kubelet            Created container liveness
  Normal   Started    11s (x3 over 2m41s)  kubelet            Started container liveness
  Normal   Pulling    11s (x3 over 2m43s)  kubelet            Pulling image "registry.aliyuncs.com/google_containers/busybox"
  Normal   Pulled     11s                  kubelet            Successfully pulled image "registry.aliyuncs.com/google_containers/busybox" in 521.70892ms

3.3.2 TCPSocketAction #

tcp-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  name: tcp-probe
  labels:
    app: tcp-probe
spec:
  containers:
  - name: tcp-probe
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
kubectl apply -f tcp-probe.yaml 
kubectl get pods | grep tcp-probe
kubectl describe pods tcp-probe
kubectl exec -it tcp-probe -- /bin/sh
apt-get update
apt-get install vim -y
vi /etc/nginx/conf.d/default.conf
# change "listen 80;" to "listen 8080;"
nginx -s reload
kubectl describe pod tcp-probe
Warning  Unhealthy  6s    kubelet            Readiness probe failed: dial tcp 10.244.1.47:80: connect: connection refused

3.3.3 HTTPGetAction #

vi http-probe.yaml

apiVersion: v1
kind: Pod
metadata:
  labels:
    test: http-probe
  name: http-probe
spec:
  containers:
  - name: http-probe
    image: registry.cn-beijing.aliyuncs.com/zhangrenyang/http-probe:1.0.0
    livenessProbe:
      httpGet:
        path: /liveness
        port: 3000
        httpHeaders:
        - name: source
          value: probe
      initialDelaySeconds: 3
      periodSeconds: 3
kubectl apply -f ./http-probe.yaml
kubectl describe pods http-probe
Normal   Killing    5s                kubelet            Container http-probe failed liveness probe, will be restarted
docker pull registry.cn-beijing.aliyuncs.com/zhangrenyang/http-probe:1.0.0
kubectl replace --force -f http-probe.yaml 

Dockerfile

FROM node
COPY ./app /app
WORKDIR /app
EXPOSE 3000
CMD node index.js
index.js

let http = require('http');
let start = Date.now();
http.createServer(function (req, res) {
  if (req.url === '/liveness') {
    let value = req.headers['source'];
    if (value === 'probe') {
      // Probe requests succeed for the first 10 seconds and then start
      // returning 500, so the liveness probe eventually restarts the container.
      let duration = Date.now() - start;
      if (duration > 10 * 1000) {
        res.statusCode = 500;
        res.end('error');
      } else {
        res.statusCode = 200;
        res.end('success');
      }
    } else {
      res.statusCode = 200;
      res.end('liveness');
    }
  } else {
    res.statusCode = 200;
    res.end('liveness');
  }
}).listen(3000, function () { console.log('http server started on 3000'); });
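To watch the behavior the probe exercises, the server can be run and hit by hand (a hypothetical local run; the 10-second window comes from the code above):

node index.js &
curl -H "source: probe" http://localhost:3000/liveness  # "success" within the first 10s
sleep 11
curl -H "source: probe" http://localhost:3000/liveness  # "error" with HTTP 500 afterwards
curl http://localhost:3000/liveness                     # "liveness" without the probe header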

4. Storing Confidential Information #

4.1 What Is a Secret #

4.2 The Opaque Type #

4.2.1 Create from the Command Line #

kubectl create secret generic mysql-account --from-literal=username=zhufeng --from-literal=password=123456
kubectl get secret
| Field | Meaning |
| --- | --- |
| NAME | Name of the Secret |
| TYPE | Type of the Secret |
| DATA | Number of stored entries |
| AGE | Time since creation |

// edit the values
kubectl edit secret mysql-account
// output in YAML format
kubectl get secret mysql-account -o yaml
// output in JSON format
kubectl get secret mysql-account -o json
// decode a Base64 value
echo MTIzNDU2 | base64 -d

4.2.2 Create from a Config File #

mysql-account.yaml

apiVersion: v1
kind: Secret
metadata:
  name: mysql-account
stringData:
  username: root
  password: root
type: Opaque
kubectl apply -f mysql-account.yaml 
secret/mysql-account created
kubectl get secret mysql-account -o yaml
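Although the manifest uses stringData for convenience, the saved object stores the values base64-encoded under data; decoding one of them recovers the plain text (sketch):

echo cm9vdA== | base64 -d  # -> root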

4.3 Private Registry Authentication #

4.3.1 Create from the Command Line #

kubectl create secret docker-registry private-registry \
--docker-username=[username] \
--docker-password=[password] \
--docker-email=[email] \
--docker-server=[private registry address]
// inspect the registry credential secret
kubectl get secret private-registry -o yaml
echo [value] | base64 -d

4.3.2 Create from a File #

vi private-registry-file.yaml

apiVersion: v1
kind: Secret
metadata:
  name: private-registry-file
data:
  .dockerconfigjson: eyJhdXRocyI6eyJodHRwczo
type: kubernetes.io/dockerconfigjson
kubectl apply -f ./private-registry-file.yaml
kubectl get secret private-registry-file -o yaml

4.4 Usage #

4.4.1 Mounting as a Volume #

apiVersion: apps/v1  # API version
kind: Deployment     # resource type
metadata:
  name: user-v1     # resource name
spec:
  minReadySeconds: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: user-v1 # tells the Deployment which Pods to control and manage; matchLabels matches against the Pods' labels
+ replicas: 1 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # the Pods' label
    spec:   # spec of the Pods created in this group
+     volumes:
+       - name: mysql-account
+         secret:
+           secretName: mysql-account
      containers:
      - name: nginx # container name
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
+       volumeMounts:
+       - name: mysql-account
+         mountPath: /mysql-account
+         readOnly: true
        ports:
        - containerPort: 80 # port exposed by the container
kubectl apply -f deployment-user-v1.yaml
kubectl describe pods user-v1-b88799944-tjgrs
kubectl exec -it user-v1-b88799944-tjgrs -- ls /mysql-account
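Every key of the Secret is projected as a file under the mount path, with the decoded value as its content; for example (same illustrative pod name as above):

kubectl exec -it user-v1-b88799944-tjgrs -- cat /mysql-account/username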

4.4.2 Injecting as Environment Variables #

deployment-user-v1.yaml

apiVersion: apps/v1  # API version
kind: Deployment     # resource type
metadata:
  name: user-v1     # resource name
spec:
  minReadySeconds: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  selector:
    matchLabels:
      app: user-v1 # tells the Deployment which Pods to control and manage; matchLabels matches against the Pods' labels
  replicas: 1 # number of Pod replicas
  template:
    metadata:
      labels:
        app: user-v1 # the Pods' label
    spec:   # spec of the Pods created in this group
      volumes:
        - name: mysql-account
          secret:
            secretName: mysql-account
      containers:
      - name: nginx # container name
+       env:
+       - name: USERNAME
+         valueFrom:
+           secretKeyRef:
+             name: mysql-account
+             key: username
+       - name: PASSWORD
+         valueFrom:
+           secretKeyRef:
+             name: mysql-account
+             key: password
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
        volumeMounts:
        - name: mysql-account
          mountPath: /mysql-account
          readOnly: true
        ports:
        - containerPort: 80 # port exposed by the container
kubectl apply -f deployment-user-v1.yaml 
kubectl get pods
kubectl describe pod user-v1-688486759f-9snpx
kubectl exec -it user-v1-688486759f-9snpx -- env | grep USERNAME

4.4.3 Docker Private Registry Authentication #

vi v4.yaml

image: [private registry address]/[image name]:[image tag]
kubectl apply -f v4.yaml
kubectl get pods
kubectl describe pods [POD_NAME]

vi v4.yaml

+     imagePullSecrets:
+     - name: private-registry-file
      containers:
      - name: nginx
kubectl apply -f v4.yaml

5. Service Discovery #

5.1 Service Discovery #

5.2 CoreDNS #

kubectl -n kube-system get all -l k8s-app=kube-dns -o wide

5.3 Service Discovery Rules #

kubectl get pods
kubectl get svc
kubectl exec -it user-v1-688486759f-9snpx -- /bin/sh
curl http://service-user-v2
curl http://service-user-v2.default.svc.cluster.local
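Both forms work because CoreDNS resolves Services under the pattern [service].[namespace].svc.cluster.local. If nslookup happens to be available in the container, the record can be inspected directly (sketch):

nslookup service-user-v2
nslookup service-user-v2.default.svc.cluster.local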

6. Centrally Managing Service Environment Variables #

6.1 What Is a ConfigMap #

6.2 Creation #

6.2.1 Create from the Command Line #

kubectl create configmap [config_name] --from-literal=[key]=[value]
kubectl create configmap mysql-config --from-literal=MYSQL_HOST=192.168.1.172 --from-literal=MYSQL_PORT=3306
kubectl get cm
kubectl describe cm mysql-config

6.2.2 Create from a Manifest #

mysql-config-file.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config-file
data:
  MYSQL_HOST: "192.168.1.172"
  MYSQL_PORT: "3306"
kubectl apply -f ./mysql-config-file.yaml
kubectl describe cm mysql-config-file

6.2.3 Create from a File #

kubectl create configmap [configname] --from-file=[key]=[file_path]

env.config

HOST: 192.168.0.1
PORT: 8080
kubectl create configmap env-from-file --from-file=env=./env.config
configmap/env-from-file created
kubectl get cm env-from-file -o yaml

6.2.4 Create from a Directory #

kubectl create configmap [configname] --from-file=[dir_path]
mkdir env && cd ./env
echo 'local' > env.local
echo 'test' > env.test
echo 'prod' > env.prod
kubectl create configmap env-from-dir --from-file=./
kubectl get cm env-from-dir -o yaml

6.3 Usage #

6.3.1 Injecting as Environment Variables #

containers:
  - name: nginx # container name
+   env:
+     - name: MYSQL_HOST
+       valueFrom:
+         configMapKeyRef:
+           name: mysql-config
+           key: MYSQL_HOST
kubectl apply -f ./v1.yaml
//kubectl exec -it [POD_NAME] -- env | grep MYSQL_HOST
kubectl exec -it user-v1-744f48d6bd-9klqr -- env | grep MYSQL_HOST
kubectl exec -it user-v1-744f48d6bd-9klqr -- env | grep MYSQL_PORT
      containers:
      - name: nginx # container name
        env:
+       envFrom:
+       - configMapRef:
+           name: mysql-config
+           optional: true  
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
        volumeMounts:
        - name: mysql-account
          mountPath: /mysql-account
          readOnly: true
        ports:
        - containerPort: 80 # port exposed by the container
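After applying the change, every key of mysql-config should show up in the container environment; a quick check (pod name is illustrative):

kubectl apply -f deployment-user-v1.yaml
kubectl exec -it [POD_NAME] -- env | grep MYSQL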

6.3.2 Mounting as a Volume #

  template:
    metadata:
      labels:
        app: user-v1 # the Pods' label
    spec:   # spec of the Pods created in this group
      volumes:
        - name: mysql-account
          secret:
            secretName: mysql-account
+       - name: envfiles
+         configMap:
+           name: env-from-dir
      containers:
      - name: nginx # container name
        env:
        - name: USERNAME
          valueFrom:
            secretKeyRef:
              name: mysql-account
              key: username
        - name: PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-account
              key: password
        envFrom:
        - configMapRef:
            name: mysql-config
            optional: true  
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 # which image to use
        volumeMounts:
        - name: mysql-account
          mountPath: /mysql-account
          readOnly: true
+       - name: envfiles
+         mountPath: /envfiles
+         readOnly: true
        ports:
        - containerPort: 80 # port exposed by the container
kubectl apply -f deployment-user-v1.yaml 
kubectl get pods
kubectl describe pod user-v1-79b8768f54-r56kd
kubectl exec -it user-v1-79b8768f54-r56kd -- ls /envfiles
    spec:   # spec of the Pods created in this group
      volumes:
        - name: mysql-account
          secret:
            secretName: mysql-account
        - name: envfiles
          configMap:
            name: env-from-dir
+           items:
+           - key: env.local
+             path: env.local
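With items set, only the listed keys are projected into the volume; after re-applying, the mount should contain just env.local (sketch):

kubectl apply -f deployment-user-v1.yaml
kubectl exec -it [POD_NAME] -- ls /envfiles  # env.local only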

7. Taints and Tolerations #

7.1 What Are Taints and Tolerations? #

kubectl taint nodes [Node_Name] [key]=[value]:NoSchedule
// add a taint
kubectl taint nodes node1 user-v4=true:NoSchedule
// view a node's taints
kubectl describe node node1
kubectl describe node master
Taints: node-role.kubernetes.io/master:NoSchedule
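A taint is removed by re-issuing the same command with a trailing minus sign (standard kubectl syntax):

kubectl taint nodes node1 user-v4=true:NoSchedule-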

vi deployment-user-v4.yaml

apiVersion: apps/v1 
kind: Deployment 
metadata:
  name: user-v4 
spec:
  minReadySeconds: 1
  selector:
    matchLabels:
      app: user-v4 
  replicas: 1
  template:
    metadata:
      labels:
        app: user-v4 
    spec:
      containers:
      - name: nginx
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 
        ports:
        - containerPort: 80
kubectl apply -f deployment-user-v4.yaml
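On a cluster where every schedulable node carries a taint (node1 now has user-v4=true:NoSchedule and the master keeps its default NoSchedule taint), this Deployment declares no toleration, so its Pod is expected to stay Pending; verify with:

kubectl get pods | grep user-v4   # STATUS should be Pending
kubectl describe pods [POD_NAME]  # Events should report FailedScheduling due to the taints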

vi deployment-user-v4.yaml

apiVersion: apps/v1 
kind: Deployment 
metadata:
  name: user-v4 
spec:
  minReadySeconds: 1
  selector:
    matchLabels:
      app: user-v4 
  replicas: 1
  template:
    metadata:
      labels:
        app: user-v4 
    spec:  
+     tolerations:
+     - key: "user-v4"
+       operator: "Equal"
+       value: "true"
+       effect: "NoSchedule"
      containers:
      - name: nginx
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 
        ports:
        - containerPort: 80
kubectl apply -f deployment-user-v4.yaml
kubectl describe node node1
kubectl describe node master

vi deployment-user-v4.yaml

apiVersion: apps/v1 
kind: Deployment 
metadata:
  name: user-v4 
spec:
  minReadySeconds: 1
  selector:
    matchLabels:
      app: user-v4 
  replicas: 1 
  template:
    metadata:
      labels:
        app: user-v4 
    spec:  
+     tolerations:
+     - key: "node-role.kubernetes.io/master"
+       operator: "Exists"
+       effect: "NoSchedule"
      containers:
      - name: nginx
        image: registry.cn-beijing.aliyuncs.com/zhangrenyang/nginx:user-v3 
        ports:
        - containerPort: 80
kubectl apply -f deployment-user-v4.yaml

apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image:
  imagePullSecrets:
  - name: har