Deployment

In the previous section we learned about the ReplicaSet controller, which maintains a specified number of Pods running in the cluster. In practice, however, we rarely use an RS directly; instead we use a higher-level controller, such as today's protagonist, the Deployment. One very important capability a Deployment adds is the rolling update of Pods: to release a new version of an application, we only need to change the container image in the Deployment's Pod template, and the Deployment will upgrade the existing Pods with a **rolling update (Rolling Update)**. This matters because online services must not be interrupted, which makes rolling updates essential. The Deployment implements this capability on top of the ReplicaSet resource we studied in the last lesson; intuitively, each Deployment corresponds to one deployment (release) of an application in the cluster.
Deployment is the most commonly used Kubernetes workload controller (Workload Controllers). It is an abstraction that deploys and manages Pods at a higher level; other controllers include DaemonSet, StatefulSet, and so on.
The main functions of a Deployment:
- managing Pods and ReplicaSets
- rollout, replica count configuration, rolling upgrades, and rollbacks
- declarative updates, e.g. updating only the image
Typical use cases: websites, APIs, microservices.

The application lifecycle management flow of a Deployment:

How the controller's reconciliation loop works:

💘 Hands-on: horizontal scaling and rolling updates with a Deployment - 2022.12.15 (tested successfully)
- Test environment
1. Windows 10, VMware Workstation VMs
2. k8s cluster: 3 CentOS 7.6 (1810) VMs, 2 master nodes, 1 node
3. k8s version: v1.20
4. CONTAINER-RUNTIME: containerd v1.6.10
- Test software: none
A Deployment resource has almost exactly the same format as a ReplicaSet. The following manifest is a typical Deployment:
```yaml
# nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
  labels: # these labels only mark the Deployment object itself; they are optional, since a Deployment is already a top-level object that nothing else manages
    role: deploy
spec:
  replicas: 3 # desired number of Pod replicas, defaults to 1
  selector: # label selector, must match the labels in the Pod template
    matchLabels: # matchExpressions can also be used here
      app: nginx
  template: # Pod template
    metadata:
      labels:
        app: nginx # must include the matchLabels labels above
    spec:
      containers:
      - name: nginx
        image: nginx:latest # avoid the latest tag in production
        ports:
        - containerPort: 80
```
- Compared with the ReplicaSet example, we only changed the kind to Deployment. Let's create this resource object first:
```bash
[root@master1 ~]#kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx-deploy created
[root@master1 ~]#kubectl get deployments.apps
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   3/3     3            3           23s
```
- After creation, check the Pod status:
```bash
[root@master1 ~]#kubectl get po
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-cd55c47f5-gwrb5   1/1     Running   0          48s
nginx-deploy-cd55c47f5-h6spx   1/1     Running   0          48s
nginx-deploy-cd55c47f5-x5mgl   1/1     Running   0          48s
```
So far this looks no different from the RS object we saw before: the replica count is maintained according to spec.replicas.
- Let's describe one of the Pods:
```bash
[root@master1 ~]#kubectl describe po nginx-deploy-cd55c47f5-gwrb5
Name:             nginx-deploy-cd55c47f5-gwrb5
Namespace:        default
Priority:         0
Service Account:  default
Node:             node1/172.29.9.62
Start Time:       Thu, 15 Dec 2022 07:24:20 +0800
Labels:           app=nginx
                  pod-template-hash=cd55c47f5
Annotations:      <none>
Status:           Running
IP:               10.244.1.16
IPs:
  IP:           10.244.1.16
Controlled By:  ReplicaSet/nginx-deploy-cd55c47f5
……
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  103s  default-scheduler  Successfully assigned default/nginx-deploy-cd55c47f5-gwrb5 to node1
  Normal  Pulling    102s  kubelet            Pulling image "nginx:latest"
  Normal  Pulled     87s   kubelet            Successfully pulled image "nginx:latest" in 15.23603838s
  Normal  Created    87s   kubelet            Created container nginx
  Normal  Started    87s   kubelet            Started container nginx
```
Look carefully at the line Controlled By: ReplicaSet/nginx-deploy-cd55c47f5. What does it mean? It says this Pod's controller is a ReplicaSet object. But didn't we create a Deployment? Why is the Pod controlled by an RS?
- Let's then look at the details of the corresponding RS object:
```bash
[root@master1 ~]#kubectl describe rs nginx-deploy-cd55c47f5
Name:           nginx-deploy-cd55c47f5
Namespace:      default
Selector:       app=nginx,pod-template-hash=cd55c47f5
Labels:         app=nginx
                pod-template-hash=cd55c47f5
Annotations:    deployment.kubernetes.io/desired-replicas: 3
                deployment.kubernetes.io/max-replicas: 4
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/nginx-deploy
Replicas:       3 current / 3 desired
Pods Status:    3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
           pod-template-hash=cd55c47f5
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  3m41s  replicaset-controller  Created pod: nginx-deploy-cd55c47f5-gwrb5
  Normal  SuccessfulCreate  3m41s  replicaset-controller  Created pod: nginx-deploy-cd55c47f5-x5mgl
  Normal  SuccessfulCreate  3m41s  replicaset-controller  Created pod: nginx-deploy-cd55c47f5-h6spx
```
Note the line Controlled By: Deployment/nginx-deploy: the RS that controls our Pods is itself controlled by our Deployment.
- The following figure illustrates the relationship between Pod, ReplicaSet, and Deployment:

The figure clearly shows how a Deployment with 3 replicas relates to its ReplicaSet and Pods: control is applied layer by layer. The ReplicaSet's job is, as before, to keep the Pod count at the specified number — which is also why restartPolicy=Always is the only policy allowed for containers in a Deployment: only when containers keep themselves in the Running state can the ReplicaSet meaningfully adjust the Pod count. The Deployment, in turn, implements horizontal scaling and rolling updates by managing the number and attributes of its ReplicaSets.
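You can verify this ownership chain on a live cluster by reading the ownerReferences metadata directly; a small sketch using the object names from the run above:

```bash
# Pod -> ReplicaSet: who owns the Pod?
kubectl get pod nginx-deploy-cd55c47f5-gwrb5 \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
# ReplicaSet/nginx-deploy-cd55c47f5

# ReplicaSet -> Deployment: who owns the RS?
kubectl get rs nginx-deploy-cd55c47f5 \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}{"\n"}'
# Deployment/nginx-deploy
```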
1. Horizontal scaling
**Horizontal scaling** is simple, because the ReplicaSet already implements it: the Deployment controller only needs to change the Pod replica count of the ReplicaSet it controls. For example, if we raise the replica count to 4, the ReplicaSet owned by the Deployment automatically creates one new Pod, scaling the application out horizontally.

- First, look at the current deployment and rs replica counts:
```bash
[root@master1 ~]#kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   3/3     3            3           9m48s   # 3 replicas here
[root@master1 ~]#kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
nginx-deploy-cd55c47f5   3         3         3       9m50s  # also 3 replicas, i.e. the deployment and rs replica counts stay in sync
```
- We can use a new command, kubectl scale, to perform this operation. Scale the Pods up:
```bash
[root@master1 ~]#kubectl scale deployment nginx-deploy --replicas=4
deployment.apps/nginx-deploy scaled
```
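As an aside, kubectl scale is imperative; the same change can also be made declaratively. A sketch of two equivalent alternatives (standard kubectl usage, not part of the original run):

```bash
# patch the replica count in place
kubectl patch deployment nginx-deploy -p '{"spec":{"replicas":4}}'

# or edit replicas: 4 in nginx-deploy.yaml and re-apply it
kubectl apply -f nginx-deploy.yaml
```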
- After scaling, check the current RS object:
```bash
[root@master1 ~]#kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
nginx-deploy-cd55c47f5   4         4         3       9m   # desired count is already 4; one Pod is not ready yet, so READY is still 3
[root@master1 ~]#kubectl get rs
NAME                     DESIRED   CURRENT   READY   AGE
nginx-deploy-cd55c47f5   4         4         4       11m

[root@master1 ~]#kubectl get deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   4/4     4            4           11m
[root@master1 ~]#kubectl get po
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-cd55c47f5-gwrb5   1/1     Running   0          11m
nginx-deploy-cd55c47f5-h6spx   1/1     Running   0          11m
nginx-deploy-cd55c47f5-vt427   1/1     Running   0          40s
nginx-deploy-cd55c47f5-x5mgl   1/1     Running   0          11m
```
The desired Pod count is now 4; the new Pod briefly hadn't finished starting, which is why READY showed 3 at first.
- Similarly, look at the RS details:
```bash
[root@master1 ~]#kubectl describe rs nginx-deploy-cd55c47f5
Name:           nginx-deploy-cd55c47f5
Namespace:      default
Selector:       app=nginx,pod-template-hash=cd55c47f5
Labels:         app=nginx
                pod-template-hash=cd55c47f5
Annotations:    deployment.kubernetes.io/desired-replicas: 4
                deployment.kubernetes.io/max-replicas: 5
                deployment.kubernetes.io/revision: 1
Controlled By:  Deployment/nginx-deploy
Replicas:       4 current / 4 desired
Pods Status:    4 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=nginx
           pod-template-hash=cd55c47f5
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  13m   replicaset-controller  Created pod: nginx-deploy-cd55c47f5-gwrb5
  Normal  SuccessfulCreate  13m   replicaset-controller  Created pod: nginx-deploy-cd55c47f5-x5mgl
  Normal  SuccessfulCreate  13m   replicaset-controller  Created pod: nginx-deploy-cd55c47f5-h6spx
  Normal  SuccessfulCreate  3m2s  replicaset-controller  Created pod: nginx-deploy-cd55c47f5-vt427
```
The ReplicaSet controller created one new Pod.
- The Deployment's events also show that the scale-out completed:
```bash
[root@master1 ~]#kubectl describe deploy nginx-deploy
Name:                   nginx-deploy
Namespace:              default
CreationTimestamp:      Thu, 15 Dec 2022 07:24:20 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=nginx
Replicas:               4 desired | 4 updated | 4 total | 4 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=nginx
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deploy-cd55c47f5 (4/4 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  14m    deployment-controller  Scaled up replica set nginx-deploy-cd55c47f5 to 3
  Normal  ScalingReplicaSet  3m58s  deployment-controller  Scaled up replica set nginx-deploy-cd55c47f5 to 4 from 3
```
- Note: horizontal scaling is not an upgrade, so the revision does not change here. Scaling up/down never creates a new RS; only updating the Pod template (e.g. the image), which triggers a rolling upgrade, creates a new RS and records a new revision.
```bash
[root@master1 ~]#kubectl get po,rs,deploy
NAME                               READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-cd55c47f5-gwrb5   1/1     Running   0          16m
pod/nginx-deploy-cd55c47f5-h6spx   1/1     Running   0          16m
pod/nginx-deploy-cd55c47f5-vt427   1/1     Running   0          5m49s
pod/nginx-deploy-cd55c47f5-x5mgl   1/1     Running   0          16m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deploy-cd55c47f5   4         4         4       16m

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deploy   4/4     4            4           16m
[root@master1 ~]#kubectl rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         <none>
```
Now let's look at the Deployment's rolling updates.
2. Rolling updates
Rolling upgrade is Kubernetes' default Pod update strategy: new-version Pods gradually replace old-version Pods, giving zero-downtime releases that users never notice.
Pods are replaced one at a time, not all at once. A Deployment maintains its ReplicaSet controllers itself; we never operate on those ReplicaSets directly — they are managed by, and private to, the Deployment.



If horizontal scaling were all we needed, there would be no reason to design the Deployment resource at all; a Deployment's most prominent feature is its support for rolling updates.
- First, look at the Deployment's upgrade strategies:
```bash
[root@master1 ~]#kubectl explain deploy.spec.strategy
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: strategy <Object>

DESCRIPTION:
     The deployment strategy to use to replace existing pods with new ones.

     DeploymentStrategy describes how to replace existing pods with new ones.

FIELDS:
   rollingUpdate        <Object>
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.

   type <string>
     Type of deployment. Can be "Recreate" or "RollingUpdate". Default is
     RollingUpdate.   # the default strategy is RollingUpdate
```
- First, change the default strategy to Recreate and the Pod template image to nginx:1.7.9 to see the effect:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
  labels: # these labels only mark the Deployment object itself
    role: deploy

spec:
  replicas: 4 # desired number of Pod replicas
  strategy:
    type: Recreate
  selector: # label selector
    matchLabels:
      app: nginx
      test: course
  template: # Pod template
    metadata:
      labels: # must be consistent with the selector above
        app: nginx
        test: course
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
```
- Before testing, check the current test environment once more:
```bash
[root@master1 ~]#kubectl get po,deploy,rs
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-fd46765d4-8nzmp   1/1     Running   0          9m8s
nginx-deploy-fd46765d4-9rzqt   1/1     Running   0          9m8s
nginx-deploy-fd46765d4-ckdhw   1/1     Running   0          9m8s
nginx-deploy-fd46765d4-tdkjv   1/1     Running   0          2m38s

NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deploy   4/4     4            4           3m15s

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deploy-fd46765d4   4         4         4       3m15s

[root@master1 ~]#kubectl rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         <none>
```
- Now open another terminal and watch the Pods with --watch:
```bash
[root@master1 ~]#kubectl get po --watch
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-fd46765d4-8nzmp   1/1     Running   0          9m34s
nginx-deploy-fd46765d4-9rzqt   1/1     Running   0          9m34s
nginx-deploy-fd46765d4-ckdhw   1/1     Running   0          9m34s
nginx-deploy-fd46765d4-tdkjv   1/1     Running   0          3m4s

-- test Recreate ----
```
- Apply the updated manifest:
```bash
[root@master1 ~]#kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx-deploy configured
```
- Now watch what happens in the terminal that is watching the Pods. You can see that **Recreate** means recreate everything: all old Pods are deleted first, and only then are new-version Pods created from the new image:
```bash
[root@master1 ~]#kubectl get po --watch
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-fd46765d4-8nzmp   1/1     Running   0          9m34s
nginx-deploy-fd46765d4-9rzqt   1/1     Running   0          9m34s
nginx-deploy-fd46765d4-ckdhw   1/1     Running   0          9m34s
nginx-deploy-fd46765d4-tdkjv   1/1     Running   0          3m4s

-- test Recreate ----

nginx-deploy-fd46765d4-tdkjv   1/1     Terminating         0     3m25s
nginx-deploy-fd46765d4-8nzmp   1/1     Terminating         0     9m55s
nginx-deploy-fd46765d4-ckdhw   1/1     Terminating         0     9m55s
nginx-deploy-fd46765d4-9rzqt   1/1     Terminating         0     9m55s
nginx-deploy-fd46765d4-ckdhw   0/1     Terminating         0     9m56s
nginx-deploy-fd46765d4-ckdhw   0/1     Terminating         0     9m56s
nginx-deploy-fd46765d4-ckdhw   0/1     Terminating         0     9m56s
nginx-deploy-fd46765d4-9rzqt   0/1     Terminating         0     9m56s
nginx-deploy-fd46765d4-9rzqt   0/1     Terminating         0     9m56s
nginx-deploy-fd46765d4-9rzqt   0/1     Terminating         0     9m56s
nginx-deploy-fd46765d4-tdkjv   0/1     Terminating         0     3m26s
nginx-deploy-fd46765d4-tdkjv   0/1     Terminating         0     3m26s
nginx-deploy-fd46765d4-tdkjv   0/1     Terminating         0     3m26s
nginx-deploy-fd46765d4-8nzmp   0/1     Terminating         0     9m56s
nginx-deploy-fd46765d4-8nzmp   0/1     Terminating         0     9m56s
nginx-deploy-fd46765d4-8nzmp   0/1     Terminating         0     9m56s   # all old Pods are deleted together
nginx-deploy-6c5ff87cf-4f229   0/1     Pending             0     0s
nginx-deploy-6c5ff87cf-4f229   0/1     Pending             0     0s
nginx-deploy-6c5ff87cf-w2csq   0/1     Pending             0     0s
nginx-deploy-6c5ff87cf-p866j   0/1     Pending             0     0s
nginx-deploy-6c5ff87cf-w2csq   0/1     Pending             0     0s
nginx-deploy-6c5ff87cf-ttm2v   0/1     Pending             0     0s
nginx-deploy-6c5ff87cf-p866j   0/1     Pending             0     0s
nginx-deploy-6c5ff87cf-ttm2v   0/1     Pending             0     0s
nginx-deploy-6c5ff87cf-p866j   0/1     ContainerCreating   0     0s
nginx-deploy-6c5ff87cf-4f229   0/1     ContainerCreating   0     1s
nginx-deploy-6c5ff87cf-w2csq   0/1     ContainerCreating   0     2s
nginx-deploy-6c5ff87cf-ttm2v   0/1     ContainerCreating   0     2s
nginx-deploy-6c5ff87cf-ttm2v   1/1     Running             0     4s
nginx-deploy-6c5ff87cf-p866j   1/1     Running             0     17s
nginx-deploy-6c5ff87cf-w2csq   1/1     Running             0     19s
nginx-deploy-6c5ff87cf-4f229   1/1     Running             0     34s


[root@master1 ~]#kubectl get deploy,rs,po
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deploy   4/4     4            4           11m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deploy-6c5ff87cf   4         4         4       91s   # current rs
replicaset.apps/nginx-deploy-fd46765d4   0         0         0       11m

NAME                               READY   STATUS    RESTARTS   AGE
pod/nginx-deploy-6c5ff87cf-4f229   1/1     Running   0          91s
pod/nginx-deploy-6c5ff87cf-p866j   1/1     Running   0          91s
pod/nginx-deploy-6c5ff87cf-ttm2v   1/1     Running   0          91s
pod/nginx-deploy-6c5ff87cf-w2csq   1/1     Running   0          91s
[root@master1 ~]#


[root@master1 ~]#kubectl rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         <none>
2         <none>   # current revision


# check that the current rs's revision is indeed 2:
[root@master1 ~]#kubectl describe rs nginx-deploy-6c5ff87cf |grep revision
              deployment.kubernetes.io/revision: 2

# and look at the nginx-deploy Deployment's events:
[root@master1 ~]#kubectl describe deployments.apps |tail -8
NewReplicaSet:   nginx-deploy-6c5ff87cf (4/4 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  12m    deployment-controller  Scaled up replica set nginx-deploy-fd46765d4 to 3
  Normal  ScalingReplicaSet  5m46s  deployment-controller  Scaled up replica set nginx-deploy-fd46765d4 to 4
  Normal  ScalingReplicaSet  2m21s  deployment-controller  Scaled down replica set nginx-deploy-fd46765d4 to 0
  Normal  ScalingReplicaSet  2m20s  deployment-controller  Scaled up replica set nginx-deploy-6c5ff87cf to 4
```
That completes the verification of the Recreate strategy; next let's verify the RollingUpdate strategy.
- First, look at the options RollingUpdate can configure:
```bash
[root@master1 ~]#kubectl explain deploy.spec.strategy.rollingUpdate
KIND:     Deployment
VERSION:  apps/v1

RESOURCE: rollingUpdate <Object>

DESCRIPTION:
     Rolling update config params. Present only if DeploymentStrategyType =
     RollingUpdate.

     Spec to control the desired behavior of rolling update.

FIELDS:
   maxSurge     <string>
     The maximum number of pods that can be scheduled above the desired number
     of pods. Value can be an absolute number (ex: 5) or a percentage of desired
     pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number
     is calculated from percentage by rounding up. Defaults to 25%. Example:
     when this is set to 30%, the new ReplicaSet can be scaled up immediately
     when the rolling update starts, such that the total number of old and new
     pods do not exceed 130% of desired pods. Once old pods have been killed,
     new ReplicaSet can be scaled up further, ensuring that total number of pods
     running at any time during the update is at most 130% of desired pods.

   maxUnavailable       <string>
     The maximum number of pods that can be unavailable during the update. Value
     can be an absolute number (ex: 5) or a percentage of desired pods (ex:
     10%). Absolute number is calculated from percentage by rounding down. This
     can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set
     to 30%, the old ReplicaSet can be scaled down to 70% of desired pods
     immediately when the rolling update starts. Once new pods are ready, old
     ReplicaSet can be scaled down further, followed by scaling up the new
     ReplicaSet, ensuring that the total number of pods available at all times
     during the update is at least 70% of desired pods.
```
- Now set the strategy back to RollingUpdate (which is the default anyway), and change the nginx image tag to latest:
```yaml
# nginx-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
  namespace: default
  labels: # these labels only mark the Deployment object itself
    role: deploy

spec:
  replicas: 4 # desired number of Pod replicas
  minReadySeconds: 5
  strategy:
    type: RollingUpdate # the rolling update strategy, also the default
    rollingUpdate:
      maxUnavailable: 1 # maximum number of unavailable Pods
      maxSurge: 1
  selector: # label selector
    matchLabels:
      app: nginx
      test: course
  template: # Pod template
    metadata:
      labels: # must be consistent with the selector above
        app: nginx
        test: course
    spec:
      containers:
      - name: nginx
        image: nginx:latest # this time the nginx tag is changed to latest
        ports:
        - containerPort: 80
```
Compared with before, besides changing the image we also specified the update strategy:
```yaml
minReadySeconds: 5
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 1
    maxUnavailable: 1
```
- **minReadySeconds**: how long Kubernetes waits after a new Pod reports ready before treating it as available during the upgrade. The default is 0: Kubernetes assumes the container serves traffic as soon as it starts, which in some edge cases can route traffic to Pods that are not actually ready. (This refers to the wait time after a new-version container starts.)
- **type=RollingUpdate**: sets the update strategy. The two valid values are Recreate and RollingUpdate (the default); **Recreate** means all Pods are recreated at once.
- **maxSurge**: the maximum number of Pods allowed above the desired count during the upgrade. For example, with maxSurge=1 and replicas=5, Kubernetes starts a new Pod before deleting an old one, so at most 5+1 Pods exist during the upgrade.
- **maxUnavailable**: the maximum number of Pods that may be unavailable during the upgrade. For example, maxUnavailable=1 means at most one Pod is out of service at any moment (this counts old-version Pods that have been taken down).
maxSurge and maxUnavailable must not both be 0; otherwise, once the Pod count matched the desired number, it could not change in either direction and the rolling update could make no progress.
Note in particular: during a rolling upgrade, whether new Pods are created first or old Pods are deleted first depends on the configured maxSurge and maxUnavailable, and both may be set larger than 1. A worked example of the percentage rounding follows below.
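As a worked example of the rounding rules described in the field help above (our manifest uses absolute values, but percentages are common):

```bash
# replicas = 4, maxSurge = 25%, maxUnavailable = 25% (the defaults)
# maxSurge:       ceil(4 * 0.25)  = 1   -> percentages round up
# maxUnavailable: floor(4 * 0.25) = 1   -> percentages round down
# upper bound during the update: 4 + 1 = 5 Pods in total
# lower bound during the update: 4 - 1 = 3 available Pods
```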
- Confirm the test environment again before this run:
```bash
[root@master1 ~]#kubectl get deploy,rs,pod
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/nginx-deploy   4/4     4            4           17m

NAME                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/nginx-deploy-6c5ff87cf   4         4         4       4m50s
replicaset.apps/nginx-deploy-fd46765d4   0         0         0       17m

NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-6c5ff87cf-4f229   1/1     Running   0          3m14s
nginx-deploy-6c5ff87cf-p866j   1/1     Running   0          3m14s
nginx-deploy-6c5ff87cf-ttm2v   1/1     Running   0          3m14s
nginx-deploy-6c5ff87cf-w2csq   1/1     Running   0          3m14s
[root@master1 ~]#


[root@master1 ~]#kubectl rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@master1 ~]#
[root@master1 ~]#kubectl describe rs nginx-deploy-6c5ff87cf|grep revision
              deployment.kubernetes.io/revision: 2
[root@master1 ~]#

[root@master1 ~]#kubectl describe po nginx-deploy-6c5ff87cf-4f229 |grep Image
    Image:          nginx:1.7.9
    Image ID:       sha256:35d28df486f6150fa3174367499d1eb01f22f5a410afe4b9581ac0e0e58b3eaf
[root@master1 ~]#
```
- As before, open another window and monitor the Pods' state changes with --watch:
```bash
[root@master1 ~]#kubectl get po --watch
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-6c5ff87cf-4f229   1/1     Running   0          4m17s
nginx-deploy-6c5ff87cf-p866j   1/1     Running   0          4m17s
nginx-deploy-6c5ff87cf-ttm2v   1/1     Running   0          4m17s
nginx-deploy-6c5ff87cf-w2csq   1/1     Running   0          4m17s

---- test RollingUpdate -----
```
- Now update the Deployment resource directly:
```bash
[root@master1 ~]#kubectl apply -f nginx-deploy.yaml
deployment.apps/nginx-deploy configured
```
The --record flag: we can add an extra --record flag to record the command that made each change, which helps when reviewing the history later.
Note: the command is only recorded when the image is updated via the command line; if you update through a .yaml file, CHANGE-CAUSE will still show <none>. In practice this mechanism is not very useful.
```bash
[root@k8s-master ~]#kubectl set image deployment web777 nginx=nginx:1.20 --record -n test
```
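Since --record only captures command-line changes (and has since been deprecated in newer kubectl releases), a more dependable habit is to set the kubernetes.io/change-cause annotation yourself after each change — rollout history fills the CHANGE-CAUSE column from this annotation. A sketch:

```bash
kubectl annotate deployment nginx-deploy \
  kubernetes.io/change-cause="update nginx image to latest" --overwrite
kubectl rollout history deployment nginx-deploy
```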


Rollback
```bash
# Rollback (recover to a working version after a failed upgrade)
kubectl rollout history deployment/web                 # list release history
kubectl rollout undo deployment/web                    # roll back to the previous version
kubectl rollout undo deployment/web --to-revision=2    # roll back to a specific revision
# Note: a rollback redeploys the state of an earlier release, i.e. all of that version's configuration
```
Note: Kubernetes' native "rollback" is rather limited — you cannot easily see what the previous version actually contained. Large companies usually build their own version-control module. Two ways to improve on it:
1. write a shell script that correlates rollback revision numbers with their versions;
2. if you build a platform, store and query the versions in MySQL.
- After the update, we can run kubectl rollout status to check the state of this rolling update.
First, look at the subcommands that rollout provides:
```bash
[root@master1 ~]#kubectl rollout --help
Manage the rollout of a resource.

 Valid resource types include:

  * deployments
  * daemonsets
  * statefulsets

Examples:
  # Rollback to the previous deployment
  kubectl rollout undo deployment/abc

  # Check the rollout status of a daemonset
  kubectl rollout status daemonset/foo

Available Commands:
  history     View rollout history
  pause       Mark the provided resource as paused
  restart     Restart a resource
  resume      Resume a paused resource
  status      Show the status of the rollout
  undo        Undo a previous rollout

Usage:
  kubectl rollout SUBCOMMAND [options]

Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
[root@master1 ~]#
```
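While this update is in progress, rollout status prints its progress; the output looks roughly like the following (an illustrative sketch, modeled on the same command's output shown later in this section):

```bash
[root@master1 ~]#kubectl rollout status deployment nginx-deploy
Waiting for deployment "nginx-deploy" rollout to finish: 2 out of 4 new replicas have been updated...
```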
- The status above shows that two Pods have already been updated. While a rolling update is in progress, we can pause it with:
```bash
[root@master1 ~]#kubectl rollout pause deployment/nginx-deploy
deployment.apps/nginx-deploy paused
```
The rolling update is now paused; let's look at the Deployment details:
```bash
[root@master1 ~]#kubectl describe deployments.apps nginx-deploy
Name:                   nginx-deploy
Namespace:              default
CreationTimestamp:      Sat, 13 Nov 2021 10:41:09 +0800
Labels:                 role=deploy
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=nginx,test=course
Replicas:               4 desired | 2 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        5
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=nginx
           test=course
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status   Reason
  ----           ------   ------
  Available      True     MinimumReplicasAvailable
  Progressing    Unknown  DeploymentPaused
OldReplicaSets:  nginx-deploy-6c5ff87cf (3/3 replicas created)
NewReplicaSet:   nginx-deploy-595b8954f7 (2/2 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  15m    deployment-controller  Scaled up replica set nginx-deploy-fd46765d4 to 3
  Normal  ScalingReplicaSet  9m8s   deployment-controller  Scaled up replica set nginx-deploy-fd46765d4 to 4
  Normal  ScalingReplicaSet  5m43s  deployment-controller  Scaled down replica set nginx-deploy-fd46765d4 to 0
  Normal  ScalingReplicaSet  5m42s  deployment-controller  Scaled up replica set nginx-deploy-6c5ff87cf to 4
  Normal  ScalingReplicaSet  41s    deployment-controller  Scaled up replica set nginx-deploy-595b8954f7 to 1
  Normal  ScalingReplicaSet  41s    deployment-controller  Scaled down replica set nginx-deploy-6c5ff87cf to 3
  Normal  ScalingReplicaSet  41s    deployment-controller  Scaled up replica set nginx-deploy-595b8954f7 to 2
```

Look carefully at the Events section. The rolling update started by creating a new RS, nginx-deploy-595b8954f7, and scaling it up to 1 Pod, then scaled the old RS nginx-deploy-6c5ff87cf down to 3, then scaled the new RS up to 2; because we paused the rollout at that point, no further events follow. This is exactly the rolling-update process: start a new Pod, kill an old Pod, start another new Pod, and so on until every Pod is new. At this moment the cluster holds 5 Pods, which our strategy allows (maxSurge=1 permits one Pod above the desired 4): two new Pods and three old ones:
```bash
[root@master1 ~]#kubectl get po -l app=nginx
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-595b8954f7-sp5nj   1/1     Running   0          86s
nginx-deploy-595b8954f7-z6pn4   1/1     Running   0          86s
nginx-deploy-6c5ff87cf-p866j    1/1     Running   0          6m27s
nginx-deploy-6c5ff87cf-ttm2v    1/1     Running   0          6m27s
nginx-deploy-6c5ff87cf-w2csq    1/1     Running   0          6m27s
```
The Deployment's status also reflects the current Pod counts (5/4 while paused mid-rollout):
```bash
[root@master1 ~]#kubectl get deployments.apps
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   5/4     2            5           16m
```
- Now resume the rolling update with kubectl rollout resume:
```bash
[root@master1 ~]#kubectl rollout resume deployment nginx-deploy
deployment.apps/nginx-deploy resumed
[root@master1 ~]#
[root@master1 ~]#kubectl rollout status deployment nginx-deploy
deployment "nginx-deploy" successfully rolled out
[root@master1 ~]#
```
The message above shows the rolling update completed successfully; check the resources again:
```bash
[root@master1 ~]#kubectl get po -l app=nginx
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-595b8954f7-p2qht   1/1     Running   0          68s
nginx-deploy-595b8954f7-qw6gz   1/1     Running   0          68s
nginx-deploy-595b8954f7-sp5nj   1/1     Running   0          3m25s
nginx-deploy-595b8954f7-z6pn4   1/1     Running   0          3m25s
[root@master1 ~]#kubectl get deployments.apps
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deploy   4/4     4            4           18m
[root@master1 ~]#
```
Listing the ReplicaSet objects now shows three of them:
```bash
[root@master1 ~]#kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-595b8954f7   4         4         4       3m43s
nginx-deploy-6c5ff87cf    0         0         0       8m44s
nginx-deploy-fd46765d4    0         0         0       18m
```
The RS used before the rolling update now has 0 Pod replicas, while the post-update RS has 4. We can export the old RS object to inspect it:
```bash
[root@master1 ~]#kubectl get rs nginx-deploy-6c5ff87cf -oyaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: "4"
    deployment.kubernetes.io/max-replicas: "5"
    deployment.kubernetes.io/revision: "2"
  creationTimestamp: "2021-11-13T02:51:05Z"
  generation: 4
  labels:
    app: nginx
    pod-template-hash: 6c5ff87cf
    test: course
  name: nginx-deploy-6c5ff87cf
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: Deployment
    name: nginx-deploy
    uid: ac7c0147-2ed9-4e61-91fa-b4bfdf185564
  resourceVersion: "319487"
  uid: d3455813-e6eb-480d-b88b-d4761d16c131
spec:
  replicas: 0
  selector:
    matchLabels:
      app: nginx
      pod-template-hash: 6c5ff87cf
      test: course
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
        pod-template-hash: 6c5ff87cf
        test: course
    spec:
      containers:
      - image: nginx:1.7.9
        imagePullPolicy: Always
        name: nginx
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  observedGeneration: 4
  replicas: 0   # replicas is 0
```
Apart from replicas being 0, this object is identical to what it was before the update. See where this leads? With this RS record preserved, we can roll back — and not only to the previous version, but to any earlier revision.
- How are these versions defined? We can list them with rollout history:
```bash
[root@master1 ~]#kubectl rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>

[root@master1 ~]#
```
In fact the **revisions** recorded in **rollout history** correspond one-to-one with **ReplicaSets**. If you manually delete a ReplicaSet, the corresponding entry disappears from rollout history, and you can no longer roll back to that revision. We can also inspect the details of a single revision:
```bash
[root@master1 ~]#kubectl rollout history deployment nginx-deploy --revision=2
deployment.apps/nginx-deploy with revision #2
Pod Template:
  Labels:       app=nginx
                pod-template-hash=6c5ff87cf
                test=course
  Containers:
   nginx:
    Image:        nginx:1.7.9
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>

[root@master1 ~]#
```
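Because every revision is backed by a ReplicaSet, you can also print the revision-to-RS mapping directly from the deployment.kubernetes.io/revision annotation each RS carries; a sketch (the label selector matches our Pod template):

```bash
# list each RS with the Deployment revision it represents
kubectl get rs -l app=nginx \
  -o custom-columns='NAME:.metadata.name,REVISION:.metadata.annotations.deployment\.kubernetes\.io/revision'
```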
- First, check which revision is current:
```bash
[root@master1 ~]#kubectl rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
3         <none>

[root@master1 ~]#kubectl get po
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-595b8954f7-p2qht   1/1     Running   0          5m51s
nginx-deploy-595b8954f7-qw6gz   1/1     Running   0          5m51s
nginx-deploy-595b8954f7-sp5nj   1/1     Running   0          8m8s
nginx-deploy-595b8954f7-z6pn4   1/1     Running   0          8m8s
[root@master1 ~]#kubectl describe rs nginx-deploy-595b8954f7 |grep revision
              deployment.kubernetes.io/revision: 3   # the current revision is 3
[root@master1 ~]#
```
- To roll straight back to the version immediately before the current one:
```bash
➜ ~ kubectl rollout undo deployment nginx-deploy
```
- We can also roll back to a specific revision:
```bash
➜ ~ kubectl rollout undo deployment nginx-deploy --to-revision=1
deployment "nginx-deploy" rolled back
```
- Here, let's roll back to revision 1:
```bash
[root@master1 ~]#kubectl rollout undo deployment nginx-deploy --to-revision=1
deployment.apps/nginx-deploy rolled back
```
- During the rollback we can likewise watch its status:
```bash
[root@master1 ~]#kubectl rollout status deployment nginx-deploy
Waiting for deployment "nginx-deploy" rollout to finish: 2 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 2 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 2 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 2 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 3 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 3 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 3 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 3 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 3 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 3 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 3 out of 4 new replicas have been updated...
Waiting for deployment "nginx-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deploy" rollout to finish: 3 of 4 updated replicas are available...
Waiting for deployment "nginx-deploy" rollout to finish: 3 of 4 updated replicas are available...
deployment "nginx-deploy" successfully rolled out
[root@master1 ~]#
```
- Looking at the RS objects now, the Pod replicas have moved back into the earlier RS:
```bash
[root@master1 ~]#kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-595b8954f7   0         0         0       18m
nginx-deploy-6c5ff87cf    0         0         0       23m
nginx-deploy-fd46765d4    4         4         4       33m
[root@master1 ~]#
```
- Note that revisions always increase monotonically, even when rolling back (rolling back to revision 1 produced a new revision 4):
```bash
[root@master1 ~]#kubectl rollout history deployment nginx-deploy
deployment.apps/nginx-deploy
REVISION  CHANGE-CAUSE
2         <none>
3         <none>
4         <none>

[root@master1 ~]#
```
Retaining old versions
In very early Kubernetes versions, all rollout history — that is, all old ReplicaSet objects — was kept by default. There is usually no need to retain every version, since they all live in etcd. The spec.revisionHistoryLimit field controls how many revisions are kept; in current versions it defaults to 10, and you can raise it if you want to keep more.
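A minimal sketch of where the field sits in the manifest (the value 5 here is an arbitrary example):

```yaml
spec:
  revisionHistoryLimit: 5  # keep only the 5 most recent old ReplicaSets
```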
- Let's review what the watch terminal captured:
```bash
[root@master1 ~]#kubectl get po --watch
NAME                           READY   STATUS    RESTARTS   AGE
nginx-deploy-6c5ff87cf-4f229   1/1     Running   0          4m17s
nginx-deploy-6c5ff87cf-p866j   1/1     Running   0          4m17s
nginx-deploy-6c5ff87cf-ttm2v   1/1     Running   0          4m17s
nginx-deploy-6c5ff87cf-w2csq   1/1     Running   0          4m17s

---- test RollingUpdate -----

nginx-deploy-595b8954f7-z6pn4   0/1   Pending             0   0s
nginx-deploy-595b8954f7-z6pn4   0/1   Pending             0   0s
nginx-deploy-6c5ff87cf-4f229    1/1   Terminating         0   5m1s
nginx-deploy-595b8954f7-z6pn4   0/1   ContainerCreating   0   0s
nginx-deploy-595b8954f7-sp5nj   0/1   Pending             0   0s
nginx-deploy-595b8954f7-sp5nj   0/1   Pending             0   0s
nginx-deploy-595b8954f7-sp5nj   0/1   ContainerCreating   0   0s
nginx-deploy-6c5ff87cf-4f229    0/1   Terminating         0   5m2s
nginx-deploy-6c5ff87cf-4f229    0/1   Terminating         0   5m2s
nginx-deploy-6c5ff87cf-4f229    0/1   Terminating         0   5m2s
nginx-deploy-595b8954f7-sp5nj   1/1   Running             0   2s
nginx-deploy-595b8954f7-z6pn4   1/1   Running             0   17s
nginx-deploy-6c5ff87cf-ttm2v    1/1   Terminating         0   7m18s
nginx-deploy-6c5ff87cf-w2csq    1/1   Terminating         0   7m18s
nginx-deploy-595b8954f7-qw6gz   0/1   Pending             0   0s
nginx-deploy-595b8954f7-qw6gz   0/1   Pending             0   0s
nginx-deploy-595b8954f7-p2qht   0/1   Pending             0   0s
nginx-deploy-595b8954f7-p2qht   0/1   Pending             0   0s
nginx-deploy-595b8954f7-qw6gz   0/1   ContainerCreating   0   0s
nginx-deploy-595b8954f7-p2qht   0/1   ContainerCreating   0   0s
nginx-deploy-6c5ff87cf-ttm2v    0/1   Terminating         0   7m19s
nginx-deploy-6c5ff87cf-ttm2v    0/1   Terminating         0   7m19s
nginx-deploy-6c5ff87cf-ttm2v    0/1   Terminating         0   7m19s
nginx-deploy-6c5ff87cf-w2csq    0/1   Terminating         0   7m19s
nginx-deploy-6c5ff87cf-w2csq    0/1   Terminating         0   7m19s
nginx-deploy-6c5ff87cf-w2csq    0/1   Terminating         0   7m19s
nginx-deploy-595b8954f7-p2qht   1/1   Running             0   16s
nginx-deploy-595b8954f7-qw6gz   1/1   Running             0   17s
nginx-deploy-6c5ff87cf-p866j    1/1   Terminating         0   7m40s
nginx-deploy-6c5ff87cf-p866j    0/1   Terminating         0   7m40s
nginx-deploy-6c5ff87cf-p866j    0/1   Terminating         0   7m40s
nginx-deploy-6c5ff87cf-p866j    0/1   Terminating         0   7m40s
```

- kubectl describe deploy nginx-deploy shows the rolling-upgrade sequence even more clearly:
```bash
[root@master1 ~]#kubectl describe deployments.apps nginx-deploy
Name:                   nginx-deploy
Namespace:              default
CreationTimestamp:      Thu, 11 Nov 2021 22:04:31 +0800
Labels:                 role=deploy
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=nginx,test=course
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        5
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=nginx
           test=course
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-deploy-595b8954f7 (3/3 replicas created)
Events:
  Type    Reason             Age  From                   Message
  ----    ------             ---- ----                   -------
  Normal  ScalingReplicaSet  23m  deployment-controller  Scaled down replica set nginx-deploy-6c5ff87cf to 3
  Normal  ScalingReplicaSet  23m  deployment-controller  Scaled up replica set nginx-deploy-595b8954f7 to 1
  Normal  ScalingReplicaSet  23m  deployment-controller  Scaled down replica set nginx-deploy-6c5ff87cf to 2
  Normal  ScalingReplicaSet  23m  deployment-controller  Scaled up replica set nginx-deploy-595b8954f7 to 2
  Normal  ScalingReplicaSet  22m  deployment-controller  Scaled down replica set nginx-deploy-6c5ff87cf to 0
  Normal  ScalingReplicaSet  22m  deployment-controller  Scaled up replica set nginx-deploy-595b8954f7 to 3  # end state: deployment-controller scales the old rs down to 0 and the new rs up to the desired count
[root@master1 ~]#

[root@master1 ~]#kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
nginx-deploy-595b8954f7   3         3         3       25m
nginx-deploy-6c5ff87cf    0         0         0       9h
nginx-deploy-fd46765d4    0         0         0       9h
[root@master1 ~]#
```
End of test. 😘
Notes on rolling upgrades
If we force a rolling update while the application is still serving traffic, it may be receiving requests at that very moment, so the upgrade can interrupt live requests.
How can we avoid this kind of interruption?
- Use the preStop hook we covered earlier: before the Pod stops, do something such as a graceful nginx shutdown — finish handling current requests, stop accepting new ones, and only then stop the Pod. Production applications almost always add this kind of graceful exit in preStop; a sketch follows this list.
- Simply sleeping inside preStop also works: it gives in-flight requests/connections enough time to finish.
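A minimal sketch of such a hook for an nginx container (the 10s sleep is an assumption — tune it to how long your requests need to drain):

```yaml
containers:
- name: nginx
  image: nginx:latest
  lifecycle:
    preStop:
      exec:
        # wait for in-flight requests to drain, then ask nginx to exit gracefully
        command: ["/bin/sh", "-c", "sleep 10 && nginx -s quit"]
```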
- Application upgrade (three ways to update the image, each automatically triggering a rolling upgrade):
1. kubectl apply -f xxx.yaml (recommended)
2. kubectl set image deployment/web nginx=nginx:1.17
3. kubectl edit deployment/web  # opens the object in the system editor
Case study: zero downtime
For the details of this case study, see:
https://onedayxyy.cn/docs/k8s-wordpress-example

==Why requests fail==
We access the application through a NodePort here, which is ultimately implemented by kube-proxy updating iptables rules on every node.

Kubernetes updates the Endpoints object according to Pod status, so Endpoints only ever contains Pods that are ready to handle requests. Once a new Pod is active and ready, Kubernetes stops an old Pod: its status changes to "Terminating", it is removed from the Endpoints object, and a SIGTERM signal is sent to the Pod's main process. SIGTERM makes the container shut down gracefully and stop accepting new connections. After the Pod is removed from Endpoints, the load balancer in front routes traffic to the other (new) Pods. But the termination signal deactivates the Pod before the load balancer notices the change and updates its configuration, and this reconfiguration happens asynchronously with no ordering guarantee — so a small number of requests may still be routed to a Pod that has already terminated, which is exactly the failure described above.
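One way to observe this race is to watch the Endpoints object while a rolling update runs; a sketch (my-service and my-deploy are hypothetical names):

```bash
# terminal 1: watch Pod IPs join and leave the Endpoints object
kubectl get endpoints my-service -w

# terminal 2: trigger a rolling update
kubectl set image deployment/my-deploy nginx=nginx:1.21
```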
==Zero downtime==
So how do we enhance the application to achieve truly zero-downtime rolling updates?
First, the prerequisite is that our containers handle the termination signal correctly, i.e. shut down gracefully on SIGTERM. Next, add a readiness probe to check whether the application is actually ready to receive traffic. To keep a stopping Pod from blocking on — or racing ahead of — the load balancer's reconfiguration, we also need the preStop lifecycle hook, which is invoked before the container terminates.
Lifecycle hooks are synchronous, so they must complete before the final termination signal is sent to the container. In our example the hook simply waits; only afterwards does SIGTERM stop the application process. Meanwhile, Kubernetes removes the Pod from the Endpoints object, excluding it from the load balancer. The hook's wait time essentially guarantees that the load balancer is reconfigured before the application stops:
```yaml
readinessProbe:
  # ...
lifecycle:
  preStop:
    exec:
      command: ["/bin/bash", "-c", "sleep 20"]
```

Here preStop gives a 20s grace period: the Pod sleeps for 20s before it is actually destroyed, leaving time for the Endpoints controller and kube-proxy to update the Endpoints object and the forwarding rules. During this window the Pod is in the Terminating state, but even if a request is forwarded to it before the rules are fully updated, it can still be handled normally — the Pod is only sleeping and has not been destroyed yet.
Now, watching Pod behavior during a rolling update, we will see terminating Pods stay in the Terminating state until the wait elapses; re-running the test with Fortio shows the ideal result of zero failed requests.