Hands-On: Deploying a Complete Enterprise-Grade Highly Available K8s Cluster (Successfully Tested) - 2021.10.20

Document History
| Date | Change | Author |
|---|---|---|
| 2021.10.20 | Document created | 彦 |
| 2022.10.15 | Some steps refined | 彦 |
Lab Environment
- Windows 10 host running VMware Workstation VMs
- K8s cluster: three CentOS 7.6 (1810) VMs, with 2 master nodes and 1 worker node
- k8s version: v1.20
- CONTAINER-RUNTIME: docker://20.10.7
1. Hardware Environment
Three VMs, each with 2 vCPUs, 2 GB RAM, and a 20 GB disk (NAT mode, with outbound internet access).
| Role | Hostname | IP |
|---|---|---|
| master node | k8s-master1 | 172.29.9.41 |
| master node | k8s-master2 | 172.29.9.42 |
| worker node | k8s-node1 | 172.29.9.43 |
| VIP | / | 172.29.9.88 |
👉 Note:
This lab reuses the 3 k8s nodes as etcd nodes. (This is a test environment, so the etcd cluster shares the 3 k8s nodes; in a real production environment you can run the etcd cluster on dedicated machines.)
The 2 master nodes provide high availability;
the 1 worker node runs the workloads.
2. Software Environment
| Software | Version |
|---|---|
| OS | centos7.6_x64 1810 mini (other CentOS 7.x versions work too) |
| docker | 20.10.7-ce |
| kubernetes | v1.20.0 |
3. Architecture Diagrams
- Conceptual diagram:

- Actual topology diagram:

Lab Software
Link: https://pan.baidu.com/s/1-QDyJBsJizN8SbBHAp-JXQ
Extraction code: 1b25
Lab software: deploying a complete enterprise-grade highly available K8s cluster - 20211020

1. Basic Environment Setup
👉 Configure on all nodes
1. Base configuration
# Disable the firewall and NetworkManager
systemctl stop firewalld && systemctl disable firewalld
systemctl stop NetworkManager && systemctl disable NetworkManager

# Disable SELinux
setenforce 0
sed -i s/SELINUX=enforcing/SELINUX=disabled/ /etc/selinux/config

# Disable swap
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

# Add host entries
cat >> /etc/hosts << EOF
172.29.9.41 k8s-master1
172.29.9.42 k8s-master2
172.29.9.43 k8s-node1
EOF

# Pass bridged traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Sync the time
yum install ntpdate -y
ntpdate time.windows.com
2. Set the hostnames of the 3 nodes

# On k8s-master1:
hostnamectl --static set-hostname k8s-master1
bash

# On k8s-master2:
hostnamectl --static set-hostname k8s-master2
bash

# On k8s-node1:
hostnamectl --static set-hostname k8s-node1
bash
3. Configure passwordless SSH
Set up passwordless SSH across the 3 machines (to make it easy to push files from one machine to the others later):

# Run on k8s-master1:
ssh-keygen   # just press Enter at every prompt
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.29.9.42
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.29.9.43
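A quick sanity check that key-based login works; run from k8s-master1 (each command should print the remote hostname without a password prompt):

for ip in 172.29.9.42 172.29.9.43; do ssh root@$ip hostname; done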
4. Upload the required software
Upload the software needed for this deployment to the k8s-master1 node:

👉 Take a snapshot
The base environment on all 3 nodes is now set up; remember to take a snapshot of each node!
2. Deploy the Nginx + Keepalived High-Availability Load Balancer
👉 (configure on the 2 master nodes only)
1. Install the packages
👉 (on both the primary and backup master nodes)

yum install epel-release -y
yum install nginx keepalived -y
2. Nginx configuration file
👉 (on both the primary and backup master nodes)
cat > /etc/nginx/nginx.conf << "EOF"
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

# Layer-4 load balancing for the apiserver components on the two masters.
# The stream module provides L4 load balancing in nginx, as opposed to L7 (HTTPS) load balancing.
stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
       server 172.29.9.41:6443;   # Master1 APISERVER IP:PORT; change to your master node IPs
       server 172.29.9.42:6443;   # Master2 APISERVER IP:PORT
    }

    server {
       listen 16443;   # nginx shares these hosts with the masters, so it cannot listen on 6443 or it would conflict with the apiserver
       proxy_pass k8s-apiserver;
    }
}

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;
}
EOF
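Optionally, syntax-check the file before starting. Note that on a stock CentOS nginx this check will fail with an unknown "stream" directive until the stream module is installed (that exact problem comes up in step 4 below):

nginx -t   # should report "syntax is ok" and "test is successful" once nginx-mod-stream is in place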
3. Keepalived configuration files
1. Configuration on the Nginx master
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33         # change to your actual NIC name
    virtual_router_id 51    # VRRP router ID; must be unique per instance
    priority 100            # priority; set 90 on the backup server
    advert_int 1            # VRRP heartbeat advertisement interval, default 1s
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    # Virtual IP
    virtual_ipaddress {
        172.29.9.88/16
    }
    track_script {
        check_nginx
    }
}
EOF
- vrrp_script: the script that checks nginx health (its result decides whether to fail over)
- virtual_ipaddress: the virtual IP (VIP)
Create the nginx health-check script referenced in the config above:
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh
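You can exercise the health-check script by hand; it should exit 0 while something is listening on 16443 and 1 otherwise:

/etc/keepalived/check_nginx.sh; echo $?   # 0 = healthy, 1 = triggers failover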
2. Configuration on the Nginx backup
cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_BACKUP
}

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51    # VRRP router ID; must be unique per instance
    priority 90             # 90 on the backup
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.29.9.88/16      # VIP
    }
    track_script {
        check_nginx
    }
}
EOF
- Create the same nginx health-check script referenced in the config above:

cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(ss -antp |grep 16443 |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    exit 1
else
    exit 0
fi
EOF

chmod +x /etc/keepalived/check_nginx.sh
Note: keepalived decides whether to fail over based on the script's exit status (0 = working, non-zero = not working).
4. Start the services and enable them at boot
👉 (on both master nodes)

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived
- Note: starting nginx will fail with an error here.
journalctl -u nginx shows the reason: the current nginx build no longer includes the stream module, which causes the error:

- We need to install it separately:

[root@k8s-master1 ~]#yum search stream|grep nginx
nginx-mod-stream.x86_64 : Nginx stream modules

yum install -y nginx-mod-stream
- Start again and everything is fine. Run these two steps on both master nodes.

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived

5. Check keepalived status
Check on the master node:

You can see the virtual IP 172.29.9.88 bound to the ens33 NIC, which means keepalived is working correctly.
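The same check can be done from the shell; a one-liner assuming the ens33 NIC configured above:

ip addr show ens33 | grep -w 172.29.9.88   # the VIP should be listed on the currently active node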
6. Nginx + Keepalived failover test
- Test method:
Stop nginx on the master node and check whether the VIP floats to the backup server.
On the Nginx master, run pkill nginx.
On the Nginx backup, confirm with the ip addr command that the VIP is now bound there (see the watch sketch below).
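A convenient way to watch the failover happen live from the backup node, assuming the same ens33 NIC:

watch -n1 "ip addr show ens33 | grep 172.29.9.88"   # refreshes every second; Ctrl-C to stop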
- Actual test
First, check the IP situation on the nginx master node:

Then start a continuous ping of the VIP from Windows:

Now check nginx status on the nginx master node and run the pkill nginx command:

[root@k8s-master1 ~]#ss -antup|grep nginx
tcp LISTEN 0 128 *:16443 *:* users:(("nginx",pid=25229,fd=7),("nginx",pid=25228,fd=7),("nginx",pid=25227,fd=7))
[root@k8s-master1 ~]#pkill nginx
[root@k8s-master1 ~]#ss -antup|grep nginx
[root@k8s-master1 ~]#

Back on the nginx master node, confirm nginx is stopped and check the ping: exactly one packet is dropped at this moment.

On the Nginx backup, the ip addr command shows the VIP has been bound successfully:

As expected.
Now start nginx on the master node again and observe what happens.
Note: keepalived drops one packet when the VIP switches over.



👉 Conclusions
1. nginx runs on both the master node and the backup node;
2. keepalived drops one packet when moving the VIP. Keepalived is a mainstream high-availability tool that implements active/standby failover by binding a VIP. In the topology above, Keepalived decides whether to fail over (move the VIP) based on the nginx health check: when the nginx master dies, the VIP is automatically bound on the nginx backup, so the VIP stays reachable and nginx remains highly available.
3. Keep an eye on the nginx service itself.

3. Deploy the etcd Cluster
👉 (needed only on the etcd nodes; since this lab reuses all 3 cluster nodes for etcd, configure all three)
1. Prepare the cfssl certificate tooling
cfssl is an open-source certificate management tool that generates certificates from JSON files and is more convenient to use than openssl.
Any machine will do; here we use the k8s-master1 node.
# Option 1: download the tools yourself
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
# If the links above don't open, just use the binaries provided with this article!

chmod +x cfssl*
for x in cfssl*; do mv $x ${x%*_linux-amd64}; done
mv cfssl* /usr/bin


# Option 2: use the provided binaries
# upload the binaries to the machine, then:
mv cfssl* /usr/bin
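Quick check that the tools are on the PATH:

cfssl version   # prints the cfssl release info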
2. Generate the etcd Certificates
1. Self-signed certificate authority (CA)
- Create a working directory:
mkdir -p ~/etcd_tls
cd ~/etcd_tls
- Self-sign the CA:
cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF
- Generate the certificate:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
This produces the ca.pem and ca-key.pem files.
2. Issue the etcd HTTPS certificate with the self-signed CA
- Create the certificate signing request file:

cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "172.29.9.41",
    "172.29.9.42",
    "172.29.9.43"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "BeiJing",
            "ST": "BeiJing"
        }
    ]
}
EOF
Note: the hosts field must list the internal communication IPs of every etcd node, with not a single one missing! To simplify future scale-out, you can list a few spare IPs now. Listing extra IPs is harmless; listing too few means regenerating the certificate later when you expand the cluster, which is a hassle.
- Generate the certificate:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
This produces the server.pem and server-key.pem files.
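You can confirm the hosts field actually landed in the certificate as SANs, for example with openssl (installed by default on CentOS):

openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
# expect: IP Address:172.29.9.41, IP Address:172.29.9.42, IP Address:172.29.9.43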
3. Download the binaries from GitHub
Download URL: https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz
Or just use the copy provided with this article!
- Upload the etcd tarball to the k8s-master1 node:
[root@k8s-master1 ~]#ll
total 18044
-r--------. 1 root root   545894 May 30 10:57 centos7-init.zip
drwxr-xr-x  2 root root      174 Oct 19 22:28 etcd_tls
-rw-r--r--  1 root root 17364053 Oct 19 22:31 etcd-v3.4.9-linux-amd64.tar.gz
-rw-r--r--. 1 root root   560272 May 30 10:48 wget-1.14-18.el7_6.1.x86_64.rpm
[root@k8s-master1 ~]#
4. Deploy the etcd cluster
👉 (perform the following on node 1; to simplify things, all files generated on node 1 will be copied to nodes 2 and 3 afterwards)
1. Create the working directory and unpack the binaries

mkdir /opt/etcd/{bin,cfg,ssl} -p
cd /root/
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
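A quick check of the unpacked binary:

/opt/etcd/bin/etcd --version   # should report etcd Version: 3.4.9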
2. Create the etcd configuration file
🍀 Annotated version:

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.9.41:2380"     # 2380 is the peer (cluster communication) port
ETCD_LISTEN_CLIENT_URLS="https://172.29.9.41:2379"   # 2379 is the client (data) port; all client reads and writes to etcd go through it

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.9.41:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.9.41:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.29.9.41:2380,etcd-2=https://172.29.9.42:2380,etcd-3=https://172.29.9.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"            # a simple safeguard: a network may host several k8s clusters, and the token prevents accidental cross-cluster syncing
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
- ETCD_NAME: node name, unique within the cluster
- ETCD_DATA_DIR: data directory
- ETCD_LISTEN_PEER_URLS: listen address for cluster (peer) communication
- ETCD_LISTEN_CLIENT_URLS: listen address for client access
- ETCD_INITIAL_ADVERTISE_PEER_URLS: advertised peer address
- ETCD_ADVERTISE_CLIENT_URLS: advertised client address
- ETCD_INITIAL_CLUSTER: cluster node addresses
- ETCD_INITIAL_CLUSTER_TOKEN: cluster token
- ETCD_INITIAL_CLUSTER_STATE: state when joining the cluster; new for a new cluster, existing to join an existing one
🍀 Final version:

cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.9.41:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.29.9.41:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.9.41:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.9.41:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.29.9.41:2380,etcd-2=https://172.29.9.42:2380,etcd-3=https://172.29.9.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
3. Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
--logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
4. Copy the certificates generated earlier
Copy the certificates to the paths referenced in the config file:
cp ~/etcd_tls/ca*pem ~/etcd_tls/server*pem /opt/etcd/ssl/
5. Start etcd and enable it at boot

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd

👉 Note: starting etcd on the first node will be very slow and is bound to fail at this point.
Why?
journalctl -u etcd -f   # watch the log
dial tcp 172.29.9.43:2380: connect: connection refused

Note:
The log shows errors connecting to the other two etcd nodes, so the etcd services on those two nodes must be started as well before the cluster comes up.
6. Copy all the files generated on node 1 to nodes 2 and 3

scp -r /opt/etcd/ root@172.29.9.42:/opt/
scp /usr/lib/systemd/system/etcd.service root@172.29.9.42:/usr/lib/systemd/system/

scp -r /opt/etcd/ root@172.29.9.43:/opt/
scp /usr/lib/systemd/system/etcd.service root@172.29.9.43:/usr/lib/systemd/system/
Then, on nodes 2 and 3, edit the node name and the current server IP in etcd.conf (the template below shows generic 192.168.31.x addresses; the lab-specific versions follow):

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"   # change this: etcd-2 on node 2, etcd-3 on node 3
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.31.71:2380"     # change to the current server's IP
ETCD_LISTEN_CLIENT_URLS="https://192.168.31.71:2379"   # change to the current server's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.31.71:2380"   # change to the current server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.31.71:2379"         # change to the current server's IP
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
- Final configurations:

# On k8s-master2 (172.29.9.42):
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-2"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.9.42:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.29.9.42:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.9.42:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.9.42:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.29.9.41:2380,etcd-2=https://172.29.9.42:2380,etcd-3=https://172.29.9.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF


# On k8s-node1 (172.29.9.43):
cat > /opt/etcd/cfg/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-3"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.29.9.43:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.29.9.43:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.29.9.43:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.29.9.43:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.29.9.41:2380,etcd-2=https://172.29.9.42:2380,etcd-3=https://172.29.9.43:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
- Finally, start etcd and enable it at boot, same as above.

systemctl daemon-reload
systemctl start etcd
systemctl enable etcd
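Using the passwordless SSH set up earlier, all three services can be checked from k8s-master1 in one go (a small sketch):

systemctl is-active etcd   # local node (172.29.9.41)
for ip in 172.29.9.42 172.29.9.43; do echo -n "$ip: "; ssh root@$ip systemctl is-active etcd; done   # expect "active" for each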

7. Check the cluster status

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.29.9.41:2379,https://172.29.9.42:2379,https://172.29.9.43:2379" endpoint health --write-out=table

If the output shows every endpoint as healthy like the above, the cluster was deployed successfully.
If anything goes wrong, check the logs first: /var/log/messages or journalctl -u etcd
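Two more read-only checks with the same TLS flags can also be useful here (a sketch; the flags are identical to the health check above):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.29.9.41:2379,https://172.29.9.42:2379,https://172.29.9.43:2379" member list --write-out=table      # lists the 3 members with their peer/client URLs
ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem --endpoints="https://172.29.9.41:2379,https://172.29.9.42:2379,https://172.29.9.43:2379" endpoint status --write-out=table  # shows which member is currently the leader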
4. Install Docker/kubeadm/kubelet
1. Install Docker
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum install -y docker-ce-20.10.7 docker-ce-cli-20.10.7 containerd.io

systemctl start docker && systemctl enable docker

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors":["https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com","http://qtid6917.mirror.aliyuncs.com"]
}
EOF

echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
sysctl -p

systemctl daemon-reload
systemctl restart docker
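Verify the installation; the cgroup driver is worth noting, because kubeadm will later warn that cgroupfs rather than the recommended systemd driver is in use (this lab proceeds with cgroupfs as-is):

docker --version
docker info 2>/dev/null | grep -i "cgroup driver"   # shows cgroupfs on this setup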
2. Add the Alibaba Cloud YUM repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
3. Install kubeadm, kubelet and kubectl
Since releases change frequently, pin the version explicitly:

yum install -y kubelet-1.20.0 kubeadm-1.20.0 kubectl-1.20.0
systemctl enable kubelet
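Confirm the pinned versions landed:

kubeadm version -o short   # v1.20.0
kubelet --version          # Kubernetes v1.20.0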
5. Deploy the Kubernetes Masters
1. Initialize Master1
👉 (run on k8s-master1)
- Create the init configuration file:
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 9037x2.tcaqnpaqkra9vsbw
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.29.9.41
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:  # must include every Master/LB/VIP IP, with none missing! Add a few spare IPs to simplify future scale-out.
  - k8s-master1
  - k8s-master2
  - 172.29.9.41
  - 172.29.9.42
  - 172.29.9.88
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.29.9.88:16443  # load balancer virtual IP (VIP) and port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:  # use the external etcd cluster
    endpoints:
    - https://172.29.9.41:2379  # the 3 etcd cluster nodes
    - https://172.29.9.42:2379
    - https://172.29.9.43:2379
    caFile: /opt/etcd/ssl/ca.pem  # certificates for connecting to etcd
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.aliyuncs.com/google_containers  # the default registry k8s.gcr.io is unreachable from mainland China, so use the Alibaba Cloud mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.0  # K8s version, matching the packages installed above
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod network; must match the CNI component YAML deployed below
  serviceSubnet: 10.96.0.0/12  # in-cluster virtual network, the unified access entry for Pods (Services)
scheduler: {}
EOF
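Optionally pre-pull the control-plane images first, as the init output below itself suggests, so the init step runs faster:

kubeadm config images pull --config kubeadm-config.yaml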
- Bootstrap the cluster with the config file:
[root@k8s-master1 ~]#kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.9. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.9.41 172.29.9.88 172.29.9.42 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.036041 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9037x2.tcaqnpaqkra9vsbw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d
After initialization completes, two join commands are printed: the one with --control-plane joins additional masters to form the multi-master cluster; the one without joins worker nodes.
Copy the kubeconfig that kubectl uses to authenticate to the cluster to the default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master1 ~]#kubectl get node
NAME          STATUS     ROLES                  AGE     VERSION
k8s-master1   NotReady   control-plane,master   7m34s   v1.20.0
[root@k8s-master1 ~]#
2. Initialize Master2
If we run the suggested join command directly on the k8s-master2 node at this point, it fails:

Note: when master2 runs this join command, kubeadm really only generates some local config files; it does not re-initialize the certificates. Since master2 joins the cluster as a second control plane, kubeadm must not regenerate the root CA and related certificates: regenerating them would leave the cluster with inconsistent certificates and cause all kinds of certificate problems later. So before deploying the second master we copy the certificates over from the first node instead of generating an independent set; all subsequent authorization is based on this one set of certificates.
- Copy the certificates generated on Master1 to Master2:

scp -r /etc/kubernetes/pki/ 172.29.9.42:/etc/kubernetes/

- Run the master join command on master2:

kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
   --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d \
   --control-plane
This time the join succeeds!
Copy the kubeconfig that kubectl uses to the default path:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master2 ~]#kubectl get node
NAME          STATUS     ROLES                  AGE   VERSION
k8s-master1   NotReady   control-plane,master   14m   v1.20.0
k8s-master2   NotReady   control-plane,master   30s   v1.20.0
[root@k8s-master2 ~]#

Note: the nodes show NotReady because the network plugin has not been deployed yet.
3. Test access through the load balancer
From any node in the K8s cluster, use curl to query the K8s version endpoint through the VIP:
Here the test command is run on the k8s-master2 node; run it several times.

[root@k8s-master2 ~]#curl -k https://172.29.9.88:16443/version
{
  "major": "1",
  "minor": "20",
  "gitVersion": "v1.20.0",
  "gitCommit": "af46c47ce925f4c4ad5cc8d1fca46c7b77d13b38",
  "gitTreeState": "clean",
  "buildDate": "2020-12-08T17:51:19Z",
  "goVersion": "go1.15.5",
  "compiler": "gc",
  "platform": "linux/amd64"
}[root@k8s-master2 ~]#

The K8s version info is returned correctly, so the load balancer is working. Request flow: curl -> VIP (nginx) -> apiserver
- The Nginx access log also shows the apiserver IPs requests are forwarded to (check this on the k8s-master1 node):

tail /var/log/nginx/k8s-access.log -f

Test passed.
6. Join the Kubernetes Worker Node
👉 Run on 172.29.9.43 (node1).
- To add a new node to the cluster, run the kubeadm join command printed by kubeadm init:

[root@k8s-master1 ~]#kubeadm token create --print-join-command
kubeadm join 172.29.9.88:16443 --token vh0mrh.9s60jligjkrduacj --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d
[root@k8s-master1 ~]#

Any further nodes join the same way.
Note: the default token is valid for 24 hours. Once expired it can no longer be used, and a new token is needed; the quick way is to run kubeadm token create --print-join-command on a master node.
7. Deploy the Network Component
Calico is a pure layer-3 data center networking solution and currently the mainstream network choice for Kubernetes.
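One thing worth checking before applying the manifest: the Pod CIDR in calico.yaml must match the podSubnet (10.244.0.0/16) set in kubeadm-config.yaml. In the stock Calico manifest this is the CALICO_IPV4POOL_CIDR environment variable, usually shipped commented out; a quick look, assuming that manifest layout:

grep -B1 -A1 "CALICO_IPV4POOL_CIDR" calico.yaml   # set it to 10.244.0.0/16 if it differs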
- Deploy Calico:

kubectl apply -f calico.yaml
kubectl get pod -A

- Once all the Calico Pods are Running, the nodes become Ready as well:

[root@k8s-master1 ~]#kubectl get node
NAME          STATUS   ROLES                  AGE   VERSION
k8s-master1   Ready    control-plane,master   29m   v1.20.0
k8s-master2   Ready    control-plane,master   24m   v1.20.0
k8s-node1     Ready    <none>                 16m   v1.20.0
[root@k8s-master1 ~]#
8. Deploy the Dashboard
Dashboard is the official web UI for basic management of K8s resources.

kubectl apply -f kubernetes-dashboard.yaml

Check the deployment:
kubectl get pods -n kubernetes-dashboard
Access URL: https://NodeIP:30001
Create a service account and bind it to the default cluster-admin cluster role:

kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

- Log in to the Dashboard with the token from the output.


👉 After finishing all the steps above, remember to snapshot all 3 nodes, so that if the cluster ever breaks you can restore it quickly.
Credits
Thanks to 阿良 (Aliang) for sharing. 😘
About Me
The goals of my blog:
- Clean layout and concise writing;
- Docs that double as manuals: detailed steps, no buried pitfalls, source code provided;
- Every hands-on doc here has been personally verified. If you run into questions while following along, feel free to contact me and I'll help you sort them out, so we can improve together!
🍀 WeChat QR code
WeChat: x2675263825 (舍得), QQ: 2675263825.

🍀 WeChat Official Account
《云原生架构师实战》

🍀 Blog


🍀 CSDN
https://blog.csdn.net/weixin_39246554?spm=1010.2135.3001.5421

🍀 Zhihu
https://www.zhihu.com/people/foryouone

Finally
That's all for this post. Thanks for reading! I wish you a happy and meaningful life every day. See you next time!


