Hands-on: Building a Kubernetes cluster with kubeadm (k8s v1.22.2, containerd v1.5.5) - 2021.11.2 (deployed successfully, following 阳明's guide, CRI: containerd)

Built on 2021.11.2.

Lab environment

1. Hardware environment

3 virtual machines, each with 2 vCPUs, 2 GB RAM and a 20 GB disk (NAT networking, with Internet access).
Role | Hostname | IP |
---|---|---|
master node | master1 | 172.29.9.51 |
worker node | node1 | 172.29.9.52 |
worker node | node2 | 172.29.9.53 |
2. Software environment

Software | Version |
---|---|
Operating system | CentOS 7.6 x64 1810 minimal (other CentOS 7.x releases also work) |
containerd | v1.5.5 |
kubernetes | v1.22.2 |
Lab software

Link: https:

1. Basic environment setup (required on all nodes)

1.1 Set the hostname

```shell
# on master1
hostnamectl --static set-hostname master1
bash
# on node1
hostnamectl --static set-hostname node1
bash
# on node2
hostnamectl --static set-hostname node2
bash
```
Note: a node's hostname must be a valid DNS name, and never keep the default hostname `localhost`, which leads to all kinds of errors. In Kubernetes, machine names, and every API object stored in etcd, must use standard DNS naming (RFC 1123). Use `hostnamectl set-hostname node1` to change a hostname.
1.2 Disable the firewall and SELinux

```shell
systemctl stop firewalld && systemctl disable firewalld
systemctl stop NetworkManager && systemctl disable NetworkManager
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```
1.3 Disable the swap partition

```shell
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
```

Question: why must swap be disabled when installing a Kubernetes cluster? Swap has to be turned off, otherwise the kubelet will not start and the cluster cannot come up. The kubelet most likely refuses swap because swapping memory to disk has a large, unpredictable impact on performance.
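As a quick check (a small sketch added here, not part of the original steps), confirm that swap is really off before continuing:

```shell
# Swap should show 0 total after swapoff -a and the fstab edit
free -m | grep -i swap
# The swap line in /etc/fstab should now be commented out
grep swap /etc/fstab
```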
1.4 Configure name resolution

```shell
cat >> /etc/hosts << EOF
172.29.9.51 master1
172.29.9.52 node1
172.29.9.53 node2
EOF
```

Question: why do the nodes need name resolution when installing a Kubernetes cluster? Later on, when kubectl connects to a container running on a node, it uses the node name shown by `kubectl get node`, so the host must be able to resolve that name; if it cannot, the connection may fail.
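A quick sanity check (a sketch, not from the original article) that each name resolves to the expected IP:

```shell
# Should print the addresses added to /etc/hosts
getent hosts master1 node1 node2
ping -c 1 node1
```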
1.5 Pass bridged IPv4 traffic to iptables chains

```shell
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
```

Note: enabling kernel IPv4 forwarding requires the br_netfilter module, so load it first with `modprobe br_netfilter`. About bridge-nf: bridge-nf lets netfilter filter IPv4/ARP/IPv6 packets that traverse a Linux bridge. For example, after setting net.bridge.bridge-nf-call-iptables=1, packets forwarded by a layer-2 bridge are also filtered by the iptables FORWARD rules. The common options are listed below (a quick verification sketch follows the list):
- net.bridge.bridge-nf-call-arptables: whether bridged ARP packets are filtered by the arptables FORWARD chain
- net.bridge.bridge-nf-call-ip6tables: whether bridged IPv6 packets are filtered by the ip6tables chains
- net.bridge.bridge-nf-call-iptables: whether bridged IPv4 packets are filtered by the iptables chains
- net.bridge.bridge-nf-filter-vlan-tagged: whether VLAN-tagged packets are filtered by iptables/arptables.
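As referenced above, a minimal check (an addition, not from the original) that the sysctl settings actually took effect:

```shell
# All three should report 1 after `sysctl --system`
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# br_netfilter must stay loaded, otherwise the bridge-nf keys disappear
lsmod | grep br_netfilter
```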
1.6 Install ipvs

```shell
cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
yum install ipset -y
yum install ipvsadm -y
```
Notes:
01. The script above creates the file /etc/sysconfig/modules/ipvs.modules, which ensures the required modules are loaded automatically after a node reboot. Use `lsmod | grep -e ip_vs -e nf_conntrack_ipv4` to verify that the kernel modules were loaded correctly.
02. Make sure the ipset package is installed on every node: `yum install ipset -y`.
03. To make it easier to inspect the ipvs proxy rules later, it is best to also install the management tool ipvsadm: `yum install ipvsadm -y`.
1.7 Synchronize server time

```shell
yum install chrony -y
systemctl enable chronyd --now
chronyc sources
```
1.8 Configure passwordless SSH (to make it easier to copy files from the master node to the worker nodes later)

```shell
# Run on master1, pressing Enter twice to accept the defaults
ssh-keygen
# Run on master1
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.29.9.52
ssh-copy-id -i ~/.ssh/id_rsa.pub root@172.29.9.53
```
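A quick check (a sketch added here, not in the original) that key-based login works:

```shell
# Both commands should print the remote hostname without prompting for a password
ssh root@172.29.9.52 hostname
ssh root@172.29.9.53 hostname
```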
2. Install Containerd (required on all nodes)

2.1 Install containerd

```shell
cd /root/
yum install libseccomp -y
wget https:
tar -C / -xzf cri-containerd-cni-1.5.5-linux-amd64.tar.gz
echo "export PATH=$PATH:/usr/local/bin:/usr/local/sbin" >> ~/.bashrc
source ~/.bashrc
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl enable containerd --now
ctr version
```
Note: for the details of installing containerd on CentOS 7, see the article 实战:centos7上containerd的安装-20211023; only the shell commands are given here.
2.2将 containerd 的 cgroup driver 配置为 systemd
对于使用 systemd 作为 init system 的 Linux 的发行版,使用 systemd
作为容器的 cgroup driver
可以确保节点在资源紧张的情况更加稳定,所以推荐将 containerd 的 cgroup driver 配置为 systemd。
修改前面生成的配置文件 /etc/containerd/config.toml
,在 plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options
配置块下面将 SystemdCgroup
设置为 true
:
#通过搜索SystemdCgroup进行定位#vim /etc/containerd/config.toml[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]...[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]SystemdCgroup=true....#注意:最终输出shell命令:sed-i"s/SystemdCgroup =false/SystemdCgroup =true/g"/etc/containerd/config.toml
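To confirm the change took effect (a small check added here, not in the original):

```shell
# Should print: SystemdCgroup = true
grep SystemdCgroup /etc/containerd/config.toml
```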
2.3 Configure a registry mirror (accelerator)

Next, configure an image accelerator for the registries: under the registry block of the cri plugin configuration, add registry.mirrors entries (mind the indentation):

```toml
[root@master1 ~]#vim /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".registry]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["https:
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
          endpoint = ["https:
……
    sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.5"
……
```
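The endpoint URLs above are truncated in this copy of the article. For reference, a complete mirrors block typically looks like the sketch below; the accelerator addresses are placeholders/assumptions, so substitute your own mirror, and note that sandbox_image belongs to the cri plugin block itself, not to the registry block:

```toml
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.5"
  ...
  [plugins."io.containerd.grpc.v1.cri".registry]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
        endpoint = ["https://<your-docker-hub-mirror>"]        # placeholder: your own Docker Hub accelerator
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
        endpoint = ["https://registry.aliyuncs.com/k8sxio"]    # assumption: the Aliyun mirror used elsewhere in this article
```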
2.5 Start the containerd service

The containerd tarball we downloaded above ships an etc/systemd/system/containerd.service unit file, so containerd can run as a daemon managed by systemd. Start it with the following commands:

```shell
systemctl daemon-reload
systemctl enable containerd --now
```
2.6 Verify

Once containerd is running, you can use its local CLI tools ctr and crictl, for example to check the versions:

```shell
ctr version
crictl version
```
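One optional extra (an assumption, not a step from the original article): if crictl complains that it cannot reach the runtime, point it explicitly at the containerd socket:

```shell
# crictl reads its endpoints from /etc/crictl.yaml by default
cat > /etc/crictl.yaml << EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF
crictl info
```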
At this point, containerd is installed.
3. Deploy Kubernetes with kubeadm

3.1 Add the Aliyun YUM repository (required on all nodes)

We use the Aliyun mirror for the installation:

```shell
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https:
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https:
EOF
```
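The baseurl and gpgkey values are truncated in this copy. For reference, the commonly used Aliyun Kubernetes repo definition looks like the sketch below (an assumption; verify the URLs against the Aliyun mirror documentation before using them):

```shell
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```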
3.2 Install kubeadm, kubelet and kubectl (required on all nodes)

```shell
yum makecache fast
yum install -y kubelet-1.22.2 kubeadm-1.22.2 kubectl-1.22.2 --disableexcludes=kubernetes
kubeadm version
systemctl enable --now kubelet
```

Note: --disableexcludes=kubernetes disables every repo except the kubernetes one.
The version output is as expected.
3.3 Initialize the cluster (run on master1)

When you run `kubelet --help` you can see that most of the command-line flags are marked DEPRECATED. That is because the official recommendation is to use `--config` to point at a configuration file and set those parameters there; see the official document "Set Kubelet parameters via a config file" for more details. This is also what enables **Dynamic Kubelet Configuration**; see "Reconfigure a Node's Kubelet in a Live Cluster".

We can then print the default configuration used for cluster initialization on the master node with the following command:

```shell
[root@master1 ~]#kubeadm config print init-defaults --component-configs KubeletConfiguration > kubeadm.yaml
```
Then adjust the configuration to our own needs: for example, modify imageRepository to specify the registry the cluster pulls its images from during initialization, set the kube-proxy mode to ipvs, and, since we are going to install the flannel network plugin, set networking.podSubnet to 10.244.0.0/16:

```yaml
[root@master1 ~]#vim kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.29.9.51   # change 1: the master node's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock   # change 2: use containerd's Unix socket address
  imagePullPolicy: IfNotPresent
  name: master1   # change 3: set the master node's name
  taints:   # change 4: taint the master so ordinary workloads are not scheduled on it
  - effect: "NoSchedule"
    key: "node-role.kubernetes.io/master"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs   # change 5: set the kube-proxy mode to ipvs (default is iptables)
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/k8sxio   # change 6: the image repository
kind: ClusterConfiguration
kubernetesVersion: 1.22.2   # change 7: specify the k8s version; the default omits the patch version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16   # change 8: specify the pod subnet
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
cgroupDriver: systemd   # change 9: configure the cgroup driver
logging: {}
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
```
Configuration tips

The documentation for the resource manifests above is rather scattered; to fully understand every field of these objects, consult the corresponding godoc, at https:

```shell
[root@master1 ~]#kubeadm config images list --config kubeadm.yaml
```

(I remember this did not report an error in yesterday's test, yet now it does... The warning can be ignored; it does not affect the following steps.)
With the configuration file ready, pull the required images in advance with the following command:
```shell
[root@master1 ~]#kubeadm config images pull --config kubeadm.yaml
W1031 06:59:20.922890   25580 strict.go:55] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: error converting YAML to JSON: yaml: unmarshal errors:
  line 27: key "cgroupDriver" already set in map
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-apiserver:v1.22.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.22.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-scheduler:v1.22.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-proxy:v1.22.2
[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.5
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.5.0-0
failed to pull image "registry.aliyuncs.com/k8sxio/coredns:v1.8.4": output: time="2021-10-31T07:04:50+08:00" level=fatal msg="pulling image: rpc error: code = NotFound desc = failed to pull and unpack image \"registry.aliyuncs.com/k8sxio/coredns:v1.8.4\": failed to resolve reference \"registry.aliyuncs.com/k8sxio/coredns:v1.8.4\": registry.aliyuncs.com/k8sxio/coredns:v1.8.4: not found"
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher
[root@master1 ~]#
```
The pull of the coredns image failed because the Aliyun repository does not contain it under that name. We can pull the image manually from the official repository and then re-tag it to the expected address:
```shell
[root@master1 ~]#ctr -n k8s.io i pull docker.io/coredns/coredns:1.8.4
docker.io/coredns/coredns:1.8.4: resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890: done |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:10683d82b024a58cc248c468c2632f9d1b260500f7cd9bb8e73f751048d7d6d4: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:bc38a22c706b427217bcbd1a7ac7c8873e75efdd0e59d6b9f069b4b243db4b4b: done |++++++++++++++++++++++++++++++++++++++|
config-sha256:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44: done |++++++++++++++++++++++++++++++++++++++|
layer-sha256:c6568d217a0023041ef9f729e8836b19f863bcdb612bb3a329ebc165539f5a80: done |++++++++++++++++++++++++++++++++++++++|
elapsed: 15.9s  total: 12.1 M (780.6 KiB/s)
unpacking linux/amd64 sha256:6e5a02c21641597998b4be7cb5eb1e7b02c0d8d23cce4dd09f4682d463798890...
done: 684.151259ms
[root@master1 ~]#ctr -n k8s.io i ls -q
docker.io/coredns/coredns:1.8.4
registry.aliyuncs.com/k8sxio/etcd:3.5.0-0
registry.aliyuncs.com/k8sxio/etcd@sha256:9ce33ba33d8e738a5b85ed50b5080ac746deceed4a7496c550927a7a19ca3b6d
registry.aliyuncs.com/k8sxio/kube-apiserver:v1.22.2
registry.aliyuncs.com/k8sxio/kube-apiserver@sha256:eb4fae890583e8d4449c1e18b097aec5574c25c8f0323369a2df871ffa146f41
registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.22.2
registry.aliyuncs.com/k8sxio/kube-controller-manager@sha256:91ccb477199cdb4c63fb0c8fcc39517a186505daf4ed52229904e6f9d09fd6f9
registry.aliyuncs.com/k8sxio/kube-proxy:v1.22.2
registry.aliyuncs.com/k8sxio/kube-proxy@sha256:561d6cb95c32333db13ea847396167e903d97cf6e08dd937906c3dd0108580b7
registry.aliyuncs.com/k8sxio/kube-scheduler:v1.22.2
registry.aliyuncs.com/k8sxio/kube-scheduler@sha256:c76cb73debd5e37fe7ad42cea9a67e0bfdd51dd56be7b90bdc50dd1bc03c018b
registry.aliyuncs.com/k8sxio/pause:3.5
registry.aliyuncs.com/k8sxio/pause@sha256:1ff6c18fbef2045af6b9c16bf034cc421a29027b800e4f9b68ae9b1cb3e9ae07
sha256:0048118155842e4c91f0498dd298b8e93dc3aecc7052d9882b76f48e311a76ba
sha256:5425bcbd23c54270d9de028c09634f8e9a014e9351387160c133ccf3a53ab3dc
sha256:873127efbc8a791d06e85271d9a2ec4c5d58afdf612d490e24fb3ec68e891c8d
sha256:8d147537fb7d1ac8895da4d55a5e53621949981e2e6460976dae812f83d84a44
sha256:b51ddc1014b04295e85be898dac2cd4c053433bfe7e702d7e9d6008f3779609b
sha256:e64579b7d8862eff8418d27bf67011e348a5d926fa80494a6475b3dc959777f5
sha256:ed210e3e4a5bae1237f1bb44d72a05a2f1e5c6bfe7a7e73da179e2534269c459
[root@master1 ~]#
[root@master1 ~]#ctr -n k8s.io i tag docker.io/coredns/coredns:1.8.4 registry.aliyuncs.com/k8sxio/coredns:v1.8.4
registry.aliyuncs.com/k8sxio/coredns:v1.8.4
[root@master1 ~]#
```
Then we can initialize the master node with the configuration file above.

Pay special attention here: the first attempt will fail...

```shell
# Note: add --v 5 to print more detailed log output
kubeadm init --config kubeadm.yaml --v 5
```
```shell
[root@master1 ~]#kubeadm init --config kubeadm.yaml
W1031 07:14:21.837059   26278 strict.go:55] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: error converting YAML to JSON: yaml: unmarshal errors:
  line 27: key "cgroupDriver" already set in map
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 172.29.9.51]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [172.29.9.51 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [172.29.9.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.
        Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
                - 'crictl --runtime-endpoint /run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'crictl --runtime-endpoint /run/containerd/containerd.sock logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
[root@master1 ~]#
```
Let's dig further into the error logs:

```shell
[root@master1 ~]#systemctl status kubelet
[root@master1 ~]#journalctl -xeu kubelet
[root@master1 ~]#vim /var/log/messages
```

From /var/log/messages we can see the root cause: error="failed to get sandbox image \"k8s.gcr.io/pause:3.5\"".
Strange: the Aliyun pause image has already been pulled locally, so why is the kubelet still trying to pull the pause image from the default k8s.gcr.io registry?
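One extra check (added here, not part of the original troubleshooting) is to confirm where the sandbox image setting in containerd actually points:

```shell
# If this still shows k8s.gcr.io/pause:3.5, the sandbox_image override in
# /etc/containerd/config.toml did not take effect
grep sandbox_image /etc/containerd/config.toml
# Assumption: containerd must be restarted after editing its config
systemctl restart containerd
```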
Following the error message, let's try pulling the pause image directly from the official k8s registry to see what happens:

```shell
[root@master1 ~]#ctr -n k8s.io i pull k8s.gcr.io/pause:3.5
```

After several attempts, pulling the pause image from the official registry keeps failing, even with a proxy.

But we can simply re-tag the pause image we already pulled from the Aliyun registry. Let's test that:

```shell
[root@master1 ~]#ctr -n k8s.io i tag registry.aliyuncs.com/k8sxio/pause:3.5 k8s.gcr.io/pause:3.5
[root@master1 ~]#ctr -n k8s.io i ls -q
```
Now wipe the previous state on master1 with the `kubeadm reset` command and initialize the cluster once more to see the result:

```shell
[root@master1 ~]#kubeadm reset
```
```shell
[root@master1 ~]#kubeadm init --config kubeadm.yaml
W1031 07:56:49.681727   27288 strict.go:55] error unmarshaling configuration schema.GroupVersionKind{Group:"kubelet.config.k8s.io", Version:"v1beta1", Kind:"KubeletConfiguration"}: error converting YAML to JSON: yaml: unmarshal errors:
  line 27: key "cgroupDriver" already set in map
[init] Using Kubernetes version: v1.22.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master1] and IPs [10.96.0.1 172.29.9.51]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master1] and IPs [172.29.9.51 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master1] and IPs [172.29.9.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 210.014030 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.22" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https:

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.29.9.51:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:7fb11aea8a467bd1453efe10600c167b87a5f04d55d7e60298583a6a0c736ec4
[root@master1 ~]#
```
Note the port checks in the verbose log:

```shell
I1030 07:26:13.898398   18436 checks.go:205] validating availability of port 10250   # kubelet port
I1030 07:26:13.898547   18436 checks.go:205] validating availability of port 2379    # etcd port
I1030 07:26:13.898590   18436 checks.go:205] validating availability of port 2380    # etcd port
```

The master1 node has been initialized successfully.
Copy the kubeconfig file as instructed by the installation output:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
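Optionally (a small check added here, not in the original), confirm that kubectl can reach the API server before continuing:

```shell
kubectl cluster-info
kubectl get pods -n kube-system
```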
Then kubectl shows that the master node has been initialized successfully:

```shell
[root@master1 ~]#kubectl get node
NAME      STATUS   ROLES                  AGE    VERSION
master1   Ready    control-plane,master   114s   v1.22.2
[root@master1 ~]#
```
3.4 Add the worker nodes

Remember to complete the preparation steps above on the worker nodes in advance. Copy the $HOME/.kube/config file from the master node to the corresponding location on each node, install kubeadm, kubelet and kubectl (kubectl is optional), and then run the join command printed at the end of the initialization output:

```shell
kubeadm join 172.29.9.51:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:7fb11aea8a467bd1453efe10600c167b87a5f04d55d7e60298583a6a0c736ec4
```
Join command: if you lose the join command above, you can regenerate it with `kubeadm token create --print-join-command`.
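For example (the token and hash below are placeholders; the real command prints values specific to your cluster):

```shell
[root@master1 ~]#kubeadm token create --print-join-command
kubeadm join 172.29.9.51:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-cert-hash>
```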
After the join succeeds, run the get nodes command again:

```shell
[root@master1 ~]#kubectl get node
NAME      STATUS   ROLES                  AGE    VERSION
master1   Ready    control-plane,master   31m    v1.22.2
node1     Ready    <none>                 102s   v1.22.2
node2     Ready    <none>                 95s    v1.22.2
[root@master1 ~]#
```
3.5 Install the flannel network plugin

At this point the cluster still cannot be used properly, because no network plugin has been installed yet. Next, install the network plugin; refer to the documentation at https:

```yaml
# Search for the DaemonSet named kube-flannel-ds; under the kube-flannel container:
➜ ~ vi kube-flannel.yml
......
containers:
- name: kube-flannel
  image: quay.io/coreos/flannel:v0.15.0
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=eth0   # If the host has multiple NICs, specify the name of the internal NIC here
......
```
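The download link for kube-flannel.yml is truncated in this copy. As a rough sketch of the remaining steps (the manifest URL below is the commonly used upstream location and is an assumption; namespaces and labels differ between flannel versions):

```shell
# Download the manifest, edit it as shown above, then apply it
wget https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
vi kube-flannel.yml          # adjust the image / --iface as described above
kubectl apply -f kube-flannel.yml
# Watch the flannel pods come up; afterwards all nodes should report Ready
kubectl get pods -A | grep flannel
kubectl get node
```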