kubectl Usage
Table of Contents
[toc]
1. kubectl command auto-completion
```bash
# Enable kubectl auto-completion (requires the bash-completion package)
source <(kubectl completion bash)
```
🍀 1. Installation on CentOS
```bash
# Install the packages
yum install -y epel-release bash-completion
# Enable completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
# Persist it for new shells
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
```
🍀 2. Installation on Ubuntu
```bash
apt install bash-completion
source <(kubectl completion bash)
```
2. The kubeconfig file kubectl uses to connect to the k8s cluster
💘 Hands-on: testing the kubeconfig file kubectl uses to connect to the k8s cluster (verified) - 20211021
Lab environment
```bash
# Lab environment:
# 1. Win10, VMware Workstation VMs
# 2. k8s cluster: 3 CentOS 7.6.1810 VMs, 1 master node, 2 node nodes
#    k8s version: v1.21  CONTAINER-RUNTIME: docker
[root@k8s-master1 ~]# ll /etc/kubernetes/admin.conf
-rw------- 1 root root 5564 Oct 20 15:55 /etc/kubernetes/admin.conf
[root@k8s-master1 ~]# ll .kube/config
-rw------- 1 root root 5564 Oct 20 15:56 .kube/config
[root@k8s-master1 ~]#
```
kubectl was already installed on the node, but it cannot query cluster information:
```bash
[root@k8s-node1 ~]# kubectl get po
The connection to the server localhost:8080 was refused - did you specify the right host or port?
[root@k8s-node1 ~]#
```
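The `localhost:8080` error above appears because kubectl found no kubeconfig at all and fell back to its old insecure default address. kubectl picks its config in order: the `--kubeconfig` flag, then the `KUBECONFIG` environment variable, then `~/.kube/config`. A minimal sketch of that lookup order (the `which_kubeconfig` helper is made up for illustration, not a real kubectl command):

```shell
# Sketch of kubectl's kubeconfig lookup order (helper name is hypothetical).
which_kubeconfig() {
    if [ -n "$KUBECONFIG" ]; then
        # colon-separated list of files, merged by kubectl
        echo "$KUBECONFIG"
    elif [ -f "$HOME/.kube/config" ]; then
        echo "$HOME/.kube/config"
    else
        # nothing found - kubectl has no cluster address and the connection fails
        echo "none"
    fi
}
which_kubeconfig
```

On the node above, all three sources were empty, which is exactly why the command failed.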
Check whether the auth file exists in the corresponding directories on the node: => it does not!
```bash
[root@k8s-node1 ~]# ll /etc/kubernetes/
kubelet.conf  manifests/  pki/
[root@k8s-node1 ~]# ll .kube
```
The following commands are how, during cluster setup, you configure the kubeconfig file that kubectl uses to connect to k8s:
Copy the kubeconfig file kubectl uses to its default path:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Next, transfer this config file from the master to the node and configure it there:
```bash
# On the master node, scp the auth file
[root@k8s-master1 ~]# scp .kube/config root@172.29.9.43:/etc/kubernetes/config
config    100% 5564   1.7MB/s   00:00
[root@k8s-master1 ~]#

# On the node, configure it
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/config $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
Test whether the node can now access the k8s cluster: => it can. Test passed!
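The copy-and-chown steps used on both the master and the node can be wrapped in one small helper. This is just a convenience sketch of the steps above; the `install_kubeconfig` name is made up here, and `sudo` is omitted on the assumption of running as root, as in this lab:

```shell
# Convenience wrapper for the copy steps above (hypothetical helper name).
install_kubeconfig() {
    src="$1"    # e.g. /etc/kubernetes/config on the node, admin.conf on the master
    mkdir -p "$HOME/.kube"
    cp -i "$src" "$HOME/.kube/config"
    # make the file owned by the invoking user so kubectl can read it
    chown "$(id -u):$(id -g)" "$HOME/.kube/config"
}
```

On the node in this lab you would call `install_kubeconfig /etc/kubernetes/config` (prefixing cp/chown with sudo when not running as root).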
2. Lab summary
1. The kubeconfig file
kubectl uses a kubeconfig file to authenticate when connecting to the K8s cluster; the `kubectl config` subcommand can generate a kubeconfig file.
```yaml
# kubeconfig file for authenticating to K8s (credential values elided)
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data:
    server: https:
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
users:
- name: kubernetes-admin
  user:
    client-certificate-data:
    client-key-data:
```
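As a sketch, the skeleton above can be written out with a heredoc. The server address and credential fields below are placeholders for illustration, not values from this lab:

```shell
# Write a minimal kubeconfig of the shape shown above.
# Server address and credential data are placeholder values.
write_kubeconfig() {
cat > "$1" << 'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: ""
    server: https://127.0.0.1:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
users:
- name: kubernetes-admin
  user:
    client-certificate-data: ""
    client-key-data: ""
EOF
}
write_kubeconfig /tmp/demo-kubeconfig
```

A real admin.conf has the same structure, with the base64-encoded CA and client certificate data filled in by kubeadm.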
2. Method: copy the kubeconfig file kubectl uses to its default path
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
```bash
[root@k8s-master1 ~]# ll /etc/kubernetes/admin.conf
-rw------- 1 root root 5564 Oct 20 15:55 /etc/kubernetes/admin.conf
[root@k8s-master1 ~]# ll .kube/config
-rw------- 1 root root 5564 Oct 20 15:56 .kube/config
[root@k8s-master1 ~]#
```
- Note: you can also set an environment variable instead
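For example, instead of copying the file to the default path, the `KUBECONFIG` environment variable can point kubectl at it directly; it also accepts a colon-separated list of files that kubectl merges in order:

```shell
# Point kubectl at a specific auth file instead of ~/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
# KUBECONFIG can also list several files; kubectl merges them in order
export KUBECONFIG=$HOME/.kube/config:/etc/kubernetes/admin.conf
```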
3. Note: how the kubeconfig file is generated under different setup methods
1. Single-master cluster:
Running `kubeadm init` directly generates the admin.conf file automatically; once the command completes, just copy it:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
2. Highly available k8s cluster:
Generate the initialization config file:

```bash
cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 9037x2.tcaqnpaqkra9vsbw
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.29.9.41
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:  # Include every Master/LB/VIP IP - not one can be missing! Listing a few spare IPs makes later scale-out easier.
  - k8s-master1
  - k8s-master2
  - 172.29.9.41
  - 172.29.9.42
  - 172.29.9.88
  - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.29.9.88:16443  # load-balancer virtual IP (VIP) and port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:  # use external etcd
    endpoints:
    - https:
    - https:
    - https:
    caFile: /opt/etcd/ssl/ca.pem  # certificates needed to connect to etcd
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.aliyuncs.com/google_containers  # the default registry k8s.gcr.io is unreachable in China, so use the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: v1.20.0  # K8s version, matching what was installed
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16  # Pod network, must match the CNI component yaml deployed later
  serviceSubnet: 10.96.0.0/12  # cluster-internal virtual network, the unified Pod access entry
scheduler: {}
EOF
```

Bootstrap the cluster with the config file:

```bash
[root@k8s-master1 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https:
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.9. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master1 k8s-master2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.29.9.41 172.29.9.88 172.29.9.42 127.0.0.1]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.036041 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 9037x2.tcaqnpaqkra9vsbw
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https:

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.29.9.88:16443 --token 9037x2.tcaqnpaqkra9vsbw \
    --discovery-token-ca-cert-hash sha256:b83d62021daef2cd62c0c19ee0f45adf574c2eaf1de28f0e6caafdabdf95951d
```
4. Question: do node nodes need admin.conf?
Answer:
- If a node never runs the kubectl tool, it does not need admin.conf; all cluster maintenance can be done from the master node alone.
- Anyone holding admin.conf can reach the api-server - not only from node nodes, but from any machine with network connectivity.
- You only need admin.conf where you actually use kubectl; spreading it across many nodes is also a security concern.
- kubectl is simply the k8s cluster's client. It can live on a master node, on a node machine, or even outside the cluster entirely, as long as it has the address and credentials - much like a mysql client and a mysql server.
- kubectl needs credentials configured to communicate with the apiserver; cluster state itself is stored in etcd. The .kube/config file is that credential file: kubectl calls the apiserver with it, the apiserver in turn reads the data stored in etcd, and everything you see from `kubectl get` comes from etcd via the apiserver - essentially a query.
3. kubectl official documentation reference
Last updated: