Hands-on: Installing ingress-nginx via YAML (DaemonSet mode, tested successfully) v3-20230311
Table of Contents
[toc]
Lab Environment
Lab environment:

1. Windows 10 host running VMware Workstation VMs;
2. Kubernetes cluster: 3 CentOS 7.6.1810 VMs, 1 master node and 2 worker nodes; k8s version: v1.22.2, containerd: v1.5.5.

The same method has also been verified on k8s v1.25.4 with containerd v1.6.10.
Lab Software
2023.3.11 - Hands-on: installing ingress-nginx via YAML - 2023.3.11 (tested successfully)
A note before we start:

Compared with the default Deployment-based install, deploying as a DaemonSet requires the following two changes (both are already applied in the attached deploy.yaml):
01. Change the workload kind to DaemonSet
```yaml
406 apiVersion: apps/v1
407 kind: DaemonSet
```
02. Add a toleration (so the controller Pod can also be scheduled onto the master node)
```yaml
513       tolerations:
514       - operator: Exists
```
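The install itself is just applying the modified manifest; a minimal sketch, assuming the file is saved as deploy.yaml as in the attachment:

```bash
# Create the ingress-nginx namespace and all of its resources from the modified manifest
kubectl apply -f deploy.yaml
```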
- After the command above finishes, a namespace named ingress-nginx is created automatically, and the following Pods are generated:
```bash
[root@master1 ingress-nginx]#kubectl get pods -n ingress-nginx -owide
NAME                                      READY   STATUS      RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create--1-5h6rr   0/1     Completed   0          20m     10.244.1.25   node1     <none>           <none>
ingress-nginx-admission-patch--1-jdn2k    0/1     Completed   0          20m     10.244.2.18   node2     <none>           <none>
ingress-nginx-controller-46kbb            1/1     Running     0          7m58s   10.244.2.20   node2     <none>           <none>
ingress-nginx-controller-xtbn4            1/1     Running     0          10m     10.244.0.2    master1   <none>           <none>
ingress-nginx-controller-zxffk            1/1     Running     0          8m20s   10.244.1.27   node1     <none>           <none>
```
- In addition, the following two Service objects are created:
```bash
[root@master1 ingress-nginx]#kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.108.58.246   <pending>     80:32439/TCP,443:31347/TCP   20m
ingress-nginx-controller-admission   ClusterIP      10.101.184.28   <none>        443/TCP                      20m
```
Of these, ingress-nginx-controller-admission serves the admission webhook. We strongly recommend keeping this admission controller enabled: with it, any Ingress object that does not meet the requirements is rejected outright at creation time. The other Service, ingress-nginx-controller, is how the ingress controller is exposed to the outside. By default it is a LoadBalancer-type Service, which is intended for cloud providers; in our local environment it cannot be used as-is, but we can still reach the controller through its NodePort. Later on we will cover a way to provide LoadBalancer support in a local test environment.
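Since the LoadBalancer external IP stays pending locally, what we actually need is the NodePort. A minimal sketch of looking it up with a jsonpath query, assuming the HTTP port is named http as in the upstream deploy.yaml:

```bash
# Print the NodePort that maps to port 80 of the controller Service
kubectl get svc ingress-nginx-controller -n ingress-nginx \
  -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'
# In the output above this corresponds to 32439, so the controller is reachable at http://<node-ip>:32439
```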
- At this point ingress-nginx has been deployed successfully. The installation also creates an IngressClass object named nginx:
```bash
[root@master1 ~]# kubectl get ingressclass
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       3m43s

[root@master1 ~]#kubectl get ingressclass nginx -o yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"networking.k8s.io/v1","kind":"IngressClass","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.5.1"},"name":"nginx"},"spec":{"controller":"k8s.io/ingress-nginx"}}
  creationTimestamp: "2023-03-01T14:49:35Z"
  generation: 1
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.5.1
  name: nginx
  resourceVersion: "20342"
  uid: 7b4ad44f-1eff-405b-9da4-821808529177
spec:
  controller: k8s.io/ingress-nginx
[root@master1 ~]#
```
Only a single controller attribute is provided here, and its value matches the controller-class startup argument of ingress-nginx:
```bash
[root@master1 ~]#cat deploy.yaml
431         spec:
432           containers:
433             - args:
434                 - /nginx-ingress-controller
435                 - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
436                 - --election-id=ingress-nginx-leader
437                 - --controller-class=k8s.io/ingress-nginx
438                 - --ingress-class=nginx
439                 - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
440                 - --validating-webhook=:8443
441                 - --validating-webhook-certificate=/usr/local/certificates/cert
442                 - --validating-webhook-key=/usr/local/certificates/key
```
2. First example
- Let's first look at which nodes the ingress-controller Pods are running on:
```bash
[root@master1 ~]#vim deploy.yaml
406 apiVersion: apps/v1
407 kind: DaemonSet
......
509       nodeSelector:
510         kubernetes.io/os: linux

[root@master1 ingress-nginx]#kubectl get pods -n ingress-nginx -owide
NAME                                      READY   STATUS      RESTARTS   AGE     IP            NODE      NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create--1-5h6rr   0/1     Completed   0          20m     10.244.1.25   node1     <none>           <none>
ingress-nginx-admission-patch--1-jdn2k    0/1     Completed   0          20m     10.244.2.18   node2     <none>           <none>
ingress-nginx-controller-46kbb            1/1     Running     0          7m58s   10.244.2.20   node2     <none>           <none>
ingress-nginx-controller-xtbn4            1/1     Running     0          10m     10.244.0.2    master1   <none>           <none>
ingress-nginx-controller-zxffk            1/1     Running     0          8m20s   10.244.1.27   node1     <none>           <none>
```
- With the installation successful, let's now create an Ingress resource for an nginx application, as shown below:
```yaml
# my-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  ports:
    - port: 80
      protocol: TCP
      name: http
  selector:
    app: my-nginx
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-nginx
  namespace: default
spec:
  ingressClassName: nginx  # use the nginx IngressClass (associated with the ingress-nginx controller)
  rules:
    - host: first-ingress.172.29.9.52.nip.io  # map this domain name to the my-nginx Service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:  # send all requests to port 80 of the my-nginx Service
                name: my-nginx
                port:
                  number: 80
```
Note, however, that most Ingress controllers do not forward traffic to the Service itself. They only use the Service to discover the list of backend Endpoints (so the Service here acts purely as a service-discovery mechanism) and then forward directly to the Pods. This cuts out a network hop and improves performance.
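To see the Pod addresses the controller will actually forward to, you can inspect the Endpoints object behind the Service; a quick check sketch (the addresses listed will of course depend on your own Pod IPs):

```bash
# The controller watches this Endpoints list and proxies straight to the Pod IPs
kubectl get endpoints my-nginx
kubectl describe endpoints my-nginx
```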
⚠️ Note:
Note that the domain name configured here is first-ingress.172.29.9.52.nip.io, which maps directly to 172.29.9.52, the IP address of my node. Because our ingress controller is exposed via NodePort, the service is reached as domain:nodePort. nip.io is an open-source service powered by PowerDNS that maps any IP address to a hostname simply by embedding the IP in the name in this format, so there is no need to add entries to the /etc/hosts file, which makes Ingress testing very convenient.
Note: nip.io requires no extra service to be installed; just configure things as shown in the commands below.
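To convince yourself that nip.io really resolves the embedded IP without any local configuration, a quick DNS lookup is enough; a sketch, assuming nslookup is available on the node:

```bash
# The hostname should resolve to the IP embedded in it, i.e. 172.29.9.52
nslookup first-ingress.172.29.9.52.nip.io
```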
- Now simply create the resource objects above:
```bash
[root@master1 ~]#kubectl apply -f my-nginx.yaml
deployment.apps/my-nginx created
service/my-nginx created
ingress.networking.k8s.io/my-nginx created

[root@master1 ~]#kubectl get ingress
NAME       CLASS   HOSTS                              ADDRESS   PORTS   AGE
my-nginx   nginx   first-ingress.172.29.9.52.nip.io             80      27m

[root@master1 ~]#kubectl get svc -n ingress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.96.228.157   <pending>     80:30933/TCP,443:31697/TCP   7h51m
ingress-nginx-controller-admission   ClusterIP      10.105.93.22    <none>        443/TCP                      7h51m
```
In the Ingress resource object above, ingressClassName: nginx specifies that the ingress-nginx controller we installed should handle this Ingress. The path / is matched with the Prefix path type, so all requests for the domain first-ingress.172.29.9.52.nip.io are forwarded to the backend Endpoints of the my-nginx Service. Note that when accessing the service you must include the NodePort of the ingress-nginx Service.
- Test:
```bash
[root@master1 ~]#curl first-ingress.172.29.9.52.nip.io
curl: (7) Failed connect to first-ingress.172.29.9.52.nip.io:80; Connection refused

# Note: the ingress-nginx Service is a LoadBalancer type by default, which is intended for cloud
# providers and cannot be used in this local environment, but the controller can still be reached
# through its NodePort.
[root@master1 ~]#curl first-ingress.172.29.9.52.nip.io:30933
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master1 ~]#
```
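If nip.io DNS resolution is ever blocked in your environment, the same host-based routing can be verified by hitting the node IP on the NodePort directly and setting the Host header by hand; a sketch using the node IP and NodePort from this lab:

```bash
# Bypass nip.io DNS: talk to the node directly and supply the Ingress host header ourselves
curl -H "Host: first-ingress.172.29.9.52.nip.io" http://172.29.9.52:30933
```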
- As mentioned earlier, the core principle of the ingress-nginx controller is to translate our Ingress resource objects into the Nginx configuration file nginx.conf. We can verify this by inspecting the configuration file inside the controller:
```bash
[root@master1 ~]#kubectl exec -it ingress-nginx-controller-c66bc7c5c-pj2h8 -n ingress-nginx -- cat /etc/nginx/nginx.conf
......
    upstream upstream_balancer {
        ### Attention!!!
        #
        # We no longer create "upstream" section for every backend.
        # Backends are handled dynamically using Lua. If you would like to debug
        # and see what backends ingress-nginx has in its memory you can
        # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin
        # Once you have the plugin you can use "kubectl ingress-nginx backends" command to
        # inspect current backends.
        #
        ###

        server 0.0.0.1; # placeholder

        balancer_by_lua_block {
            balancer.balance()
        }

        keepalive 320;
        keepalive_time 1h;
        keepalive_timeout 60s;
        keepalive_requests 10000;
    }
......
    ## start server first-ingress.172.29.9.52.nip.io
    server {
        server_name first-ingress.172.29.9.52.nip.io ;

        listen 80 ;
        listen [::]:80 ;
        listen 443 ssl http2 ;
        listen [::]:443 ssl http2 ;

        set $proxy_upstream_name "-";

        ssl_certificate_by_lua_block {
            certificate.call()
        }

        location / {

            set $namespace      "default";
            set $ingress_name   "my-nginx";
            set $service_name   "my-nginx";
            set $service_port   "80";
            set $location_path  "/";
            set $global_rate_limit_exceeding n;

            rewrite_by_lua_block {
                lua_ingress.rewrite({
                    force_ssl_redirect = false,
                    ssl_redirect = true,
                    force_no_ssl_redirect = false,
                    preserve_trailing_slash = false,
                    use_port_in_redirects = false,
                    global_throttle = { namespace = "", limit = 0, window_size = 0, key = { }, ignored_cidrs = { } },
                })
                balancer.rewrite()
                plugins.run()
            }

            # be careful with `access_by_lua_block` and `satisfy any` directives as satisfy any
            # will always succeed when there's `access_by_lua_block` that does not have any lua code doing `ngx.exit(ngx.DECLINED)`
            # other authentication method such as basic auth or external auth useless - all requests will be allowed.
            #access_by_lua_block {
            #}

            header_filter_by_lua_block {
                lua_ingress.header()
                plugins.run()
            }

            body_filter_by_lua_block {
                plugins.run()
            }

            log_by_lua_block {
                balancer.log()
                monitor.call()
                plugins.run()
            }

            port_in_redirect off;

            set $balancer_ewma_score -1;
            set $proxy_upstream_name "default-my-nginx-80";
            set $proxy_host          $proxy_upstream_name;
            set $pass_access_scheme  $scheme;

            set $pass_server_port    $server_port;

            set $best_http_host      $http_host;
            set $pass_port           $pass_server_port;

            set $proxy_alternative_upstream_name "";

            client_max_body_size                    1m;

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Request-ID           $req_id;
            proxy_set_header X-Real-IP              $remote_addr;
            proxy_set_header X-Forwarded-For        $remote_addr;
            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Forwarded-Scheme     $pass_access_scheme;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_buffering                         off;
            proxy_buffer_size                       4k;
            proxy_buffers                           4 4k;

            proxy_max_temp_file_size                1024m;

            proxy_request_buffering                 on;
            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout;
            proxy_next_upstream_timeout             0;
            proxy_next_upstream_tries               3;

            proxy_pass http://upstream_balancer;

            proxy_redirect                          off;

        }

    }
    ## end server first-ingress.172.29.9.52.nip.io
......
```
In nginx.conf we can see the configuration generated for the Ingress resource we just added. Note, though, that an upstream block is no longer created for every backend: backends are now handled dynamically by Lua, which is why the backend Endpoints data does not appear directly in the configuration.
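As the comments in nginx.conf suggest, the dynamic backend list kept in Lua memory can be inspected with the official kubectl ingress-nginx plugin; a sketch, assuming the plugin is installed (for example via krew):

```bash
# Install the plugin once (krew assumed to be set up), then dump the in-memory backends
kubectl krew install ingress-nginx
kubectl ingress-nginx backends -n ingress-nginx
```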
About Me
The aims of my blog:
- Clean layout and concise writing;
- Documents that work as manuals: detailed steps, no hidden pitfalls, source code provided;
- All of my hands-on documents have been personally tested. If you run into any questions while following along, feel free to contact me at any time and I will help you solve them, so we can improve together!
🍀 WeChat QR code: x2675263825 (舍得); QQ: 2675263825.
🍀 WeChat official account: 《云原生架构师实战》
🍀 Yuque (语雀)