Ingress-nginx

Table of Contents

[toc]

Original article

https://onedayxyy.cn/docs/ingress-nginx

Recommended reading

My open-source projects:

https://onedayxyy.cn/docs/MyOpenSourceProject

Hands-on labs in this section

Lab name
💘 Lab: Configure Basic Auth on an Ingress object - 2023.3.12 (verified)
💘 Lab: Use external Basic Auth credentials - 2023.3.12 (verified)
💘 Lab: URL rewriting with ingress-nginx - 2023.3.13 (verified)
💘 Lab: Canary releases with ingress-nginx - 2023.3.14 (verified)
💘 Lab: Serving our application over HTTPS with ingress-nginx (openssl) - 2022.11.27 (verified)
💘 Lab: Serving our application over HTTPS with ingress-nginx (cfssl) - 2023.1.2 (verified)
💘 Lab: TCP services with ingress-nginx - 2023.3.15 (verified)
💘 Lab: Global configuration for ingress-nginx - 2023.3.15 (verified)

Introduction

We have seen that an Ingress resource object is only a declarative description of routing rules; to actually take effect it needs a corresponding Ingress controller. There are many Ingress controllers available. Here we start with the most widely used one, ingress-nginx, an Ingress controller built on Nginx.

The main job of the ingress-nginx controller is to assemble an nginx.conf configuration file. Whenever that file changes, Nginx must be reloaded for the change to take effect. However, the controller avoids reloading for changes that only affect the upstream configuration (the endpoint lists); internally it uses lua-nginx-module to apply those changes without a reload.

As we know, Kubernetes controllers use a control-loop pattern to check whether the desired state has been updated or needs to change. ingress-nginx therefore builds an internal model from various cluster objects that can influence the generated configuration — Ingresses, Services, Endpoints, Secrets, ConfigMaps, and so on — and keeps watching these resources for changes. Since there is no way to know in advance whether a particular change will affect the final nginx.conf, whenever any change is observed the controller must rebuild a new model from the cluster state and compare it with the current one. If the two models are identical, generating a new Nginx configuration and triggering a reload can be skipped. Otherwise the controller checks whether the difference concerns endpoints only; if so, it sends the new endpoint list via an HTTP POST request to a Lua handler running inside Nginx, again avoiding a new Nginx configuration and a reload. If the difference between the running model and the new model is more than just endpoints, a new Nginx configuration is generated from the new model. The big advantage of building a model this way is that unnecessary reloads are avoided when nothing has changed, saving a large number of Nginx reloads.

The following scenarios do require a reload:

  • A new Ingress resource is created
  • TLS is added to an existing Ingress
  • A path is added to or removed from an Ingress
  • An Ingress, Service, or Secret is deleted
  • A previously missing object referenced by an Ingress becomes available, e.g. a Service or Secret
  • A Secret is updated

In larger clusters, frequently reloading Nginx causes significant performance overhead, so reload scenarios should be kept to a minimum.
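The model-comparison logic described above can be summarized in a toy sketch. This is illustrative Python, not the controller's actual Go/Lua implementation, and the model dictionary and its key names are hypothetical:

```python
# Toy sketch (not ingress-nginx source) of the reload-avoidance decision:
# compare the old and new models and decide what the controller should do.
def sync_action(old_model: dict, new_model: dict) -> str:
    """Return 'noop', 'update-endpoints', or 'reload'."""
    if old_model == new_model:
        # Nothing changed: skip config generation and the reload entirely.
        return "noop"
    # Does the model differ only in the (hypothetical) 'endpoints' key?
    old_rest = {k: v for k, v in old_model.items() if k != "endpoints"}
    new_rest = {k: v for k, v in new_model.items() if k != "endpoints"}
    if old_rest == new_rest:
        # Only upstream endpoints changed: POST the new endpoint list to the
        # Lua handler running inside Nginx; no reload needed.
        return "update-endpoints"
    # Anything else (new Ingress, TLS, added/removed paths, ...) forces a reload.
    return "reload"

model = {"servers": ["app.example.com"], "endpoints": ["10.0.0.1:8080"]}
print(sync_action(model, dict(model)))                                 # noop
print(sync_action(model, {**model, "endpoints": ["10.0.0.2:8080"]}))   # update-endpoints
print(sync_action(model, {**model, "servers": ["other.example.com"]})) # reload
```

The point of the sketch is the ordering: the cheap full-model equality check comes first, the endpoints-only check second, and a full reload is the last resort.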

Now that we have ingress-nginx installed and its service exposed through a LoadBalancer, let's look at how to configure it in practice. There are several ways to customize it: setting global options for Nginx via a ConfigMap, setting rules for a specific Ingress via its annotations, or using a custom template. Below we focus on customizing Ingress objects with annotations.

[root@master1 ~]#kubectl get svc ingress-nginx-controller -n ingress-nginx
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.97.111.207   172.29.9.61   80:30970/TCP,443:32364/TCP   23h

[root@master1 ~]#kubectl get cm -ningress-nginx
NAME                       DATA   AGE
ingress-nginx-controller   1      23h
kube-root-ca.crt           1      23h
[root@master1 ~]#kubectl get cm ingress-nginx-controller -ningress-nginx -oyaml
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"allow-snippet-annotations":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.5.1"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"}}    
  creationTimestamp: "2023-03-06T22:59:36Z"
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.5.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
  resourceVersion: "177720"
  uid: 7692a932-ce9a-40d0-8df2-988e4eb0aa31
[root@master1 ~]#

1、Basic Auth

1. Configure Basic Auth on the Ingress object

==💘 Lab: Configure Basic Auth on an Ingress object - 2023.3.12 (verified)==

  • Lab environment
Environment:
1. Windows 10, VMware Workstation VMs
2. Kubernetes cluster: 3 CentOS 7.6 (1810) VMs, 1 master node and 2 worker nodes
   k8s version: v1.22.2
   containerd: v1.5.5
  • Lab files

Link: https://pan.baidu.com/s/15d39N-oQEsgcLlqZOCb__w?pwd=pshn  Extraction code: pshn
2023.3.12 - Lab: Configure Basic Auth on an Ingress object - 2023.3.12 (verified)

  • Prerequisites

An ingress-nginx environment is already installed (with the ingress-nginx Service exposed as type LoadBalancer).

MetalLB is already deployed. (MetalLB is optional — you could also access the Ingress via domain:NodePort — but for convenience we access it through the load balancer here.)

For the ingress-nginx deployment, see:

https://blog.csdn.net/weixin_39246554/article/details/129334116?spm=1001.2014.3001.5501

For the MetalLB deployment, see:

https://blog.csdn.net/weixin_39246554/article/details/129343617?spm=1001.2014.3001.5501

⚠️ Note: ingress-nginx is deployed as a DaemonSet, so once MetalLB is deployed the Ingress is reachable from all 3 nodes.

  • Create an application that we'll use for testing later
#nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx

Deploy and check:

[root@master1 ingress-nginx]#kubectl apply -f nginx.yaml 
deployment.apps/nginx created
service/nginx created

[root@master1 ingress-nginx]#kubectl get po,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7848d4b86f-ftznq   1/1     Running   0          26s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   18d
service/nginx        ClusterIP   10.98.22.153   <none>        80/TCP    26s
  • We can configure some basic authentication on the Ingress object, for example Basic Auth. Use htpasswd to generate a password file for the authentication.
[root@master1 ingress-nginx]#yum install -y httpd-tools # install httpd-tools first; the htpasswd command comes from this package

[root@master1 ingress-nginx]#htpasswd -c auth foo # the password used here is foo321
New password: 
Re-type new password:
Adding password for user foo
[root@master1 ingress-nginx]#ll 
total 8
-rw-r--r-- 1 root root  42 Mar 12 21:47 auth
-rw-r--r-- 1 root root 441 Mar  8 06:29 nginx.yaml
  • Then create a Secret object from the auth file above:
[root@master1 ingress-nginx]# kubectl create secret generic basic-auth --from-file=auth
secret/basic-auth created
[root@master1 ingress-nginx]# kubectl get secret basic-auth -o yaml
apiVersion: v1
data:
  auth: Zm9vOiRhcHIxJE9reFhCMTV3JGNZR1NMYnpBWDhTNklkNHo3WTRlWi8K
kind: Secret
metadata:
  creationTimestamp: "2023-03-12T13:48:52Z"
  name: basic-auth
  namespace: default
  resourceVersion: "242528"
  uid: b9c37dd7-3fac-43af-b5bc-3388114d9cc0
type: Opaque
  • Then create an Ingress object with Basic Auth for the nginx application we deployed above:
# ingress-basic-auth.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-with-auth
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic # authentication type
    nginx.ingress.kubernetes.io/auth-secret: basic-auth # name of the Secret containing the user/password definition
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo" # message displayed with appropriate context, explaining why authentication is required
spec:
  ingressClassName: nginx  # use the nginx IngressClass (handled by the ingress-nginx controller)
  rules:
  - host: auth.172.29.9.60.nip.io  # map the domain to the nginx service; note the IP here is the EXTERNAL-IP of the ingress-controller Service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:  # send all requests to port 80 of the nginx service
            name: nginx
            port:
              number: 80
  • Create the resource object:
[root@master1 ingress-nginx]#kubectl apply -f ingress-basic-auth.yaml 
ingress.networking.k8s.io/ingress-with-auth created
[root@master1 ingress-nginx]#kubectl get ingress
NAME                CLASS   HOSTS                     ADDRESS       PORTS   AGE
ingress-with-auth   nginx   auth.172.29.9.60.nip.io   172.29.9.60   80      21s
  • Then test with the command below, or open the configured domain directly in a browser:
[root@master1 ingress-nginx]#curl -v http://auth.172.29.9.60.nip.io
* About to connect() to auth.172.29.9.60.nip.io port 80 (#0)
*   Trying 172.29.9.60...
* Connected to auth.172.29.9.60.nip.io (172.29.9.60) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: auth.172.29.9.60.nip.io
> Accept: */*
>
< HTTP/1.1 401 Unauthorized
< Date: Sun, 12 Mar 2023 14:22:31 GMT
< Content-Type: text/html
< Content-Length: 172
< Connection: keep-alive
< WWW-Authenticate: Basic realm="Authentication Required - foo"
<
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host auth.172.29.9.60.nip.io left intact

We get a 401 authentication error.

  • Now authenticate with the username and password we configured:
[root@master1 ingress-nginx]#curl -v http://auth.172.29.9.60.nip.io -u 'foo:foo321'
* About to connect() to auth.172.29.9.60.nip.io port 80 (#0)
*   Trying 172.29.9.60...
* Connected to auth.172.29.9.60.nip.io (172.29.9.60) port 80 (#0)
* Server auth using Basic with user 'foo'
> GET / HTTP/1.1
> Authorization: Basic Zm9vOmZvbzMyMQ==
> User-Agent: curl/7.29.0
> Host: auth.172.29.9.60.nip.io
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 12 Mar 2023 14:23:50 GMT
< Content-Type: text/html
< Content-Length: 615
< Connection: keep-alive
< Last-Modified: Tue, 28 Dec 2021 15:28:38 GMT
< ETag: "61cb2d26-267"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host auth.172.29.9.60.nip.io left intact

The authentication succeeds.
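The Authorization header curl sent in the request above is nothing more than the base64 encoding of `user:password`. A quick standard-library sketch reproduces the exact token seen in the trace:

```python
import base64

# Basic Auth: the client sends "Authorization: Basic base64(user:password)".
user, password = "foo", "foo321"
token = base64.b64encode(f"{user}:{password}".encode()).decode()
header = f"Authorization: Basic {token}"
print(header)  # Authorization: Basic Zm9vOmZvbzMyMQ==
```

This also shows why Basic Auth should only be used over HTTPS in production: base64 is an encoding, not encryption, and anyone observing the traffic can recover the password.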

Browser test: works as expected.

⚠️ Note: the meaning of the nginx.ingress.kubernetes.io/auth-realm: "Authentication Required - foo" annotation

Take entering a wrong password as an example: the realm string is what the browser displays in its authentication prompt.

Test complete. 😘

2. Use external Basic Auth credentials

==💘 Lab: Use external Basic Auth credentials - 2023.3.12 (verified)==

  • Lab environment
Environment:
1. Windows 10, VMware Workstation VMs
2. Kubernetes cluster: 3 CentOS 7.6 (1810) VMs, 1 master node and 2 worker nodes
   k8s version: v1.22.2
   containerd: v1.5.5
  • Lab files

Link: https://pan.baidu.com/s/12vts7vEP-eYA5cGIHdOyPw?pwd=sgbk  Extraction code: sgbk
2023.3.12 - Lab: Use external Basic Auth credentials - 2023.3.12 (verified)

  • Prerequisites

An ingress-nginx environment is already installed (with the ingress-nginx Service exposed as type LoadBalancer).

MetalLB is already deployed. (MetalLB is optional — you could also access the Ingress via domain:NodePort — but for convenience we access it through the load balancer here.)

For the ingress-nginx deployment, see:

https://blog.csdn.net/weixin_39246554/article/details/129334116?spm=1001.2014.3001.5501

For the MetalLB deployment, see:

https://blog.csdn.net/weixin_39246554/article/details/129343617?spm=1001.2014.3001.5501

⚠️ Note: ingress-nginx is deployed as a DaemonSet, so once MetalLB is deployed the Ingress is reachable from all 3 nodes.

  • Create an application that we'll use for testing later
#nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx

Deploy and check:

[root@master1 ingress-nginx]#kubectl apply -f nginx.yaml 
deployment.apps/nginx created
service/nginx created

[root@master1 ingress-nginx]#kubectl get po,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7848d4b86f-ftznq   1/1     Running   0          26s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   18d
service/nginx        ClusterIP   10.98.22.153   <none>        80/TCP    26s
  • Besides Auth credentials created locally in our own cluster, we can also use external Basic Auth credentials, for example the external Basic Auth endpoint of https://httpbin.org. Create the following Ingress resource object:
# ingress-basic-auth-external.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: external-auth
  namespace: default
  annotations:
    # configure the external authentication service URL
    nginx.ingress.kubernetes.io/auth-url: https://httpbin.org/basic-auth/user/passwd
spec:
  ingressClassName: nginx  # use the nginx IngressClass (handled by the ingress-nginx controller)
  rules:
  - host:  external-auth.172.29.9.60.nip.io  # map the domain to the nginx service; note the IP here is the EXTERNAL-IP of the ingress-controller Service
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:  # send all requests to port 80 of the nginx service
            name: nginx
            port:
              number: 80
  • Deploy:
[root@master1 ingress-nginx]#kubectl apply  -f ingress-basic-auth-external.yaml 
ingress.networking.k8s.io/external-auth created

[root@master1 ingress-nginx]#kubectl get ingress
NAME                CLASS   HOSTS                              ADDRESS       PORTS   AGE
external-auth       nginx   external-auth.172.29.9.60.nip.io   172.29.9.60   80      57s
  • Test with the command below, or open the configured domain directly in a browser:
[root@master1 ingress-nginx]#curl -k  http://external-auth.172.29.9.60.nip.io
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>

We get a 401 authentication error.

  • Now authenticate with the configured username and password:
[root@master1 ingress-nginx]#curl -k  http://external-auth.172.29.9.60.nip.io -u 'user:passwd'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The authentication succeeds.

Browser test: works as expected.

Test complete. 😘

Of course, besides the simple Basic Auth scheme, ingress-nginx also supports more advanced authentication; for example, we can use GitHub OAuth to authenticate access to the Kubernetes Dashboard.

2、URL Rewrite

This is probably the most commonly used feature in day-to-day work.

Many advanced features of ingress-nginx are configured through annotations on the Ingress object, and URL rewriting is a common one. We often use ingress-nginx as a gateway, for example putting an /app prefix in front of a service. In plain nginx configuration we know the proxy_pass directive can do this:

location /app/ {
  proxy_pass http://127.0.0.1/remote/;
}

We may need prefixes like /app, /gateway, or /api. Especially with microservices, we often aggregate the endpoints of many services under a sub-path such as /api or /api/v1, and this can be implemented directly at the gateway layer, i.e. in the Ingress.

Because /remote is appended to proxy_pass, the matched /app prefix in the request path is replaced with /remote — effectively stripping /app from the path. How do we achieve the same thing with ingress-nginx in Kubernetes? We can use the rewrite-target annotation. For example, suppose we want to reach the Nginx service through rewrite.172.29.9.60.nip.io/gateway/; we then need to rewrite the requested URL, adding a gateway prefix to the PATH.

The rewrite operation is also described in the official ingress-nginx documentation:

==💘 Lab: URL rewriting with ingress-nginx - 2023.3.13 (verified)==

  • Lab environment
Environment:
1. Windows 10, VMware Workstation VMs
2. Kubernetes cluster: 3 CentOS 7.6 (1810) VMs, 1 master node and 2 worker nodes
   k8s version: v1.22.2
   containerd: v1.5.5
  • Lab files

Link: https://pan.baidu.com/s/1Dj1Qcmjpri6-IfUri5F5nQ?pwd=vpiu  Extraction code: vpiu
2023.3.13 - Lab: URL rewriting with ingress-nginx - 2023.3.13 (verified)

  • Prerequisites

An ingress-nginx environment is already installed (with the ingress-nginx Service exposed as type LoadBalancer).

MetalLB is already deployed. (MetalLB is optional — you could also access the Ingress via domain:NodePort — but for convenience we access it through the load balancer here.)

For the ingress-nginx deployment, see:

https://blog.csdn.net/weixin_39246554/article/details/129334116?spm=1001.2014.3001.5501

For the MetalLB deployment, see:

https://blog.csdn.net/weixin_39246554/article/details/129343617?spm=1001.2014.3001.5501

⚠️ Note: ingress-nginx is deployed as a DaemonSet, so once MetalLB is deployed the Ingress is reachable from all 3 nodes.

  • Create an application that we'll use for testing later
#nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: nginx

Deploy and check:

[root@master1 ingress-nginx]#kubectl apply -f nginx.yaml 
deployment.apps/nginx created
service/nginx created

[root@master1 ingress-nginx]#kubectl get po,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-7848d4b86f-ftznq   1/1     Running   0          26s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   18d
service/nginx        ClusterIP   10.98.22.153   <none>        80/TCP    26s

The three sub-tests below all build on this setup. 😘

1.rewrite-target

  • First, test the naive approach:
# ingress-nginx-url-rewrite.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-url-rewrite
  namespace: default
spec:
  ingressClassName: nginx  # use the nginx IngressClass (handled by the ingress-nginx controller)
  rules:
  - host: rewrite.172.29.9.60.nip.io  # map the domain to the nginx service; the IP is the EXTERNAL-IP of the ingress-controller Service; rewrite.172.29.9.60.nip.io/gateway --> nginx
    http:
      paths:
      - path: /gateway
        pathType: Prefix
        backend:
          service:  # send all requests to port 80 of the nginx service
            name: nginx
            port:
              number: 80
  • Deploy and test:
[root@master1 url-rewirte]#kubectl apply -f ingress-nginx-url-rewrite.yaml
ingress.networking.k8s.io/ingress-nginx-url-rewrite created
[root@master1 url-rewirte]#kubectl get ingress
NAME                        CLASS   HOSTS                              ADDRESS       PORTS   AGE
ingress-nginx-url-rewrite   nginx   rewrite.172.29.9.60.nip.io         172.29.9.60   80      19s

#test
[root@master1 url-rewirte]#curl -v rewrite.172.29.9.60.nip.io
* About to connect() to rewrite.172.29.9.60.nip.io port 80 (#0)
*   Trying 172.29.9.60...
* Connected to rewrite.172.29.9.60.nip.io (172.29.9.60) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: rewrite.172.29.9.60.nip.io
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Sun, 12 Mar 2023 22:16:34 GMT
< Content-Type: text/html
< Content-Length: 146
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host rewrite.172.29.9.60.nip.io left intact
[root@master1 url-rewirte]#curl -v rewrite.172.29.9.60.nip.io/gateway
* About to connect() to rewrite.172.29.9.60.nip.io port 80 (#0)
*   Trying 172.29.9.60...
* Connected to rewrite.172.29.9.60.nip.io (172.29.9.60) port 80 (#0)
> GET /gateway HTTP/1.1
> User-Agent: curl/7.29.0
> Host: rewrite.172.29.9.60.nip.io
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Date: Sun, 12 Mar 2023 22:16:48 GMT
< Content-Type: text/html
< Content-Length: 153
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.21.5</center>
</body>
</html>
* Connection #0 to host rewrite.172.29.9.60.nip.io left intact

As you can see, neither URL is reachable.

  • Now revise the YAML:
# ingress-nginx-url-rewrite.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-url-rewrite
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx  # use the nginx IngressClass (handled by the ingress-nginx controller)
  rules:
  - host: rewrite.172.29.9.60.nip.io  # map the domain to the nginx service; the IP is the EXTERNAL-IP of the ingress-controller Service; rewrite.172.29.9.60.nip.io/gateway --> nginx
  # this covers the following cases:
  # rewrite.172.29.9.60.nip.io/gateway/  rewrite.172.29.9.60.nip.io/gateway  rewrite.172.29.9.60.nip.io/gateway/xxx
    http:
      paths:
      - path: /gateway(/|$)(.*)
        pathType: Prefix
        backend:
          service:  # send all requests to port 80 of the nginx service
            name: nginx
            port:
              number: 80

Pay close attention to the regular expression here.

It covers these 3 cases:

rewrite.172.29.9.60.nip.io/gateway/  rewrite.172.29.9.60.nip.io/gateway  rewrite.172.29.9.60.nip.io/gateway/xxx

($ matches the end of the path.)

  • After updating, as expected, the bare domain is no longer reachable, because there is no path rule matching /:
[root@master1 url-rewirte]#curl  rewrite.172.29.9.60.nip.io
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
  • But with the gateway prefix, the request works again:
[root@master1 url-rewirte]#curl  rewrite.172.29.9.60.nip.io/gateway
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master1 url-rewirte]#curl  rewrite.172.29.9.60.nip.io/gateway/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master1 url-rewirte]#

The service is now reachable. This works because the regular expression /gateway(/|$)(.*) in path captures the part of the URL that rewrite-target rewrites to, so accessing rewrite.172.29.9.60.nip.io/gateway is effectively accessing the / path on the backend service.
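The interaction between the path regex and rewrite-target can be checked offline with plain Python re (an approximation of nginx's PCRE matching, not nginx itself):

```python
import re

# The Ingress path:     /gateway(/|$)(.*)
# The rewrite-target:   /$2  (i.e. "/" + second capture group)
pattern = re.compile(r"/gateway(/|$)(.*)")

def rewrite(path):
    m = pattern.match(path)
    if not m:
        return None          # no rule matched: ingress-nginx answers 404
    return "/" + m.group(2)  # what the backend actually receives

print(rewrite("/gateway"))      # /
print(rewrite("/gateway/"))     # /
print(rewrite("/gateway/a/b"))  # /a/b
print(rewrite("/other"))        # None
```

The `(/|$)` alternation is what lets both `/gateway` (end of string) and `/gateway/...` (trailing slash) match, while keeping the slash itself out of the `$2` capture.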

2.app-root

This addresses the 404 we get when accessing the bare domain.

To fix the 404 on the main domain, we can set an app-root annotation on the application. Then, when we access the main domain, we are automatically redirected to the specified app-root path, as follows:

  • Modify the rewrite configuration file from above:
# ingress-nginx-url-rewrite.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-url-rewrite
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/app-root: /gateway/ # note this
spec:
  ingressClassName: nginx  # use the nginx IngressClass (handled by the ingress-nginx controller)
  rules:
  - host: rewrite.172.29.9.60.nip.io  # map the domain to the nginx service; the IP is the EXTERNAL-IP of the ingress-controller Service; rewrite.172.29.9.60.nip.io/gateway --> nginx
  # this covers the following cases:
  # rewrite.172.29.9.60.nip.io/gateway/  rewrite.172.29.9.60.nip.io/gateway  rewrite.172.29.9.60.nip.io/gateway/xxx
    http:
      paths:
      - path: /gateway(/|$)(.*)
        pathType: Prefix
        backend:
          service:  # send all requests to port 80 of the nginx service
            name: nginx
            port:
              number: 80
  • After updating, accessing the main domain rewrite.172.29.9.60.nip.io automatically redirects to rewrite.172.29.9.60.nip.io/gateway/.
#deploy
[root@master1 url-rewirte]#kubectl apply -f ingress-nginx-url-rewrite.yaml
ingress.networking.k8s.io/ingress-nginx-url-rewrite configured
[root@master1 url-rewirte]#kubectl get ingress
NAME                        CLASS   HOSTS                              ADDRESS       PORTS   AGE
ingress-nginx-url-rewrite   nginx   rewrite.172.29.9.60.nip.io         172.29.9.60   80      37m

#test
[root@master1 url-rewirte]#curl -v rewrite.172.29.9.60.nip.io
* About to connect() to rewrite.172.29.9.60.nip.io port 80 (#0)
*   Trying 172.29.9.60...
* Connected to rewrite.172.29.9.60.nip.io (172.29.9.60) port 80 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.29.0
> Host: rewrite.172.29.9.60.nip.io
> Accept: */*
>
< HTTP/1.1 302 Moved Temporarily # note the 302 here
< Date: Sun, 12 Mar 2023 22:53:54 GMT
< Content-Type: text/html
< Content-Length: 138
< Connection: keep-alive
< Location: http://rewrite.172.29.9.60.nip.io/gateway/ # redirected to rewrite.172.29.9.60.nip.io/gateway/
<
<html>
<head><title>302 Found</title></head>
<body>
<center><h1>302 Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host rewrite.172.29.9.60.nip.io left intact

[root@master1 url-rewirte]#curl  rewrite.172.29.9.60.nip.io
<html>
<head><title>302 Found</title></head>
<body>
<center><h1>302 Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

⚠️ Note: the effect is hard to see clearly from the command line alone, so let's test it in a web browser.

Open a private browser window and enter rewrite.172.29.9.60.nip.io.

After updating the application, accessing the main domain rewrite.172.29.9.60.nip.io automatically redirects to rewrite.172.29.9.60.nip.io/gateway/, as expected.

3.configuration-snippet

(We'd like our application's URLs to always end with a trailing slash /.)

  • One remaining issue is that our path rule also matches paths like /gateway without a trailing slash; we would rather have the application always end with a trailing slash /. We can achieve that with a configuration-snippet annotation, as in the following Ingress object:

This is relevant for search-engine SEO.

# ingress-nginx-url-rewrite.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-url-rewrite
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/app-root: /gateway/ # note this
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/gateway)$ $1/ redirect;
spec:
  ingressClassName: nginx  # use the nginx IngressClass (handled by the ingress-nginx controller)
  rules:
  - host: rewrite.172.29.9.60.nip.io  # map the domain to the nginx service; the IP is the EXTERNAL-IP of the ingress-controller Service; rewrite.172.29.9.60.nip.io/gateway --> nginx
  # this covers the following cases:
  # rewrite.172.29.9.60.nip.io/gateway/  rewrite.172.29.9.60.nip.io/gateway  rewrite.172.29.9.60.nip.io/gateway/xxx
    http:
      paths:
      - path: /gateway(/|$)(.*)
        pathType: Prefix
        backend:
          service:  # send all requests to port 80 of the nginx service
            name: nginx
            port:
              number: 80
  • After updating, all of the application's URLs end with a trailing slash, which is exactly what we wanted. If you are already familiar with nginx configuration, this way of configuring things should feel natural.
#deploy
[root@master1 url-rewirte]#kubectl apply -f ingress-nginx-url-rewrite.yaml
ingress.networking.k8s.io/ingress-nginx-url-rewrite configured
[root@master1 url-rewirte]#kubectl get ingress
NAME                        CLASS   HOSTS                              ADDRESS       PORTS   AGE
ingress-nginx-url-rewrite   nginx   rewrite.172.29.9.60.nip.io         172.29.9.60   80      62m

#test
[root@master1 url-rewirte]#curl -v rewrite.172.29.9.60.nip.io/gateway
* About to connect() to rewrite.172.29.9.60.nip.io port 80 (#0)
*   Trying 172.29.9.60...
* Connected to rewrite.172.29.9.60.nip.io (172.29.9.60) port 80 (#0)
> GET /gateway HTTP/1.1
> User-Agent: curl/7.29.0
> Host: rewrite.172.29.9.60.nip.io
> Accept: */*
>
< HTTP/1.1 302 Moved Temporarily
< Date: Sun, 12 Mar 2023 23:19:18 GMT
< Content-Type: text/html
< Content-Length: 138
< Location: http://rewrite.172.29.9.60.nip.io/gateway/ # a redirect happens here
< Connection: keep-alive
<
<html>
<head><title>302 Found</title></head>
<body>
<center><h1>302 Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
* Connection #0 to host rewrite.172.29.9.60.nip.io left intact
[root@master1 url-rewirte]#curl -v rewrite.172.29.9.60.nip.io/gateway/
* About to connect() to rewrite.172.29.9.60.nip.io port 80 (#0)
*   Trying 172.29.9.60...
* Connected to rewrite.172.29.9.60.nip.io (172.29.9.60) port 80 (#0)
> GET /gateway/ HTTP/1.1
> User-Agent: curl/7.29.0
> Host: rewrite.172.29.9.60.nip.io
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Sun, 12 Mar 2023 23:19:22 GMT
< Content-Type: text/html
< Content-Length: 615
< Connection: keep-alive
< Last-Modified: Tue, 28 Dec 2021 15:28:38 GMT
< ETag: "61cb2d26-267"
< Accept-Ranges: bytes
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host rewrite.172.29.9.60.nip.io left intact
[root@master1 url-rewirte]#

Works as expected. Test complete. 😘
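The configuration-snippet rule above (`rewrite ^(/gateway)$ $1/ redirect;`) can also be sanity-checked offline, using plain Python re as an approximation of nginx's rewrite behavior (not nginx itself):

```python
import re

# nginx snippet: rewrite ^(/gateway)$ $1/ redirect;
# An EXACT match on /gateway gets a trailing slash appended, and the
# `redirect` flag makes nginx answer with a 302 to the new location.
def add_trailing_slash(path):
    return re.sub(r"^(/gateway)$", r"\1/", path)

print(add_trailing_slash("/gateway"))      # /gateway/ -> becomes the 302 Location
print(add_trailing_slash("/gateway/"))     # /gateway/ (already fine, rule does not match)
print(add_trailing_slash("/gateway/app"))  # /gateway/app (unchanged)
```

Because the pattern is anchored with both `^` and `$`, only the bare `/gateway` is redirected; sub-paths pass through untouched and are handled by the rewrite-target rule instead.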

3、Canary Releases

In day-to-day work we frequently upgrade services, using rolling updates, blue-green deployments, canary releases, and other release strategies. ingress-nginx supports traffic-splitting policies configured through annotations for these scenarios, covering canary releases, blue-green deployment, and A/B testing.

The ingress-nginx annotations support the following 4 canary rules:

(Whether you would really use these ingress-nginx annotations for canary releases in production is debatable; annotations are not a particularly cloud-native configuration style. If this were expressed as a CRD it would be more convenient to write, but for now it is configured through annotations.)

  • nginx.ingress.kubernetes.io/canary-by-header: traffic splitting based on a request header, suitable for canary releases and A/B testing. When the request header is set to always, requests are always sent to the canary version; when it is set to never, requests are never sent to the canary entry point; for any other value the header is ignored and the request is compared against the remaining canary rules by priority.
  • nginx.ingress.kubernetes.io/canary-by-header-value: the request-header value to match in order to route a request to the service specified in the canary Ingress. When the request header equals this value, the request is routed to the canary entry point. This annotation lets users customize the header value and must be used together with the previous annotation (canary-by-header).
  • nginx.ingress.kubernetes.io/canary-weight: traffic splitting based on service weight, suitable for blue-green deployment. The weight ranges from 0 to 100 and routes that percentage of requests to the service specified in the canary Ingress. A weight of 0 means the rule sends no traffic to the canary service; a weight of 100 means all requests are sent to the canary entry point.
  • nginx.ingress.kubernetes.io/canary-by-cookie: cookie-based traffic splitting, suitable for canary releases and A/B testing. The named cookie tells the Ingress to route the request to the service specified in the canary Ingress. When the cookie value is set to always, the request is routed to the canary entry point; when it is set to never, the request is not sent to the canary; for any other value the cookie is ignored and the request is compared against the remaining canary rules by priority.

Note that the canary rules are evaluated in priority order: canary-by-header > canary-by-cookie > canary-weight.

Overall, the four annotation rules fall into two groups:

  • weight-based canary rules

  • request-based (header/cookie) canary rules
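The priority order of these rules can be modeled with a toy routing function. This is an illustrative sketch of the documented semantics, not ingress-nginx's actual implementation, and the header/cookie name "canary" is hypothetical:

```python
import random

# Toy model of the documented priority:
# canary-by-header > canary-by-cookie > canary-weight.
def route(headers, cookies, weight, name="canary", rng=random.random):
    h = headers.get(name)
    if h == "always":          # header rule wins first
        return "canary"
    if h == "never":
        return "production"
    c = cookies.get(name)      # cookie rule is checked next
    if c == "always":
        return "canary"
    if c == "never":
        return "production"
    # Finally, weight-based splitting: `weight` percent of traffic goes canary.
    return "canary" if rng() * 100 < weight else "production"

print(route({"canary": "always"}, {}, 0))    # canary: header beats weight 0
print(route({}, {"canary": "never"}, 100))   # production: cookie beats weight 100
print(route({}, {}, 100))                    # canary: weight 100 sends everything
```

The two extreme weights (0 and 100) are deterministic, which is also how the annotations are typically verified in practice before dialing in an intermediate percentage.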

Below we walk through a sample application to demonstrate canary releases.

==💘 Lab: Canary releases with ingress-nginx - 2023.3.14 (verified)==

  • Steps:
Step 1: deploy the production application
Step 2: create the canary version
Step 3: configure the annotation rules
	1. weight-based
	2. request-header-based
	3. cookie-based
  • Lab environment
Environment:
1. Windows 10, VMware Workstation VMs
2. Kubernetes cluster: 3 CentOS 7.6 (1810) VMs, 1 master node and 2 worker nodes
   k8s version: v1.22.2
   containerd: v1.5.5
  • Lab files

Link: https://pan.baidu.com/s/1JG6uCfjSKVCFJZdZcQgWSg?pwd=rxqt  Extraction code: rxqt
2023.3.14 - Lab: Canary releases with ingress-nginx - 2023.3.14 (verified)

  • Prerequisites

An ingress-nginx environment is already installed (with the ingress-nginx Service exposed as type LoadBalancer).

MetalLB is already deployed. (MetalLB is optional — you could also access the Ingress via domain:NodePort — but for convenience we access it through the load balancer here.)

For the ingress-nginx deployment, see:

https://blog.csdn.net/weixin_39246554/article/details/129334116?spm=1001.2014.3001.5501

For the MetalLB deployment, see:

https://blog.csdn.net/weixin_39246554/article/details/129343617?spm=1001.2014.3001.5501

⚠️ Note: ingress-nginx is deployed as a DaemonSet, so once MetalLB is deployed the Ingress is reachable from all 3 nodes.

  • 注意:当前测ingress-nginxEXTERNAL-IP172.29.9.60
1[root@master1 canary]#kubectl get svc -ningress-nginx
2NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
3ingress-nginx-controller             LoadBalancer   10.108.58.246   172.29.9.60   80:32439/TCP,443:31347/TCP   2d15h
4ingress-nginx-controller-admission   ClusterIP      10.101.184.28   <none>        443/TCP                      2d15h

Step 1: Deploy the Production application

  • First create the resource manifest for the production application:
# production.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: production
  labels:
    app: production
spec:
  selector:
    matchLabels:
      app: production
  template:
    metadata:
      labels:
        app: production
    spec:
      containers:
      - name: production
        image: cnych/echoserver # test image from the course
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: production
  labels:
    app: production
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    app: production

⚠️ Note: you can also use the official image below:

# for the arm architecture use: mirrorgooglecontainers/echoserver-arm:1.8

image: mirrorgooglecontainers/echoserver:1.10

This image simply echoes back information about the pod that served the request.

  • Then create an Ingress resource object for accessing the production environment:
# production-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: production
spec:
  ingressClassName: nginx
  rules:
  - host: echo.172.29.9.60.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: production
            port:
              number: 80
  • Create the above resource objects:
[root@master1 canary]#kubectl apply -f production.yaml
deployment.apps/production created
service/production created
[root@master1 canary]#kubectl apply -f production-ingress.yaml
ingress.networking.k8s.io/production created


[root@master1 canary]#kubectl get po -l app=production
NAME                         READY   STATUS    RESTARTS   AGE
production-856d5fb99-k6k4h   1/1     Running   0          53s
[root@master1 canary]#kubectl get ingress
NAME                        CLASS   HOSTS                              ADDRESS       PORTS   AGE
production                  nginx   echo.172.29.9.60.nip.io            172.29.9.60   80      20s
[root@master1 canary]#
  • Once the application is deployed, it can be accessed normally:
[root@master1 canary]#curl echo.172.29.9.60.nip.io


Hostname: production-856d5fb99-k6k4h

Pod Information:
        node name:      node2
        pod name:       production-856d5fb99-k6k4h
        pod namespace:  default
        pod IP: 10.244.2.22

Server values:
        server_version=nginx: 1.13.3 - lua: 10008

Request Information:
        client_address=10.244.0.2
        method=GET
        real path=/
        query=
        request_version=1.1
        request_scheme=http
        request_uri=http://echo.172.29.9.60.nip.io:8080/

Request Headers:
        accept=*/*
        host=echo.172.29.9.60.nip.io
        user-agent=curl/7.29.0
        x-forwarded-for=172.29.9.60
        x-forwarded-host=echo.172.29.9.60.nip.io
        x-forwarded-port=80
        x-forwarded-proto=http
        x-forwarded-scheme=http
        x-real-ip=172.29.9.60
        x-request-id=06d3ab578e605061c732c060c9194992
        x-scheme=http

Request Body:
        -no body in request-

Step 2: Create the Canary version

Using the production.yaml above as a reference, create a Canary version of the application.

# canary.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: canary
  labels:
    app: canary
spec:
  selector:
    matchLabels:
      app: canary
  template:
    metadata:
      labels:
        app: canary
    spec:
      containers:
      - name: canary
        image: cnych/echoserver
        ports:
        - containerPort: 8080
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
          - name: POD_NAMESPACE
            valueFrom:
              fieldRef:
                fieldPath: metadata.namespace
          - name: POD_IP
            valueFrom:
              fieldRef:
                fieldPath: status.podIP
---
apiVersion: v1
kind: Service
metadata:
  name: canary
  labels:
    app: canary
spec:
  ports:
  - port: 80
    targetPort: 8080
    name: http
  selector:
    app: canary
  • Deploy the application and check it:
[root@master1 canary]#kubectl apply  -f canary.yaml 
deployment.apps/canary created
service/canary created
[root@master1 canary]#kubectl get po -l app=canary
NAME                      READY   STATUS    RESTARTS   AGE
canary-66cb497b7f-86mdj   1/1     Running   0          26s

Next we can split the traffic by configuring annotation rules.

Step 3: Configure annotation rules

1. Weight-based

The typical use case for weight-based traffic splitting is blue-green deployment, which can be achieved by setting the weight to 0 or 100. For example, the Green version can serve as the main deployment and the Blue version's Ingress can be configured as the Canary. Initially the weight is set to 0, so no traffic is proxied to the Blue version. Once the new version has been tested and verified, the Blue weight can be set to 100, i.e. all traffic switches from Green to Blue.

  • Create a weight-based Canary Ingress object for the application route:
# canary-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"   # Canary must be enabled first to use the canary mechanism
    nginx.ingress.kubernetes.io/canary-weight: "30"  # route 30% of the traffic to this Canary version
spec:
  ingressClassName: nginx
  rules:
  - host: echo.172.29.9.60.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary
            port:
              number: 80
  • Create the resource object:
[root@master1 canary]#kubectl apply -f canary-ingress.yaml 
ingress.networking.k8s.io/canary created
[root@master1 canary]#kubectl get ingress
NAME                        CLASS   HOSTS                              ADDRESS       PORTS   AGE
canary                      nginx   echo.172.29.9.60.nip.io            172.29.9.60   80      19s
production                  nginx   echo.172.29.9.60.nip.io            172.29.9.60   80      17m
  • After the Canary version is created, repeatedly access the application from the terminal and watch the Hostname change:
[root@master1 canary]#for i in $(seq 1 10); do curl -s echo.172.29.9.60.nip.io | grep "Hostname"; done # canary appeared 3 times
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h

# repeat the test a few more times
[root@master1 canary]#for i in $(seq 1 10); do curl -s echo.172.29.9.60.nip.io | grep "Hostname"; done # canary appeared 4 times
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj
[root@master1 canary]#for i in $(seq 1 10); do curl -s echo.172.29.9.60.nip.io | grep "Hostname"; done # canary appeared 2 times
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
[root@master1 canary]#

Since we assigned roughly 30% of the traffic weight to the Canary version, about 3 out of 10 requests (though not guaranteed on every run) hit the Canary application, which matches our expectation.
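The run-to-run variation above (3, 4, then 2 canary hits out of 10) is expected: each request is an independent trial with roughly 30% probability. A quick, illustrative Python simulation (our own sketch with a fixed seed so the numbers are reproducible, not ingress-nginx code) shows the observed ratio tightening as the sample grows:

```python
import random

def canary_hits(n, weight=30, seed=42):
    """Count how many of n simulated requests a weight-based canary rule would take."""
    rng = random.Random(seed)
    return sum(rng.uniform(0, 100) < weight for _ in range(n))

print(canary_hits(10))      # small samples swing widely around 3
print(canary_hits(10000))   # large samples land close to 3000
```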

2. Based on Request Header

The typical use cases for traffic splitting based on a Request Header are canary releases and A/B testing.

Add an annotation nginx.ingress.kubernetes.io/canary-by-header: canary (the value here can be anything) to the Canary Ingress above so that it splits traffic based on a Request Header. Since canary-by-header has higher priority than canary-weight, the existing canary-weight rule is ignored.

# canary-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"   # Canary must be enabled first to use the canary mechanism
    nginx.ingress.kubernetes.io/canary-by-header: canary  # header-based traffic splitting
    nginx.ingress.kubernetes.io/canary-weight: "30"  # ignored, because canary-by-header is configured
spec:
  ingressClassName: nginx
  rules:
  - host: echo.172.29.9.60.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary
            port:
              number: 80

After updating the Ingress resource object, access the application domain again with different header values.

Note: when the Request Header value is set to never or always, the request will never or always be sent to the Canary version; for any other value the header is ignored and the request is compared against the other canary rules by priority.

  • Deploy and test:
[root@master1 canary]#kubectl apply -f canary-ingress.yaml
ingress.networking.k8s.io/canary configured

[root@master1 canary]#for i in $(seq 1 10); do curl -s -H "canary: never" echo.172.29.9.60.nip.io | grep "Hostname"; done
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
[root@master1 canary]#for i in $(seq 1 10); do curl -s -H "canary: always" echo.172.29.9.60.nip.io | grep "Hostname"; done
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj

Here we set the header canary: never on the request, so no requests were sent to the Canary application.

With the header canary: always, all requests were sent to the Canary application.

  • What if we set some other value:
[root@master1 canary]#for i in $(seq 1 10); do curl -s -H "canary: other-value" echo.172.29.9.60.nip.io | grep "Hostname"; done
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
[root@master1 canary]#for i in $(seq 1 10); do curl -s -H "canary: other-value" echo.172.29.9.60.nip.io | grep "Hostname"; done
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj

Since the header we sent was canary: other-value, ingress-nginx ignores it and compares the request against the remaining canary rules by priority, so it falls through to the canary-weight: "30" rule.

  • On top of the previous annotation (canary-by-header), we can add a rule like nginx.ingress.kubernetes.io/canary-by-header-value: user-value, which routes requests to the service specified in the Canary Ingress only when the header carries exactly that value:
# canary-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"   # Canary must be enabled first to use the canary mechanism
    nginx.ingress.kubernetes.io/canary-by-header-value: user-value
    nginx.ingress.kubernetes.io/canary-by-header: canary  # header-based traffic splitting
    nginx.ingress.kubernetes.io/canary-weight: "30"  # ignored, because canary-by-header is configured
spec:
  ingressClassName: nginx
  rules:
  - host: echo.172.29.9.60.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary
            port:
              number: 80
  • After updating the Ingress object and accessing the application again, all requests are routed to the Canary version when the Request Header matches canary: user-value:
[root@master1 canary]#kubectl apply -f canary-ingress.yaml 
ingress.networking.k8s.io/canary configured
[root@master1 canary]#for i in $(seq 1 10); do curl -s -H "canary: user-value" echo.172.29.9.60.nip.io | grep "Hostname"; done
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj

3. Cookie-based

The usage is similar to the Request-Header-based annotations. For example, in an A/B testing scenario where users located in Beijing should access the Canary version, set the annotation nginx.ingress.kubernetes.io/canary-by-cookie: "users_from_Beijing". The backend can then inspect the logged-in user's request and, if it originates from Beijing, set the cookie users_from_Beijing to always, ensuring Beijing users only access the Canary version.
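The server-side half of this scheme only needs to emit the cookie. The following is a minimal illustrative sketch (the function name and the region check are hypothetical, not part of ingress-nginx):

```python
def canary_cookie(region, cookie_name="users_from_Beijing"):
    """Build the Set-Cookie value the backend returns after checking the user's region.

    ingress-nginx then routes requests with value 'always' to the canary Ingress
    and keeps requests with value 'never' on the production backend.
    """
    value = "always" if region == "Beijing" else "never"
    return f"{cookie_name}={value}; Path=/"

print(canary_cookie("Beijing"))   # users_from_Beijing=always; Path=/
print(canary_cookie("Shanghai"))  # users_from_Beijing=never; Path=/
```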

  • Update the Canary version's Ingress resource object to split traffic based on a Cookie:
# canary-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"   # Canary must be enabled first to use the canary mechanism
    #nginx.ingress.kubernetes.io/canary-by-header-value: user-value
    #nginx.ingress.kubernetes.io/canary-by-header: canary  # header-based traffic splitting
    nginx.ingress.kubernetes.io/canary-by-cookie: "users_from_Beijing"  # cookie-based
    nginx.ingress.kubernetes.io/canary-weight: "30"  # ignored, because canary-by-cookie is configured
spec:
  ingressClassName: nginx
  rules:
  - host: echo.172.29.9.60.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: canary
            port:
              number: 80
  • After updating the Ingress resource object, set a users_from_Beijing=always Cookie on the request and access the application domain again:
[root@master1 canary]#kubectl apply -f canary-ingress.yaml
ingress.networking.k8s.io/canary configured

[root@master1 canary]#for i in $(seq 1 10); do curl -s -b "users_from_Beijing=always" echo.172.29.9.60.nip.io | grep "Hostname"; done
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj

We can see that all requests were routed to the Canary version of the application.

  • If we set this Cookie value to never, requests will not be routed to the Canary application. (Note: the test below accidentally sends users_from_Beijing=nerver; since that is neither always nor never, the cookie is ignored and the requests fall through to the 30% weight rule, which is why some of them still hit the Canary version.)
[root@master1 canary]#for i in $(seq 1 10); do curl -s -b "users_from_Beijing=nerver" echo.172.29.9.60.nip.io | grep "Hostname"; done
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
[root@master1 canary]#for i in $(seq 1 10); do curl -s -b "users_from_Beijing=nerver" echo.172.29.9.60.nip.io | grep "Hostname"; done
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj
Hostname: production-856d5fb99-k6k4h
Hostname: canary-66cb497b7f-86mdj

End of test. 😘

4. HTTPS

If we want to access this application over HTTPS, we need to listen on port 443, and accessing the application over HTTPS naturally requires a certificate.

1. Self-signed certificate with openssl

==💘 Hands-on: Accessing the application over HTTPS with ingress-nginx (openssl) - 2022.11.27 (tested successfully)==

image-20230217224146492

  • Experiment environment
Environment:
1. Windows 10, VMware Workstation VMs
2. Kubernetes cluster: 3 CentOS 7.6 1810 VMs, 1 master node, 2 worker nodes
   k8s version: v1.22.2
   containerd: v1.5.5
  • Experiment files

Link: https://pan.baidu.com/s/1hKpD4bRPtYdaO3BeB7g8gQ?pwd=bi4d extraction code: bi4d 2023.3.14-实战:Ingress-nginx之用 HTTPS 来访问我们的应用(openssl)-2022.11.27(测试成功)

image-20230314075027065

1. Create a self-signed certificate with openssl:

[root@master1 ~]#openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=foo.bar.com"
Generating a 2048 bit RSA private key
....................................+++
.............+++
writing new private key to 'tls.key'
-----

2. Reference the certificate files through a Secret object:

# the resulting Secret stores the certificate under the keys tls.crt and tls.key
[root@master1 ~]#kubectl create secret tls foo-tls --cert=tls.crt --key=tls.key 
secret/foo-tls created
[root@master1 ~]#kubectl get secrets foo-tls 
NAME      TYPE                DATA   AGE
foo-tls   kubernetes.io/tls   2      13s

3. Create the application (remember to create it in advance):

# my-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: my-nginx

Deploy:

[root@k8s-master1 tls]#kubectl apply -f my-nginx.yaml 
deployment.apps/my-nginx created
service/my-nginx created

Check the application:

[root@master1 https-openssl]#kubectl get po,deploy,svc
NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-7c4ff94949-zqlf8   1/1     Running   0          34s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   1/1     1            1           34s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   20d
service/my-nginx     ClusterIP   10.104.187.51   <none>        80/TCP    34s

4. Write the Ingress:

#ingress-https.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https
spec:
  ingressClassName: nginx
  tls:  # configure the tls certificate
  - hosts:
    - foo.bar.com
    secretName: foo-tls # name of the Secret object containing the certificate
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80

Configure domain resolution on your local PC:

Remember to add the domain resolution to your local PC's hosts file:
C:\WINDOWS\System32\drivers\etc
172.29.9.51 foo.bar.com # note: this is the address of the node where the ingress-nginx-controller runs

5. Test: deploy and verify:

[root@master1 https]#kubectl apply -f ingress-https.yaml 
ingress.networking.k8s.io/ingress-https created

[root@master1 https]#kubectl get ingress
NAME            CLASS   HOSTS                ADDRESS       PORTS     AGE
ingress-https   nginx   foo.bar.com          172.29.9.51   80, 443   6s

Verify in your PC's browser:

End of test. 😘

2. Self-signed certificate with cfssl

==💘 Hands-on: Accessing the application over HTTPS with ingress-nginx (cfssl) - 2023.1.2 (tested successfully)==

image-20230217224153652

  • Experiment environment
Environment:
1. Windows 10, VMware Workstation VMs
2. Kubernetes cluster: 3 CentOS 7.6 1810 VMs, 1 master node, 2 worker nodes
   k8s version: v1.22.2
   containerd: v1.5.5
  • Experiment files

Link: https://pan.baidu.com/s/13nee2xk30Y8-z9TdpuZOuA?pwd=9zzp extraction code: 9zzp 2023.1.5-cfgssl软件包 image-20230217224200910

1. Install the cfssl tool

  • Upload the cfssl tool package and the script to the server:
[root@k8s-master1 ~]#ls -lh cfssl.tar.gz
-rw-r--r-- 1 root root 5.6M Nov 25  2019 cfssl.tar.gz
-rw-r--r-- 1 root root 1005 Mar 26  2021 certs.sh
[root@k8s-master1 ~]#tar tvf cfssl.tar.gz 
-rwxr-xr-x root/root  10376657 2019-11-25 06:36 cfssl
-rwxr-xr-x root/root   6595195 2019-11-25 06:36 cfssl-certinfo
-rwxr-xr-x root/root   2277873 2019-11-25 06:36 cfssljson
[root@k8s-master1 ~]#tar xf cfssl.tar.gz -C /usr/bin/
  • Verify:
[root@k8s-master1 ~]#cfssl --help
Usage:
Available commands:
        bundle
        certinfo
        ocspsign
        selfsign
        scan
        print-defaults
        sign
        gencert
        ocspdump
        version
        genkey
        gencrl
        ocsprefresh
        info
        serve
        ocspserve
        revoke
Top-level flags:
  -allow_verification_with_non_compliant_keys
        Allow a SignatureVerifier to use keys which are technically non-compliant with RFC6962.
  -loglevel int
        Log level (0 = DEBUG, 5 = FATAL) (default 1)

2. Generate the certificates

  • Create a test directory:
[root@k8s-master1 ~]#mkdir https
[root@k8s-master1 ~]#cd https/
  • Move the certificate-generation script into the directory just created:
[root@k8s-master1 ~]#mv certs.sh https/
[root@k8s-master1 ~]#ls https/
certs.sh

[root@k8s-master1 ~]#cd https/
[root@k8s-master1 https]#cat certs.sh 
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -


cat > web.aliangedu.cn-csr.json <<EOF
{
  "CN": "web.aliangedu.cn",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes web.aliangedu.cn-csr.json | cfssljson -bare web.aliangedu.cn

Note: image-20230217224206679

image-20230217224314422

  • Run the script to generate the certificates:
[root@k8s-master1 https]#sh certs.sh 
2022/11/27 09:38:30 [INFO] generating a new CA key and certificate from CSR
2022/11/27 09:38:30 [INFO] generate received request
2022/11/27 09:38:30 [INFO] received CSR
2022/11/27 09:38:30 [INFO] generating key: rsa-2048
2022/11/27 09:38:30 [INFO] encoded CSR
2022/11/27 09:38:30 [INFO] signed certificate with serial number 42920572197673510025121729381310395494775886689
2022/11/27 09:38:30 [INFO] generate received request
2022/11/27 09:38:30 [INFO] received CSR
2022/11/27 09:38:30 [INFO] generating key: rsa-2048
2022/11/27 09:38:30 [INFO] encoded CSR
2022/11/27 09:38:30 [INFO] signed certificate with serial number 265650157446309871110524021899155707215940024732
2022/11/27 09:38:30 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").

[root@k8s-master1 https]#ll *
-rw-r--r-- 1 root root  294 Nov 27 09:38 ca-config.json
-rw-r--r-- 1 root root  960 Nov 27 09:38 ca.csr
-rw-r--r-- 1 root root  212 Nov 27 09:38 ca-csr.json
-rw------- 1 root root 1675 Nov 27 09:38 ca-key.pem
-rw-r--r-- 1 root root 1273 Nov 27 09:38 ca.pem

-rw-r--r-- 1 root root 1005 Mar 26  2021 certs.sh

-rw-r--r-- 1 root root  968 Nov 27 09:38 web.aliangedu.cn.csr
-rw-r--r-- 1 root root  189 Nov 27 09:38 web.aliangedu.cn-csr.json
-rw------- 1 root root 1679 Nov 27 09:38 web.aliangedu.cn-key.pem # certificate private key
-rw-r--r-- 1 root root 1318 Nov 27 09:38 web.aliangedu.cn.pem # certificate
[root@k8s-master1 https]#
  • Note: the file suffixes here differ from the openssl example (.crt and .key there vs .pem here).

image-20230217224219994

3. Create the Secret

  • Create the Secret:
kubectl create secret tls web-aliangedu-cn --cert=web.aliangedu.cn.pem --key=web.aliangedu.cn-key.pem
  • Check:
[root@k8s-master1 https]#kubectl create secret tls web-aliangedu-cn --cert=web.aliangedu.cn.pem --key=web.aliangedu.cn-key.pem
secret/web-aliangedu-cn created
[root@k8s-master1 https]#kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-xkms9   kubernetes.io/service-account-token   3      72d
web-aliangedu-cn      kubernetes.io/tls                     2      4s

4. Deploy the application

# my-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: my-nginx
  • Deploy and check:
[root@k8s-master1 ~]#kubectl apply -f my-nginx.yaml

[root@k8s-master1 ~]#kubectl get po
NAME                        READY   STATUS    RESTARTS   AGE
my-nginx-7c4ff94949-zg5zp   1/1     Running   0          82s
[root@k8s-master1 ~]#kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   72d
my-nginx     ClusterIP   10.103.251.223   <none>        80/TCP    34d

5. Create the Ingress

#ingress-https.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-https
spec:
  ingressClassName: nginx
  tls:  # configure the tls certificate
  - hosts:
    - web.aliangedu.cn
    secretName: web-aliangedu-cn # name of the Secret object containing the certificate
  rules:
  - host: web.aliangedu.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-nginx
            port:
              number: 80
  • Create and check:
[root@k8s-master1 ~]#kubectl apply -f ingress-https.yaml
ingress.networking.k8s.io/ingress-https created
[root@k8s-master1 ~]#kubectl get ingress
NAME            CLASS   HOSTS              ADDRESS   PORTS     AGE
ingress-https   nginx   web.aliangedu.cn             80, 443   5s

6. Verify: open https://web.aliangedu.cn/ in the browser (it is now served over HTTPS). image-20230217224226261

image-20230217224253647

  • Note: the certificate and the domain correspond one-to-one.

image-20230217224306435

image-20230217224335213

End of test. 😘

Besides self-signed certificates or CA certificates purchased from a commercial authority, we can also use tools that automatically issue valid certificates. cert-manager is a cloud-native open-source certificate-management project that can provision HTTPS certificates in a Kubernetes cluster and renew them automatically; it supports free issuers such as Let's Encrypt and HashiCorp Vault. In Kubernetes, automatic HTTPS for externally exposed services can be achieved by combining Kubernetes Ingress with Let's Encrypt.

5. TCP and UDP

Since the Ingress resource object has no direct support for TCP or UDP services, ingress-nginx supports them through the controller startup flags --tcp-services-configmap and --udp-services-configmap. Each flag points to a ConfigMap in which the key is the external port to use and the value describes the service to expose in the format <namespace/service name>:<service port>:[PROXY]:[PROXY]; the port can be given as a number or a name, and the last two optional fields configure PROXY-protocol handling.
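The ConfigMap value format can be unpacked mechanically; the following illustrative Python sketch (our own parser, not ingress-nginx code) shows how the fields line up:

```python
def parse_tcp_service(value):
    """Split a --tcp-services-configmap value: <namespace/service>:<port>:[PROXY]:[PROXY]."""
    parts = value.split(":")
    namespace, service = parts[0].split("/")
    return {
        "namespace": namespace,
        "service": service,
        "port": parts[1],                                        # port number or port name
        "proxy_decode": len(parts) > 2 and parts[2] == "PROXY",  # PROXY protocol inbound
        "proxy_encode": len(parts) > 3 and parts[3] == "PROXY",  # PROXY protocol outbound
    }

print(parse_tcp_service("default/mongo:27017"))
print(parse_tcp_service("default/mongo:27017:PROXY:PROXY"))
```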

==💘 Hands-on: TCP with ingress-nginx - 2023.3.15 (tested successfully)==

image-20230315120901015

  • Steps
graph LR
	A[Steps] -->B(1. Deploy the MongoDB service)
	A[Steps] -->C(2. Create a ConfigMap)
    A[Steps] -->D(3. Configure the ingress-nginx startup arguments)
    A[Steps] -->E(4. Expose the TCP port through the Service)
    A[Steps] -->F(5. Verify)
  • Experiment environment
Environment:
1. Windows 10, VMware Workstation VMs
2. Kubernetes cluster: 3 CentOS 7.6 1810 VMs, 1 master node, 2 worker nodes
   k8s version: v1.22.2
   containerd: v1.5.5
  • Experiment files

Link: https://pan.baidu.com/s/12TEiSjUNKVt4or1TNSfEhg?pwd=0w34 extraction code: 0w34 2023.3.15-实战:Ingress-nginx之TCP-2023.3.15(测试成功)

image-20230315115844151

  • Prerequisites

An ingress-nginx environment is already installed (the ingress-nginx Service type is LoadBalancer).

MetalLB is already deployed. (You could also skip MetalLB and access the service directly via domain:NodePort; we use the LB here for testing convenience.)

See this document for the ingress-nginx deployment:

Local document:

image-20230311153824838

CSDN link:

https://blog.csdn.net/weixin_39246554/article/details/129334116?spm=1001.2014.3001.5501

image-20230308212123364

See this document for the MetalLB deployment:

Local document:

image-20230308212145378

CSDN link:

https://blog.csdn.net/weixin_39246554/article/details/129343617?spm=1001.2014.3001.5501

image-20230308212209034

⚠️ Note: ingress-nginx is deployed as a DaemonSet, so after MetalLB is deployed the Ingress can be reached from all 3 nodes.

  • Note: the EXTERNAL-IP of the ingress-nginx Service in this test is 172.29.9.60:
[root@master1 canary]#kubectl get svc -ningress-nginx
NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.108.58.246   172.29.9.60   80:32439/TCP,443:31347/TCP   2d15h
ingress-nginx-controller-admission   ClusterIP      10.101.184.28   <none>        443/TCP                      2d15h

1. Deploy the MongoDB service

  • Say we want to expose a MongoDB service through ingress-nginx; first create the following application:
# mongo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  labels:
    app: mongo
spec:
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      volumes:
      - name: data
        emptyDir: {}
      containers:
      - name: mongo
        image: mongo:4.0
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: data
          mountPath: /data/db
---
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
  • Create the above resource objects:
[root@master1 TCP]#kubectl apply -f mongo.yaml 
deployment.apps/mongo created
service/mongo created

[root@master1 TCP]#kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP     20d
mongo        ClusterIP   10.99.220.83   <none>        27017/TCP   20s

[root@master1 TCP]#kubectl get po -l app=mongo
NAME                     READY   STATUS    RESTARTS   AGE
mongo-7885fb6bd4-gpbxz   1/1     Running   0          42s

2. Create a ConfigMap

  • To expose the MongoDB service above through ingress-nginx, we need to create a ConfigMap like this:
# tcp-ingress-ConfigMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-tcp
  namespace: ingress-nginx
data:
  "27017": default/mongo:27017

Deploy:

[root@master1 TCP]#kubectl apply -f tcp-ingress-ConfigMap.yaml 
configmap/ingress-nginx-tcp created

# the ConfigMap named `ingress-nginx-tcp` now exists, as shown below:
[root@master1 TCP]#kubectl get cm -ningress-nginx
NAME                       DATA   AGE 
……
ingress-nginx-tcp          1      15s 

3. Configure the ingress-nginx startup arguments

  • Then add the flag --tcp-services-configmap=$(POD_NAMESPACE)/ingress-nginx-tcp to the ingress-nginx startup arguments:
[root@master1 ingress-nginx部署]#vim deploy.yaml
……
- --tcp-services-configmap=$(POD_NAMESPACE)/ingress-nginx-tcp
……

image-20230314213035761

4. Expose the TCP port through the Service

Since our ingress-nginx is exposed through a LoadBalancer Service, we naturally need to expose this TCP port through that Service as well, so update the ingress-nginx Service object as follows:

[root@master1 ingress-nginx部署]#vim deploy.yaml
……
   - name: mongo # expose port 27017
     port: 27017
     protocol: TCP
     targetPort: 27017
……

image-20230314214725573

  • After editing, redeploy:
[root@master1 ingress-nginx部署]#kubectl apply -f deploy.yaml 
namespace/ingress-nginx unchanged
serviceaccount/ingress-nginx unchanged
serviceaccount/ingress-nginx-admission unchanged      
role.rbac.authorization.k8s.io/ingress-nginx unchanged
role.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx unchanged
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission unchanged
configmap/ingress-nginx-controller unchanged
service/ingress-nginx-controller unchanged
service/ingress-nginx-controller-admission unchanged
daemonset.apps/ingress-nginx-controller configured
job.batch/ingress-nginx-admission-create unchanged
job.batch/ingress-nginx-admission-patch unchanged
ingressclass.networking.k8s.io/nginx unchanged
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission configured

[root@master1 ingress-nginx部署]#kubectl get po -ningress-nginx -owide
NAME                                      READY   STATUS      RESTARTS   AGE    IP            NODE      NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create--1-5h6rr   0/1     Completed   0          3d6h   10.244.1.25   node1     <none>           <none>
ingress-nginx-admission-patch--1-jdn2k    0/1     Completed   0          3d6h   10.244.2.18   node2     <none>           <none>
ingress-nginx-controller-7hzwd            1/1     Running     0          95s    10.244.2.25   node2     <none>           <none>
ingress-nginx-controller-s9psd            1/1     Running     0          30s    10.244.0.3    master1   <none>           <none>
ingress-nginx-controller-tf2mp            1/1     Running     0          63s    10.244.1.29   node1     <none>           <none>

[root@master1 ingress-nginx部署]#kubectl get svc ingress-nginx-controller -n ingress-nginx
31NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
32ingress-nginx-controller   LoadBalancer   10.108.58.246   172.29.9.60   80:32439/TCP,443:31347/TCP,27017:32608/TCP   3d6h

5. Verify

  • Now we can access the Mongo service through port 27017 exposed by ingress-nginx:
 1[root@master1 TCP]#kubectl exec -it mongo-7885fb6bd4-gpbxz -- bash
 2root@mongo-7885fb6bd4-gpbxz:/# mongo --host 172.29.9.60  --port 27017
 3MongoDB shell version v4.0.27
 4connecting to: mongodb://172.29.9.60:27017/?gssapiServiceName=mongodb
 5Implicit session: session { "id" : UUID("6656a2e9-0ff5-40ba-99d5-68efdad06043") }
 6MongoDB server version: 4.0.27
 7Welcome to the MongoDB shell.
 8For interactive help, type "help".
 9For more comprehensive documentation, see
10        http://docs.mongodb.org/
11Questions? Try the support group
12        http://groups.google.com/group/mongodb-user
13Server has startup warnings:
142023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten]
152023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
162023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
172023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten]
182023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten]
192023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
202023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
212023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten]
222023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
232023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
242023-03-14T13:21:58.361+0000 I CONTROL  [initandlisten]
25---
26Enable MongoDB's free cloud-based monitoring service, which will then receive and display
27metrics about your deployment (disk utilization, CPU, operation statistics, etc).
28
29The monitoring data will be available on a MongoDB website with a unique URL accessible to you
30and anyone you share the URL with. MongoDB may use this information to make product
31improvements and to suggest MongoDB products and deployment options to you.
32
33To enable free monitoring, run the following command: db.enableFreeMonitoring()
34To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
35---
36
37> show dbs
38admin   0.000GB
39config  0.000GB
40local   0.000GB
41>
  • Now we can reach the Mongo service through the LB address 172.29.9.60 plus the exposed port 27017. For example, with the MongoDB client mongosh installed on a node, the command mongosh "mongodb://172.29.9.60:27017" connects to our Mongo service:

Note: mongosh is not installed in this environment; the content below is kept from the original document for reference.

image-20230315120107584

  • Similarly, we can inspect the final generated nginx.conf configuration file:
 1[root@master1 ~]#kubectl get po -ningress-nginx
 2NAME                                      READY   STATUS      RESTARTS   AGE  
 3ingress-nginx-admission-create--1-5h6rr   0/1     Completed   0          3d20h
 4ingress-nginx-admission-patch--1-jdn2k    0/1     Completed   0          3d20h
 5ingress-nginx-controller-7hzwd            1/1     Running     0          14h  
 6ingress-nginx-controller-s9psd            1/1     Running     0          14h  
 7ingress-nginx-controller-tf2mp            1/1     Running     0          14h  
 8
 9[root@master1 ~]#kubectl exec  ingress-nginx-controller-tf2mp -ningress-nginx -- cat /etc/nginx/nginx.conf
10......
11stream {
12……
13        # TCP services
14
15        server {
16                preread_by_lua_block {
17                        ngx.var.proxy_upstream_name="tcp-default-mongo-27017";
18                }
19
20                listen                  27017;
21
22                listen                  [::]:27017;
23
24                proxy_timeout           600s;
25                proxy_next_upstream     on;
26                proxy_next_upstream_timeout 600s;
27                proxy_next_upstream_tries   3;
28
29                proxy_pass              upstream_balancer;
30
31        }
32
33        # UDP services
34
35        # Stream Snippets

The TCP-related configuration lives under the stream block.
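The shape of that generated stream server block can be sketched with a small template. This is illustrative only; the controller renders the real block from its own Go/Lua templates:

```python
# Render a minimal nginx "stream" server block for one TCP service.
# Illustrative sketch only; ingress-nginx generates this from its own template.
STREAM_SERVER_TMPL = """server {{
    preread_by_lua_block {{
        ngx.var.proxy_upstream_name="tcp-{namespace}-{service}-{port}";
    }}

    listen {port};

    proxy_timeout 600s;
    proxy_pass upstream_balancer;
}}"""

def render_tcp_server(namespace: str, service: str, port: int) -> str:
    return STREAM_SERVER_TMPL.format(namespace=namespace, service=service, port=port)

print(render_tcp_server("default", "mongo", 27017))
```

Note how the upstream name `tcp-default-mongo-27017` is derived from the ConfigMap entry; the actual endpoints are resolved dynamically by the Lua balancer behind `upstream_balancer`.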

Test complete. 😘

Starting with Nginx 1.9.13, UDP load balancing is available, so we can also proxy UDP services with ingress-nginx. For example, to expose the kube-dns service, we again create a ConfigMap like the following:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-udp
  namespace: ingress-nginx
data:
  "53": kube-system/kube-dns:53

Then add a flag --udp-services-configmap=$(POD_NAMESPACE)/ingress-nginx-udp to the ingress-nginx arguments, also expose port 53 in the Service, and redeploy.

The procedure is the same as for TCP, so it is omitted here.

6、Global configuration

==💘 Hands-on: Ingress-nginx global configuration - 2023.3.15 (tested successfully)==

image-20230315183657777

  • Steps
graph LR
	A[Steps] -->B(1. Inspect the default ingress-nginx ConfigMap)
	A[Steps] -->C(2. Edit the default ingress-nginx ConfigMap)
    A[Steps] -->D(3. Verify)
  • Environment
Environment:
1. win10, VMware Workstation VMs
2. k8s cluster: three CentOS 7.6 1810 VMs, 1 master node, 2 worker nodes
   k8s version: v1.22.2
   containerd: v1.5.5
  • Software

None.

  • Prerequisites

ingress-nginx is already installed (its Service is of type LoadBalancer).

MetalLB is already deployed. (You could also skip MetalLB and access the service via domain:NodePort; we use the LB here for convenience.)

For ingress-nginx deployment, see:

Local document:

image-20230311153824838

CSDN link:

https://blog.csdn.net/weixin_39246554/article/details/129334116?spm=1001.2014.3001.5501

image-20230308212123364

For MetalLB deployment, see:

Local document:

image-20230308212145378

CSDN link:

https://blog.csdn.net/weixin_39246554/article/details/129343617?spm=1001.2014.3001.5501

image-20230308212209034

⚠️ Note: ingress-nginx is deployed as a DaemonSet, so with MetalLB in place the Ingress is reachable on all three nodes.

  • Note: the ingress-nginx EXTERNAL-IP under test is 172.29.9.60.
1[root@master1 canary]#kubectl get svc -ningress-nginx
2NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
3ingress-nginx-controller             LoadBalancer   10.108.58.246   172.29.9.60   80:32439/TCP,443:31347/TCP   2d15h
4ingress-nginx-controller-admission   ClusterIP      10.101.184.28   <none>        443/TCP                      2d15h

1. Inspect the default ingress-nginx ConfigMap

Besides customizing individual Ingresses with annotations, we can also set global configuration for ingress-nginx. The controller's startup arguments point to a global ConfigMap object via the --configmap flag, and we can define global settings directly in that object:

 1[root@master1 ~]#kubectl get po -ningress-nginx
 2NAME                                      READY   STATUS      RESTARTS   AGE 
 3ingress-nginx-admission-create--1-5h6rr   0/1     Completed   0          4d3h
 4ingress-nginx-admission-patch--1-jdn2k    0/1     Completed   0          4d3h
 5ingress-nginx-controller-7hzwd            1/1     Running     0          20h 
 6ingress-nginx-controller-s9psd            1/1     Running     0          20h 
 7ingress-nginx-controller-tf2mp            1/1     Running     0          20h 
 8
 9[root@master1 ~]#kubectl edit pod ingress-nginx-controller-7hzwd -ningress-nginx
10……
11containers:
12  - args:
13    - /nginx-ingress-controller
14    - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
15    ......
16    
17[root@master1 ~]#kubectl get cm ingress-nginx-controller -ningress-nginx
18NAME                       DATA   AGE 
19ingress-nginx-controller   1      4d3h
20[root@master1 ~]#kubectl get cm ingress-nginx-controller -ningress-nginx -oyaml
21apiVersion: v1
22data:
23  allow-snippet-annotations: "true" # note this key
24kind: ConfigMap
25metadata:
26  annotations:
27    kubectl.kubernetes.io/last-applied-configuration: |
28      {"apiVersion":"v1","data":{"allow-snippet-annotations":"true"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kubernetes.io/instance":"ingress-nginx","app.kubernetes.io/name":"ingress-nginx","app.kubernetes.io/part-of":"ingress-nginx","app.kubernetes.io/version":"1.5.1"},"name":"ingress-nginx-controller","namespace":"ingress-nginx"}}
29  creationTimestamp: "2023-03-11T07:05:30Z"
30  labels:
31    app.kubernetes.io/component: controller
32    app.kubernetes.io/instance: ingress-nginx
33    app.kubernetes.io/name: ingress-nginx
34    app.kubernetes.io/part-of: ingress-nginx
35    app.kubernetes.io/version: 1.5.1
36  name: ingress-nginx-controller
37  namespace: ingress-nginx
38  resourceVersion: "228849"
39  uid: 43e910ef-4b42-433a-9c1b-bd62214731c2    

2. Edit the default ingress-nginx ConfigMap

  • For example, we can add some commonly used settings like the following:
 1[root@master1 ~]#kubectl edit configmap ingress-nginx-controller -n ingress-nginx
 2apiVersion: v1
 3data:
 4  allow-snippet-annotations: "true"
 5  client-header-buffer-size: 32k  # note: hyphens, not underscores
 6  client-max-body-size: 5m
 7  use-gzip: "true"
 8  gzip-level: "7"
 9  large-client-header-buffers: 4 32k
10  proxy-connect-timeout: 11s
11  proxy-read-timeout: 12s
12  keep-alive: "75"   # enable keep-alive connection reuse to improve QPS; generally recommended in production
13  keep-alive-requests: "100"
14  upstream-keepalive-connections: "10000"
15  upstream-keepalive-requests: "100"
16  upstream-keepalive-timeout: "60"
17  disable-ipv6: "true"
18  disable-ipv6-dns: "true"
19  max-worker-connections: "65535"
20  max-worker-open-files: "10240"
21kind: ConfigMap
22......
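One pitfall when editing this ConfigMap: every value under `data` must be a string, so numeric settings like `keep-alive` have to be quoted or the API server rejects the object. A tiny sketch (hypothetical helper, not part of any API) that checks a dict before it is serialized to YAML:

```python
# ConfigMap `data` values must be strings; unquoted YAML numbers/booleans
# (e.g. keep-alive: 75) are rejected by the API server.
def lint_configmap_data(data: dict) -> list:
    """Return the keys whose values are not strings."""
    return [k for k, v in data.items() if not isinstance(v, str)]

good = {"keep-alive": "75", "use-gzip": "true"}
bad = {"keep-alive": 75, "use-gzip": True}

print(lint_configmap_data(good))  # []
print(lint_configmap_data(bad))   # ['keep-alive', 'use-gzip']
```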

image-20230315182929818

3. Verify

  • After the change, the Nginx configuration reloads automatically; we can verify by inspecting nginx.conf:
 1[root@master1 ~]#kubectl logs ingress-nginx-controller-7hzwd  -ningress-nginx
 2I0315 10:29:19.809867       7 controller.go:168] "Configuration changes detected, backend reload required"
 3I0315 10:29:20.201208       7 controller.go:185] "Backend successfully reloaded"
 4I0315 10:29:20.210370       7 event.go:285] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7hzwd", UID:"eef199ea-1f92-48ab-8ffb-10f252d3b2da", 
 5APIVersion:"v1", ResourceVersion:"277964", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
 6[root@master1 ~]#
 7
 8[root@master1 ~]#kubectl exec -it ingress-nginx-controller-7hzwd -n ingress-nginx -- cat /etc/nginx/nginx.conf |grep large_client_header_buffers
 9        large_client_header_buffers     4 32k;
10[root@master1 ~]#

Below are the `ab` benchmark results:

  • Without the configuration

image-20230315122501012

image-20230315122605792

image-20230315122617964

  • With the configuration

image-20230315122401976

image-20230315122410036

Test complete. 😘

In addition, we often need to tune the nodes where ingress-nginx runs, adjusting kernel parameters to suit Nginx's workload. Usually this is done directly on the node, but for unified management we can apply the settings via initContainers:

(See the official blog https://www.nginx.com/blog/tuning-nginx/ for tuning guidance.)

initContainers:
- command:
  - /bin/sh
  - -c
  - |
    mount -o remount rw /proc/sys
    sysctl -w net.core.somaxconn=65535  # adjust each value to your environment
    sysctl -w net.ipv4.tcp_tw_reuse=1
    sysctl -w net.ipv4.ip_local_port_range="1024 65535"
    sysctl -w fs.file-max=1048576
    sysctl -w fs.inotify.max_user_instances=16384
    sysctl -w fs.inotify.max_user_watches=524288
    sysctl -w fs.inotify.max_queued_events=16384
  image: busybox
  imagePullPolicy: IfNotPresent
  name: init-sysctl
  securityContext:
    capabilities:
      add:
      - SYS_ADMIN
      drop:
      - ALL
......

Once deployed, the initContainers adjust the node kernel parameters; tuning them appropriately is recommended for production environments.
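Those sysctl settings can also be kept as data and rendered into the initContainer script, which makes it easier to vary them per environment. A minimal sketch, with the parameter values assumed from the example above:

```python
# Keep tuning parameters as data and render the `sysctl -w` lines for the
# initContainer script. Values mirror the example above; tune per node.
TUNING = {
    "net.core.somaxconn": "65535",
    "net.ipv4.tcp_tw_reuse": "1",
    "fs.file-max": "1048576",
}

def render_sysctl_script(params: dict) -> str:
    lines = ["mount -o remount rw /proc/sys"]
    lines += [f"sysctl -w {key}={value}" for key, value in params.items()]
    return "\n".join(lines)

print(render_sysctl_script(TUNING))
```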

Performance tuning takes experience; for Nginx tuning specifically, see https://cloud.tencent.com/developer/article/1026833.

7、gRPC

The ingress-nginx controller also supports gRPC services. gRPC is a high-performance RPC framework open-sourced by Google. It uses Protocol Buffers as its IDL, works across languages and platforms, and is built on HTTP/2, providing multiplexing, header compression, and flow control, which greatly improves client-server communication efficiency. In a gRPC service, a client application can call methods of a server application running on a different machine as if they were local, making it easy to build distributed applications and services. Like other RPC frameworks, gRPC requires defining a service interface that specifies the remotely callable methods and their return types; the server implements this interface and runs a gRPC server to handle client requests.

image-20230316091932261

==💘 Hands-on: Ingress-nginx with gRPC - 2023.3.16 (tested successfully)==

image-20230316103435494

  • Steps
graph LR
	A[Steps] -->B(1. Build the image)
	A[Steps] -->C(2. Deploy the application)
    A[Steps] -->D(3. Generate an SSL certificate)
    A[Steps] -->E(4. Add the TLS Secret to the cluster)
    A[Steps] -->F(5. Create the Ingress resource)
    A[Steps] -->G(6. Test)
  • Environment
Environment:
1. win10, VMware Workstation VMs
2. k8s cluster: three CentOS 7.6 1810 VMs, 1 master node, 2 worker nodes
   k8s version: v1.22.2
   containerd: v1.5.5
  • Software

Link: https://pan.baidu.com/s/1eqLLhYrl7_Q1nPGfDLUxGg?pwd=tbdz  Extraction code: tbdz  (2023.3.16 Hands-on: Ingress-nginx with gRPC, tested successfully)

image-20230316100253886

  • Prerequisites

ingress-nginx is already installed (its Service is of type LoadBalancer).

MetalLB is already deployed. (You could also skip MetalLB and access the service via domain:NodePort; we use the LB here for convenience.)

For ingress-nginx deployment, see:

Local document:

image-20230311153824838

CSDN link:

https://blog.csdn.net/weixin_39246554/article/details/129334116?spm=1001.2014.3001.5501

image-20230308212123364

For MetalLB deployment, see:

Local document:

image-20230308212145378

CSDN link:

https://blog.csdn.net/weixin_39246554/article/details/129343617?spm=1001.2014.3001.5501

image-20230308212209034

⚠️ Note: ingress-nginx is deployed as a DaemonSet, so with MetalLB in place the Ingress is reachable on all three nodes.

  • Note: the ingress-nginx EXTERNAL-IP under test is 172.29.9.60.
1[root@master1 canary]#kubectl get svc -ningress-nginx
2NAME                                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
3ingress-nginx-controller             LoadBalancer   10.108.58.246   172.29.9.60   80:32439/TCP,443:31347/TCP   2d15h
4ingress-nginx-controller-admission   ClusterIP      10.101.184.28   <none>        443/TCP                      2d15h

1. Build the image

FROM golang:buster as build
WORKDIR /go/src/greeter-server
RUN curl -o main.go https://raw.githubusercontent.com/grpc/grpc-go/master/examples/features/reflection/server/main.go && \
    go mod init greeter-server && \
    go mod tidy && \
    go build -o /greeter-server main.go
FROM gcr.io/distroless/base-debian10
COPY --from=build /greeter-server /
EXPOSE 50051
CMD ["/greeter-server"]

image-20230316092050938

2. Deploy the application

  • Then we can deploy the application with this image; the corresponding manifest is shown below:
# grpc-ingress-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-grpc-greeter-server
spec:
  selector:
    matchLabels:
      app: go-grpc-greeter-server
  template:
    metadata:
      labels:
        app: go-grpc-greeter-server
    spec:
      containers:
      - name: go-grpc-greeter-server
        image: cnych/go-grpc-greeter-server:v0.1 # replace with your own image
        ports:
          - containerPort: 50051
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 50m
            memory: 50Mi
---
apiVersion: v1
kind: Service
metadata:
  name: go-grpc-greeter-server
  labels:
    app: go-grpc-greeter-server
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 50051
  selector:
    app: go-grpc-greeter-server
  type: ClusterIP
  • Apply the above manifest directly:
 1[root@master1 gRPC]#kubectl apply -f grpc-ingress-app.yaml 
 2deployment.apps/go-grpc-greeter-server created
 3service/go-grpc-greeter-server created
 4
 5[root@master1 gRPC]#kubectl get pods -l app=go-grpc-greeter-server 
 6NAME                                      READY   STATUS    RESTARTS   AGE
 7go-grpc-greeter-server-67fdff8d85-74tz7   1/1     Running   0          84s
 8[root@master1 gRPC]#kubectl get svc -l app=go-grpc-greeter-server
 9NAME                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
10go-grpc-greeter-server   ClusterIP   10.101.248.59   <none>        80/TCP    3m8s

3. Generate an SSL certificate

Next we need to create an Ingress object to expose the gRPC service above. Since gRPC runs only over HTTPS (port 443 by default), we need a domain name and a matching SSL certificate; here we use the domain grpc.172.29.9.60.nip.io and a self-signed certificate.

Forwarding gRPC through Ingress requires an SSL certificate for the domain and TLS for transport; here we use OpenSSL to generate a self-signed certificate.

  • Copy the following content into a file named openssl.cnf.

image-20230316093645655

image-20230316093653312

[ req ]
#default_bits = 2048
#default_md = sha256
#default_keyfile = privkey.pem
distinguished_name = req_distinguished_name
attributes = req_attributes
req_extensions = v3_req

[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_min = 2
countryName_max = 2
stateOrProvinceName = State or Province Name (full name)
localityName = Locality Name (eg, city)
0.organizationName = Organization Name (eg, company)
organizationalUnitName = Organizational Unit Name (eg, section)
commonName = Common Name (eg, fully qualified host name)
commonName_max = 64
emailAddress = Email Address
emailAddress_max = 64

[ req_attributes ]
challengePassword = A challenge password
challengePassword_min = 4
challengePassword_max = 20

[v3_req]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = grpc.172.29.9.60.nip.io
  • Then generate the certificate signing request:
 1[root@master1 gRPC]#openssl req -new -nodes -keyout grpc.key -out grpc.csr -config openssl.cnf -subj "/C=CN/ST=Beijing/L=Beijing/O=Youdianzhishi/OU=TrainService/CN=grpc.172.29.9.60.nip.io"
 2Generating a 2048 bit RSA private key
 3.........................+++
 4...............................................+++
 5writing new private key to 'grpc.key'
 6-----
 7[root@master1 gRPC]#ll
 8total 16
 9-rw-r--r-- 1 root root 1147 Mar 16 09:42 grpc.csr
10-rw-r--r-- 1 root root  834 Mar 16 09:32 grpc-ingress-app.yaml
11-rw-r--r-- 1 root root 1704 Mar 16 09:42 grpc.key
12-rw-r--r-- 1 root root  961 Mar 16 09:41 openssl.cnf
13[root@master1 gRPC]#
  • Then sign the certificate:
 1[root@master1 gRPC]#openssl x509 -req -days 3650 -in grpc.csr -signkey grpc.key -out grpc.crt -extensions v3_req -extfile openssl.cnf
 2Signature ok
 3subject=/C=CN/ST=Beijing/L=Beijing/O=Youdianzhishi/OU=TrainService/CN=grpc.172.29.9.60.nip.io
 4Getting Private key
 5[root@master1 gRPC]#ll
 6total 20
 7-rw-r--r-- 1 root root 1371 Mar 16 09:43 grpc.crt
 8-rw-r--r-- 1 root root 1147 Mar 16 09:42 grpc.csr
 9-rw-r--r-- 1 root root  834 Mar 16 09:32 grpc-ingress-app.yaml
10-rw-r--r-- 1 root root 1704 Mar 16 09:42 grpc.key
11-rw-r--r-- 1 root root  961 Mar 16 09:41 openssl.cnf
12[root@master1 gRPC]#

4. Add the TLS Secret to the cluster

  • After the commands succeed, we have the certificate grpc.crt and the private key grpc.key. Run the following to add a TLS Secret named grpc-secret to the cluster:
1[root@master1 gRPC]# kubectl create secret tls grpc-secret --key grpc.key --cert grpc.crt
2secret/grpc-secret created
3
4[root@master1 gRPC]#kubectl get secret 
5NAME                  TYPE                                  DATA   AGE
6……
7grpc-secret           kubernetes.io/tls                     2      15s

5. Create the Ingress resource

  • Create the Ingress resource

Then create the following Ingress object to expose the gRPC service:

#grpc-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grpc-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    # required: marks the backend service as gRPC
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
spec:
  ingressClassName: nginx  # the nginx IngressClass (bound to the ingress-nginx controller)
  rules:
  - host: grpc.172.29.9.60.nip.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: go-grpc-greeter-server
            port:
              number: 80
  tls:
    - secretName: grpc-secret
      hosts:
        - grpc.172.29.9.60.nip.io
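A common mistake in manifests like this one is a mismatch between `spec.rules[].host` and `spec.tls[].hosts`, which silently falls back to the default certificate. A small sketch (hypothetical helper, operating on a plain parsed dict) that cross-checks the two lists:

```python
# Cross-check that every TLS host in an Ingress spec also appears in a rule,
# and vice versa. Operates on a plain dict (as yaml.safe_load would produce);
# the spec below is inlined to stay self-contained.
def check_ingress_hosts(spec: dict) -> list:
    rule_hosts = {r["host"] for r in spec.get("rules", [])}
    tls_hosts = {h for t in spec.get("tls", []) for h in t.get("hosts", [])}
    # symmetric difference: hosts present on only one side
    return sorted(rule_hosts ^ tls_hosts)

spec = {
    "rules": [{"host": "grpc.172.29.9.60.nip.io"}],
    "tls": [{"secretName": "grpc-secret", "hosts": ["grpc.172.29.9.60.nip.io"]}],
}
print(check_ingress_hosts(spec))  # [] means the two lists agree
```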

Note that we added the annotation nginx.ingress.kubernetes.io/backend-protocol: "GRPC" to this object; it marks the backend as a gRPC service and is required.

  • Create the object directly as before:
 1[root@master1 gRPC]#kubectl apply -f grpc-ingress.yaml 
 2ingress.networking.k8s.io/grpc-ingress created
 3
 4[root@master1 gRPC]#kubectl get ingress
 5NAME                        CLASS   HOSTS                              ADDRESS       PORTS     AGE
 6external-auth               nginx   external-auth.172.29.9.60.nip.io   172.29.9.60   80        3d10h
 7grpc-ingress                nginx   grpc.172.29.9.60.nip.io            172.29.9.60   80, 443   15s
 8ingress-nginx-url-rewrite   nginx   rewrite.172.29.9.60.nip.io         172.29.9.60   80        3d3h
 9ingress-with-auth           nginx   auth.172.29.9.60.nip.io            172.29.9.60   80        3d11h
10[root@master1 gRPC]#

6. Test

Next we install gRPCurl, a command-line tool similar to cURL but for interacting with gRPC servers.

  • Install the gRPCurl tool

The gRPCurl tool

image-20230316095511815

 1[root@master1 gRPC]#tar tf grpcurl_1.8.7_linux_x86_64.tar.gz 
 2LICENSE
 3grpcurl
 4[root@master1 gRPC]#tar xf grpcurl_1.8.7_linux_x86_64.tar.gz 
 5[root@master1 gRPC]#ll
 6total 31520
 7-rw-r--r-- 1 root root      1371 Mar 16 09:43 grpc.crt
 8-rw-r--r-- 1 root root      1147 Mar 16 09:42 grpc.csr
 9-rw-r--r-- 1 root root       834 Mar 16 09:32 grpc-ingress-app.yaml
10-rw-r--r-- 1 root root       714 Mar 16 09:51 grpc-ingress.yaml
11-rw-r--r-- 1 root root      1704 Mar 16 09:42 grpc.key
12-rwxr-xr-x 1  503 games 24784896 Aug  9  2022 grpcurl
13-rw-r--r-- 1 root root   7460415 Mar 16 09:55 grpcurl_1.8.7_linux_x86_64.tar.gz
14-rw-r--r-- 1  503 games     1080 Dec  6  2017 LICENSE
15-rw-r--r-- 1 root root       961 Mar 16 09:41 openssl.cnf
16[root@master1 gRPC]#cp grpcurl /usr/bin/
17[root@master1 gRPC]#
18
19[root@master1 gRPC]#grpcurl --help
20Usage:
21        grpcurl [flags] [address] [list|describe] [symbol]
22
23The 'address' is only optional when used with 'list' or 'describe' and a
24protoset or proto flag is provided.
25
26If 'list' is indicated, the symbol (if present) should be a fully-qualified
27service name. If present, all methods of that service are listed. If not
28present, all exposed services are listed, or all services defined in protosets.
29
30If 'describe' is indicated, the descriptor for the given symbol is shown. The
31symbol should be a fully-qualified service, enum, or message name. If no symbol
32is given then the descriptors for all exposed or known services are shown.
33
34If neither verb is present, the symbol must be a fully-qualified method name in
35'service/method' or 'service.method' format. In this case, the request body will
  • With gRPCurl installed locally, run grpcurl <domain>:443 list to verify that requests are forwarded to the backend service. Since we use the domain grpc.172.29.9.60.nip.io with a self-signed certificate, the command needs an extra -insecure flag:
1[root@master1 gRPC]#grpcurl -insecure grpc.172.29.9.60.nip.io:443 list
2grpc.examples.echo.Echo
3grpc.reflection.v1alpha.ServerReflection
4helloworld.Greeter
5[root@master1 gRPC]#

The output above indicates that traffic was successfully forwarded by the Ingress to the backend gRPC service.
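The `-insecure` flag corresponds to disabling certificate verification on the client side. A minimal sketch in Python of the equivalent TLS client settings one would use against a self-signed certificate (illustrative only, not a gRPC client itself, and not something to use against production endpoints):

```python
import ssl

# Build a TLS client context equivalent to grpcurl's -insecure flag:
# skip hostname checks and certificate chain verification.
def insecure_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False        # must be disabled before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE   # accept self-signed certificates
    return ctx

ctx = insecure_client_context()
print(ctx.check_hostname, ctx.verify_mode == ssl.CERT_NONE)  # False True
```

A better long-term option is to trust the self-signed CA explicitly (e.g. `ctx.load_verify_locations("grpc.crt")`) instead of disabling verification.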

Test complete. 😘

About me

My blog's guiding principles:

  • Clean layout and concise language;
  • Docs as runbooks: detailed steps, no hidden pitfalls, source code provided;
  • Every hands-on document is personally tested. If you run into any problems while following along, feel free to contact me and we will work through them together!

🍀 WeChat QR code: x2675263825 (Shede), QQ: 2675263825.

image-20230107215114763

🍀 WeChat Official Account: 《云原生架构师实战》 (Cloud Native Architect in Action)

image-20230107215126971

🍀 Yuque

https://www.yuque.com/xyy-onlyone

image-20230306221144511

🍀 CSDN https://blog.csdn.net/weixin_39246554?spm=1010.2135.3001.5421

image-20230107215149885

🍀 Zhihu https://www.zhihu.com/people/foryouone

image-20230107215203185

Finally

That's all for this post. Thanks for reading, and may every day be a happy and meaningful one. See you next time!
