Ingress and Application Publishing
1 Ingress and Ingress Controller
Traffic management in a Kubernetes cluster
Service
◼ A layer-4 load-balancing mechanism implemented with iptables or IPVS
◼ Does not support advanced traffic-governance features for HTTP/HTTPS, such as URL-based routing, timeouts/retries, or traffic-based canary releases
◼ Makes it difficult to manage the traffic of multiple Services in a unified way
Ingress
◼ Composed of the Ingress API and an Ingress Controller
◆ The former defines traffic-scheduling and routing rules in the standard Kubernetes resource format
◆ The latter watches Ingress resources, generates its own configuration from them, and forwards traffic accordingly
◼ The Ingress Controller is not a built-in controller and must be deployed separately
◆ It usually runs as Pods on the Kubernetes cluster
◆ A dedicated LoadBalancer Service should normally be used to bring external traffic into it
Ingress and Ingress Controller
Ingress
◼ One of the standard API resource types in Kubernetes
◼ Defines only abstract routing configuration; it is just metadata and must be loaded dynamically by a corresponding controller
Ingress Controller
◼ A reverse-proxy server program; it watches the API Server for changes to Ingress resources and reflects them in its own configuration
◼ Not built into Kubernetes; one must be chosen and deployed separately
◆ Many implementations exist, including Ingress-Nginx, HAProxy, Traefik, Gloo, Contour, and Kong
◆ Kubernetes supports running two or more Ingress Controllers at the same time
◆ An Ingress resource can indicate which Ingress Controller should load it, either via a specific annotation or via a dedicated field nested in its spec
⚫ Dedicated annotation: kubernetes.io/ingress.class
⚫ Since v1.18, the Ingress spec has a new field, ingressClassName, which references an IngressClass, a dedicated resource type (a minimal example follows)
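A minimal IngressClass sketch, shown for illustration; the name "nginx" and the controller string match what the Ingress-Nginx deployment used later in this section creates by default, so adjust them to your environment:
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx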
Ingress relies on Service resources to discover backend endpoints
However, based on the Ingress definition, the Ingress Controller sends traffic directly to the backend endpoints of the related Service; the forwarding does not pass through the Service again (a minimal Ingress is sketched below)
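A minimal Ingress sketch that ties these pieces together; the hostname, Service name, and port are placeholders for illustration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
  - host: www.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo-svc
            port:
              number: 80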
Deploying an Ingress Controller
Using the community-maintained Ingress-Nginx as an example
◼ Reference: https://kubernetes.github.io/ingress-nginx/deploy/
◆ Choose according to your environment; for a kubeadm-deployed cluster, for example, we can use the "Bare-metal" instructions
◆ Taking v1.5.1 as the example
⚫ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
Deploying Ingress-Nginx
Reference: https://kubernetes.github.io/ingress-nginx/deploy/
Step 1:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.5.1/deploy/static/provider/cloud/deploy.yaml
Step 2: check the namespaces (an ingress-nginx namespace has been created)
[root@K8s-master01 ~]#kubectl get ns
NAME STATUS AGE
default Active 13d
ingress-nginx Active 2m34s
List the Pods in the ingress-nginx namespace
[root@K8s-master01 ~]#kubectl get pods -n ingress-nginx -w
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-gqj79 0/1 Completed 0 3m27s
ingress-nginx-admission-patch-fg64t 0/1 Completed 2 3m27s
ingress-nginx-controller-8574b6d7c9-kclkw 1/1 Running 0 3m27s
List the Services in the ingress-nginx namespace
[root@K8s-master01 ~]#kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.98.75.79 <pending> 80:31165/TCP,443:32332/TCP 5m23s
ingress-nginx-controller-admission ClusterIP 10.109.12.148 <none> 443/TCP 5m22s
An ingressclass resource is generated automatically
[root@K8s-master01 ~]#kubectl get ingressclass
NAME CONTROLLER PARAMETERS AGE
nginx k8s.io/ingress-nginx <none> 8m13s
Inspect the Pods and Services in detail: requests can only be made through the IP of node01, the node where the controller Pod runs; the other nodes do not answer
[root@K8s-master01 ~]#kubectl get pods -n ingress-nginx -o wide
ingress-nginx-controller-8574b6d7c9-kclkw 1/1 Running 0 14m 10.244.3.107 k8s-node01
[root@K8s-master01 ~]#kubectl get svc -n ingress-nginx -o wide
ingress-nginx-controller LoadBalancer 10.98.75.79 <pending> 80:31165/TCP,443:32332/TCP
Access test: requests to node01 succeed
C:\Users\Administrator>curl 10.0.0.103:31165
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
Requests to the other nodes get no response
C:\Users\Administrator>curl 10.0.0.104:31165
To make all three worker nodes usable, the external traffic policy must be changed to Cluster. This policy has a drawback: traffic may be scheduled to Pods on other nodes, which improves load distribution but adds an extra cross-node hop and costs some performance. For production, the default Local policy is recommended.
Change the setup so the controller can be reached on ports 80 and 443
Step 1:
Add an externally reachable IP address on a worker node (preferably on the node where the controller Pod runs, so no extra hop is introduced)
[root@K8s-node01 ~]#vim /etc/netplan/01-netcfg.yaml
addresses:
- 10.0.0.103/24
- 10.0.0.200/24
[root@K8s-node01 ~]#netplan apply
Check:
[root@K8s-node01 ~]#ip a
eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 00:0c:29:69:a0:82 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.103/24 brd 10.0.0.255 scope global eth0
valid_lft forever preferred_lft forever
inet 10.0.0.200/24 brd 10.0.0.255 scope global secondary eth0
Step 2: (in production, pair this with keepalived so the setup keeps working even if the node holding the IP goes down)
Edit the Service in place to change the external traffic policy and add the externally reachable IP (an equivalent kubectl patch is sketched below)
[root@K8s-master01 ~]#kubectl edit svc ingress-nginx-controller -n ingress-nginx
externalTrafficPolicy: Cluster
externalIPs:
- 10.0.0.200
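The same change can be applied non-interactively; a sketch using kubectl patch, where the IP is the one added in step 1 and should be adjusted to your environment:
kubectl -n ingress-nginx patch svc ingress-nginx-controller --type=merge -p '{"spec":{"externalTrafficPolicy":"Cluster","externalIPs":["10.0.0.200"]}}'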
Check the Service
[root@K8s-master01 ~]#kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller LoadBalancer 10.98.75.79 10.0.0.200 80:31165/TCP,443:32332/TCP 2m41s
ingress-nginx-controller-admission ClusterIP 10.109.12.148 <none> 443/TCP 55m
Now access the IP from outside the cluster
C:\Users\Administrator>curl 10.0.0.200
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>
2 Ingress Types
Type 1: Simple fanout
Distributes traffic among different applications under a single FQDN, based on the URI
◼ A single virtual host receives the traffic of multiple applications
◼ Commonly used to fan traffic out to several sub-applications of one application; within each application, requests are spread across its backend endpoints by the scheduling algorithm
◼ No dedicated domain name is needed for each application
Type 2: Name based virtual hosting
Uses a dedicated hostname for each application and forwards traffic among applications based on these names
◼ Each FQDN corresponds to one virtual-host definition on the Ingress Controller
◼ Within a group, requests to an application are scheduled by the Ingress Controller according to its scheduling algorithm
Type 3: TLS
Ingress can also provide TLS support, but only on port 443/TCP
◼ If the TLS section specifies different hosts, they are multiplexed on the same port according to the hostname indicated via the SNI TLS extension
◆ Prerequisite: the Ingress controller supports SNI
◼ The TLS Secret must contain keys named tls.crt and tls.key, holding the TLS certificate and the private key respectively (an illustrative Secret is sketched below)
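A minimal sketch of such a Secret, shown only to illustrate its shape; the data values are placeholders, and in practice the object is usually created with kubectl create secret tls, as demonstrated later in this section:
apiVersion: v1
kind: Secret
metadata:
  name: tls-demo
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>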
3 Ingress Resources
Configuration examples
Imperative commands
◼ Command to create an Ingress: kubectl create ingress NAME --rule=host/path=service:port[,tls[=secret]] (a dry-run preview is sketched after this list)
◼ Common options
◆ --annotation=[]: annotations to set, in the form "annotation=value"
◆ --rule=[]: proxy rules, in the form "host/path=service:port[,tls=secretname]"
◆ --class='': the Ingress Class this Ingress targets
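To preview the manifest such a command generates without creating anything, --dry-run=client -o yaml can be appended; a sketch in which the hostname and Service name are placeholders:
kubectl create ingress demo --rule="www.example.com/*=demo-svc:80" --class=nginx --dry-run=client -o yaml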
There are three typical ways to define Ingress resources
Approach 1:
Simple fanout
◆ When proxying requests for different applications by URI, if the backend application's URI differs from the URI used for proxying, URL rewrite must be enabled to rewrite it
⚫ Ingress-Nginx supports this through the nginx.ingress.kubernetes.io/rewrite-target annotation
◆ Example: for requests sent to demoapp.meng.com, proxy "/v10" to service/demoapp10 and "/v11" to service/demoapp11
⚫ kubectl create ingress demo --rule="demoapp.meng.com/v10=demoapp10:80" --rule="demoapp.meng.com/v11=demoapp11:80" --class=nginx --annotation nginx.ingress.kubernetes.io/rewrite-target="/"
◆ Example 2: same function as above, but with URI prefix matching instead of exact matching, and URL rewrite driven by a regular-expression pattern
⚫ kubectl create ingress demo --rule='demoapp.meng.com/v10(/|$)(.*)=demoapp10:80' --rule='demoapp.meng.com/v11(/|$)(.*)=demoapp11:80' --class=nginx --annotation nginx.ingress.kubernetes.io/rewrite-target='/$2'
Approach 2:
Name based virtual hosting
◆ When proxying requests for different applications by FQDN, multiple domain names must be prepared in advance, and their resolution must reach the Ingress Controller
◆ Example: proxy requests for demoapp10.meng.com to service/demoapp10 and requests for demoapp11.meng.com to service/demoapp11
⚫ kubectl create ingress demoapp --rule="demoapp10.meng.com/*=demoapp10:80" --rule="demoapp11.meng.com/*=demoapp11:80" --class=nginx
Approach 3:
TLS
◆ A TLS-based Ingress requires a dedicated Secret object of type "kubernetes.io/tls" to be prepared in advance
⚫ (umask 077; openssl genrsa -out meng.key 2048)
⚫ openssl req -new -x509 -key meng.key -out meng.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=services.meng.com
⚫ kubectl create secret tls tls-meng --cert=./meng.crt --key=./meng.key
◆ Create a regular virtual-host proxy rule and at the same time declare the host as TLS
⚫ kubectl create ingress tls-demo --rule='demoapp.meng.com/*=demoapp10:80,tls=tls-meng' --class=nginx
◆ Note: once TLS is enabled, all URIs under that domain force-redirect HTTP requests to HTTPS by default; to turn that off, use the following annotation option (shown in YAML form below)
⚫ --annotation nginx.ingress.kubernetes.io/ssl-redirect=false
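In a declarative manifest the same switch would sit under metadata.annotations; an illustrative fragment:
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"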
There are two ways to expose services outside the cluster with ingress-nginx
Method 1: Simple fanout, distributing traffic among different applications under the same virtual hostname by URI
Hands-on example:
Prepare the environment: two Services (demoapp10 and demoapp11)
Step 1: deploy demoapp v1.0
kubectl create deployment demoapp10 --image=ikubernetes/demoapp:v1.0 --replicas=2
Step 2: deploy demoapp v1.1
kubectl create deployment demoapp11 --image=ikubernetes/demoapp:v1.1 --replicas=2
Step 3: create the Services that provide the unified traffic entry point
kubectl create service clusterip demoapp10 --tcp=80:80
kubectl create service clusterip demoapp11 --tcp=80:80
Check the endpoints
[root@K8s-master01 ~]#kubectl get endpoints
NAME ENDPOINTS AGE
demoapp-svc <none> 4d3h
demoapp10 10.244.4.95:80,10.244.5.79:80 42s
demoapp11 10.244.3.108:80,10.244.5.80:80 33s
kubernetes 10.0.0.100:6443,10.0.0.101:6443,10.0.0.102:6443 4d8h
Step 4: expose the services outside the cluster with ingress-nginx; there are two ways
1. Use a single virtual host
For requests sent to demoapp.meng.com, proxy "/v10" to service/demoapp10 and "/v11" to service/demoapp11
Create the Ingress
[root@K8s-master01 ~]#kubectl create ingress demoapp --rule="demoapp.meng.com/v10=demoapp10:80" --rule="demoapp.meng.com/v11=demoapp11:80" --class=nginx --annotation nginx.ingress.kubernetes.io/rewrite-target="/"
ingress.networking.k8s.io/demoapp created
Check the Ingress
[root@K8s-master01 ~]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp nginx demoapp.meng.com 10.0.0.200 80 47s
Inspect the demoapp Ingress in detail
[root@K8s-master01 ~]#kubectl get ingress demoapp -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: "2022-11-20T12:24:29Z"
  generation: 1
  name: demoapp
  namespace: default
  resourceVersion: "1195248"
  uid: 7c7f4727-f35b-4ebf-97c6-6df7aea931e2
spec:
  ingressClassName: nginx
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp10
            port:
              number: 80
        path: /v10
        pathType: Exact
      - backend:
          service:
            name: demoapp11
            port:
              number: 80
        path: /v11
        pathType: Exact
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.200
Check how the above is rendered into the nginx configuration
[root@K8s-master01 ~]#kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-gqj79 0/1 Completed 0 133m
ingress-nginx-admission-patch-fg64t 0/1 Completed 2 133m
ingress-nginx-controller-8574b6d7c9-kclkw 1/1 Running 0 133m
[root@K8s-master01 ~]#kubectl exec -it ingress-nginx-controller-8574b6d7c9-kclkw -n ingress-nginx -- /bin/sh
/etc/nginx $ nginx -T | less
Access it from an external client
Step 1: name resolution (add a hosts entry; an alternative using curl --resolve is sketched after it)
10.0.0.200 demoapp.meng.com
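If editing the hosts file is inconvenient, the same test can be done with curl's --resolve option, assuming curl is available on the client; a sketch:
curl --resolve demoapp.meng.com:80:10.0.0.200 http://demoapp.meng.com/v10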
Step 2: access /v10 (load balancing also works fine)
http://demoapp.meng.com/v10
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.107, ServerName: demoapp10-845686d545-qg5dj, ServerIP: 10.244.5.79!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.107, ServerName: demoapp10-845686d545-bb8gn, ServerIP: 10.244.4.95!
Access /v11 (load balancing also works fine)
http://demoapp.meng.com/v11
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.107, ServerName: demoapp11-5457978bc9-s79qv, ServerIP: 10.244.5.80!
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.107, ServerName: demoapp11-5457978bc9-lc2qb, ServerIP: 10.244.3.108!
Step 3: check the logs of the related Pods (all requests appear to come from the IP address of the nginx Pod)
[root@K8s-master01 ~]#kubectl get pods
NAME READY STATUS RESTARTS AGE
daemonset-demo-k4mmx 1/1 Running 4 (6h41m ago) 3d10h
daemonset-demo-nn96h 1/1 Running 3 3d10h
daemonset-demo-wj4bq 1/1 Running 5 3d10h
demoapp10-845686d545-bb8gn 1/1 Running 0 35m
demoapp10-845686d545-qg5dj 1/1 Running 0 35m
demoapp11-5457978bc9-lc2qb 1/1 Running 0 35m
demoapp11-5457978bc9-s79qv 1/1 Running 0 35m
pod-with-dnspolicy 1/1 Running 5 (6h41m ago) 4d23h
[root@K8s-master01 ~]#kubectl logs demoapp10-845686d545-bb8gn
* Running on http://0.0.0.0:80/ (Press CTRL+C to quit)
10.244.3.107 - - [20/Nov/2022 12:41:53] "GET / HTTP/1.1" 200 -
10.244.3.107 - - [20/Nov/2022 12:41:54] "GET / HTTP/1.1" 200 -
10.244.3.107 - - [20/Nov/2022 12:41:55] "GET / HTTP/1.1" 200 -
With this approach, no other path can be used for access: each rule maps exactly one path, and no prefix mapping is possible
Same function as above, but with URI prefix matching instead of exact matching, and URL rewrite driven by a regular-expression pattern
Step 1: delete the previous Ingress
[root@K8s-master01 ~]#kubectl delete ingress demoapp
ingress.networking.k8s.io "demoapp" deleted
Create the new Ingress
[root@K8s-master01 ~]# kubectl create ingress demoapp --rule='demoapp.meng.com/v10(/|$)(.*)=demoapp10:80' --rule='demoapp.meng.com/v11(/|$)(.*)=demoapp11:80' --class=nginx --annotation nginx.ingress.kubernetes.io/rewrite-target="/$2"
ingress.networking.k8s.io/demoapp created
Check the Ingress
[root@K8s-master01 ~]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp nginx demoapp.meng.com 10.0.0.200 80 108s
Inspect the demoapp Ingress in detail
[root@K8s-master01 ~]#kubectl get ingress demoapp -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  creationTimestamp: "2022-11-20T13:00:20Z"
  generation: 1
  name: demoapp
  namespace: default
  resourceVersion: "1200083"
  uid: 504bcfc0-1348-40c1-aeb5-174c33dd0c00
spec:
  ingressClassName: nginx
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp10
            port:
              number: 80
        path: /v10(/|$)(.*)
        pathType: Exact
      - backend:
          service:
            name: demoapp11
            port:
              number: 80
        path: /v11(/|$)(.*)
        pathType: Exact
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.200
Access it from the external client (the name already resolves from before, so no new entry is needed)
http://demoapp.meng.com/v11
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.107, ServerName: demoapp11-5457978bc9-s79qv, ServerIP: 10.244.5.80!
Access with /hostname appended
http://demoapp.meng.com/v11/hostname
ServerName: demoapp11-5457978bc9-lc2qb
The exact path match has thus been turned, through the appended regular expression, into handling of any request whose path begins with demoapp.meng.com/v11
Method 2: Name based virtual hosting
Use a dedicated virtual hostname for each application and forward traffic among applications based on these names
When proxying requests for different applications by virtual hostname, multiple domain names must be prepared in advance, and their resolution must reach the Ingress Controller
Delete the previous demoapp Ingress
[root@K8s-master01 ~]#kubectl delete ingress demoapp
ingress.networking.k8s.io "demoapp" deleted
Create the new Ingress and inspect it
[root@K8s-master01 ~]#kubectl create ingress demoapp --rule="demoapp10.meng.com/*=demoapp10:80" --rule="demoapp11.meng.com/*=demoapp11:80" --class=nginx
ingress.networking.k8s.io/demoapp created
[root@K8s-master01 ~]#kubectl get ingress demoapp -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2022-11-20T14:19:14Z"
  generation: 1
  name: demoapp
  namespace: default
  resourceVersion: "1210562"
  uid: 398f855f-2613-4df4-8fc3-a6073a52bdd5
spec:
  ingressClassName: nginx
  rules:
  - host: demoapp10.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp10
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: demoapp11.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp11
            port:
              number: 80
        path: /
        pathType: Prefix
status:
  loadBalancer: {}
Any path starting with / is forwarded to the corresponding backend
Access it from an external client
Step 1: name resolution (hosts entry)
10.0.0.200 demoapp.meng.com demoapp10.meng.com demoapp11.meng.com
Step 2: access the hostnames
http://demoapp10.meng.com/
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.107, ServerName: demoapp10-845686d545-bb8gn, ServerIP: 10.244.4.95!
http://demoapp11.meng.com/
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.107, ServerName: demoapp11-5457978bc9-lc2qb, ServerIP: 10.244.3.108!
Configuring a service as a TLS virtual host
First delete the previous Ingress
[root@K8s-master01 ingress-demoapp]#kubectl delete ingress demoapp
ingress.networking.k8s.io "demoapp" deleted
Step 1: create a working directory for the Ingress material
[root@K8s-master01 ~]#mkdir ingress-demoapp
Step 2: generate the certificate's private key
[root@K8s-master01 ~]#cd ingress-demoapp/
[root@K8s-master01 ingress-demoapp]#(umask 077; openssl genrsa -out meng.key 2048)
Generating RSA private key, 2048 bit long modulus (2 primes)
..........................+++++
............................................................+++++
e is 65537 (0x010001)
Step 3: since this is a test environment, self-sign the certificate directly instead of setting up a private CA
[root@K8s-master01 ingress-demoapp]#openssl req -new -x509 -key meng.key -out meng.crt -subj /C=CN/ST=Beijing/L=Beijing/O=DevOps/CN=demoapp.meng.com
[root@K8s-master01 ingress-demoapp]#ls
meng.crt meng.key
Step 4: create the Secret from the self-signed certificate
[root@K8s-master01 ingress-demoapp]# kubectl create secret tls tls-meng --cert=./meng.crt --key=./meng.key
secret/tls-meng created
Check the Secret
[root@K8s-master01 ingress-demoapp]#kubectl get secret
NAME TYPE DATA AGE
tls-meng kubernetes.io/tls 2 8s
Step 5: create the corresponding Ingress resource, declaring the host as a TLS virtual host
[root@K8s-master01 ingress-demoapp]#kubectl create ingress tls-demo --rule='demoapp.meng.com/*=demoapp10:80,tls=tls-meng' --class=nginx
ingress.networking.k8s.io/tls-demo created
Inspect the content of the Ingress resource
[root@K8s-master01 ingress-demoapp]#kubectl get ingress tls-demo -o yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2022-11-20T14:50:17Z"
  generation: 1
  name: tls-demo
  namespace: default
  resourceVersion: "1214861"
  uid: 32ff16f5-47d2-45a1-8dba-0a3e8f158527
spec:
  ingressClassName: nginx
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp10
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - demoapp.meng.com
    secretName: tls-meng
status:
  loadBalancer:
    ingress:
    - ip: 10.0.0.200
Access test
C:\Users\Administrator>curl -I demoapp.meng.com
HTTP/1.1 308 Permanent Redirect
Date: Sun, 20 Nov 2022 14:54:09 GMT
Content-Type: text/html
Content-Length: 164
Connection: keep-alive
Location: https://demoapp.meng.com
The request is redirected to https://demoapp.meng.com
Because the certificate is self-signed, add -k to skip certificate verification when accessing https://demoapp.meng.com
C:\Users\Administrator>curl -k https://demoapp.meng.com
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.107, ServerName: demoapp10-845686d545-bb8gn, ServerIP: 10.244.4.95!
Differences between the Ingress resource spec before k8s v1.22 and in v1.22+ (illustrative fragments follow)
k8s before v1.22: apiVersion: networking.k8s.io/v1beta1; a path backend references the Service with serviceName: and servicePort:
k8s v1.22+: apiVersion: networking.k8s.io/v1; a path backend nests a service: object with name: and port: number:
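Illustrative fragments of the two backend syntaxes; the Service name and port are taken from this document's examples and serve only as placeholders:
# before v1.22 (networking.k8s.io/v1beta1)
backend:
  serviceName: demoapp10
  servicePort: 80
# v1.22+ (networking.k8s.io/v1)
backend:
  service:
    name: demoapp10
    port:
      number: 80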
4 Canary Releases Based on Ingress-Nginx
Ingress-Nginx supports Ingress annotations that implement canary releases and testing for different scenarios; it can cover canary releases, blue-green deployments, and A/B testing
High-level logic of a canary release
1. Split traffic by an explicit ratio: x% to one version and 100%-x% to the other, a precise proportional split
2. Split traffic by certain characteristics: for example, requests from a given IP address, or requests identified by client traits, go to one version while the rest go to the regular version
Canary Rules of Ingress-Nginx
Canary rules supported by Ingress-Nginx annotations
◼ nginx.ingress.kubernetes.io/canary-by-header: splits traffic based on the request header named by this annotation; suitable for canary releases and A/B testing
◆ If the header is present in the request with the value always, the request is sent to the canary version
◆ If the header is present with the value never, the request is not sent to the canary version
◆ For any other value, the header named by this annotation is ignored and the request is weighed against the other canary rules in priority order
◼ nginx.ingress.kubernetes.io/canary-by-header-value: splits traffic based on the value of the request header whose name is given by the previous annotation (nginx.ingress.kubernetes.io/canary-by-header)
◆ When the specified header is present and its value matches this annotation's value, the request is routed to the canary version
◆ For any other value the annotation is ignored
◼ nginx.ingress.kubernetes.io/canary-by-header-pattern
◆ Similar to canary-by-header-value, but matches the request header's value against a regular expression
◆ If this annotation and canary-by-header-value are both present, this annotation is ignored
◼ nginx.ingress.kubernetes.io/canary-weight: splits traffic by service weight, suitable for blue-green deployment; the weight ranges from 0 to 100 and routes that percentage of requests to the service specified in the canary Ingress
◆ A weight of 0 means the canary rule sends no requests to the canary Ingress's service
◆ A weight of 100 means all requests are sent to the canary Ingress
◼ nginx.ingress.kubernetes.io/canary-by-cookie: splits traffic based on a cookie; suitable for canary releases and A/B testing
◆ When the cookie's value is set to always, the request is routed to the canary Ingress
◆ When the cookie's value is set to never, the request is not sent to the canary Ingress
◆ For any other value, the cookie is ignored and the request is weighed against the other canary rules in priority order
Order of rule evaluation
◼ Canary rules are evaluated in a specific order
◼ Order: canary-by-header -> canary-by-cookie -> canary-weight
Summary: the traffic-release mechanisms of Ingress-Nginx:
Blue-green release:
  production: 100%, canary: 0%
  production: 0%, canary: 100% --> the canary then becomes the new production
Canary release:
  Proportional traffic split:
    adjust the ratio gradually (see the kubectl annotate sketch below)
  Traffic identification, sending specific traffic to the canary:
    By-Header: match on a specific header
      default header values: always / never
      custom header value
      header value matched against a regular-expression pattern
    By-Cookie: match on a cookie
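When splitting by ratio, the weight can be adjusted gradually by overwriting the canary-weight annotation in place; a sketch, using the Ingress name from the weight example later in this section:
kubectl annotate ingress demoapp-canary-by-weight nginx.ingress.kubernetes.io/canary-weight="30" --overwrite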
Test example:
Clone the repository: [root@K8s-master01 ~]#git clone https://github.com/iKubernetes/learning-k8s.git
Environment check: delete the Ingresses used in the previous experiments
Create the Services used for testing from the earlier ones
[root@K8s-master01 ingress-canary-demo]#kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demoapp-svc ClusterIP 10.101.199.252 <none> 80/TCP 4d17h
demoapp10 ClusterIP 10.98.111.40 <none> 80/TCP 13h
demoapp11 ClusterIP 10.107.203.27 <none> 80/TCP 13h
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 4d21h
[root@K8s-master01 ingress-canary-demo]#kubectl get svc demoapp10 -o yaml >demoapp-v10.yaml
Edit demoapp-v10.yaml
[root@K8s-master01 ingress-canary-demo]#cat demoapp-v10.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demoapp10
  name: demoapp-v10
  namespace: default
spec:
  internalTrafficPolicy: Cluster
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: demoapp10
  type: ClusterIP
Create and edit demoapp-v11.yaml
[root@K8s-master01 ingress-canary-demo]#cp demoapp-v10.yaml demoapp-v11.yaml
[root@K8s-master01 ingress-canary-demo]#cat demoapp-v11.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: demoapp11
  name: demoapp-v11
  namespace: default
spec:
  internalTrafficPolicy: Cluster
  ports:
  - name: 80-80
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: demoapp11
  type: ClusterIP
Create the Services
[root@K8s-master01 ingress-canary-demo]#kubectl apply -f demoapp-v10.yaml -f demoapp-v11.yaml
service/demoapp-v10 created
service/demoapp-v11 created
Treat v10 as the old version of the service and v11 as the new one; check the backend endpoints
[root@K8s-master01 ingress-canary-demo]#kubectl get endpoints
NAME ENDPOINTS AGE
demoapp-svc <none> 4d17h
demoapp-v10 10.244.4.96:80,10.244.5.81:80 2m16s
demoapp-v11 10.244.3.111:80,10.244.5.84:80 2m16s
Recreate the Ingress so that the old version's traffic is published through the hostname
[root@K8s-master01 ingress-canary-demo]#cat 01-ingress-demoapp.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demoapp
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp-v10
            port:
              number: 80
        path: /
        pathType: Prefix
[root@K8s-master01 ingress-canary-demo]#kubectl apply -f 01-ingress-demoapp.yaml
ingress.networking.k8s.io/demoapp created
[root@K8s-master01 ingress-canary-demo]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp <none> demoapp.meng.com 80 8s
Start a client (a Pod here) and access the hostname continuously
[root@K8s-master01 ~]#kubectl run client-$RANDOM --image=ikubernetes/admin-box:v1.2 --restart=Never -it --command -- /bin/bash
Name resolution
root@client-20822 /# cat /etc/hosts
10.0.0.200 demoapp.meng.com
Continuous access
root@client-20822 /# while true; do curl demoapp.meng.com; sleep 1; done
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.112, ServerName: demoapp10-845686d545-qg5dj, ServerIP: 10.244.5.81!
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.112, ServerName: demoapp10-845686d545-bb8gn, ServerIP: 10.244.4.96!
Create another Ingress that canaries traffic based on a specific header (the header "X-Canary" with the value always or never)
[root@K8s-master01 ingress-canary-demo]#cat 02-canary-by-header.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"   # canary enables the canary style of traffic distribution
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
  name: demoapp-canary-by-header
spec:
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp-v11
            port:
              number: 80
        path: /
        pathType: Prefix
Create the Ingress
[root@K8s-master01 ingress-canary-demo]#kubectl apply -f 02-canary-by-header.yaml
ingress.networking.k8s.io/demoapp-canary-by-header created
[root@K8s-master01 ingress-canary-demo]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp <none> demoapp.meng.com 10.0.0.200 80 14m
demoapp-canary-by-header <none> demoapp.meng.com 80 11s
1. When the request to demoapp.meng.com carries the header "X-Canary" with the value always, traffic is scheduled to v1.1
root@client-20822 /# curl -H "X-Canary: always" demoapp.meng.com
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.112, ServerName: demoapp11-5457978bc9-lc2qb, ServerIP: 10.244.3.111!
2. When the header "X-Canary" is sent with the value never, traffic is scheduled to v1.0
root@client-20822 /# curl -H "X-Canary: never" demoapp.meng.com
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.112, ServerName: demoapp11-5457978bc9-lc2qb, ServerIP: 10.244.3.111!
Canary based on a custom header value
Delete the Ingress for the header-based canary
[root@K8s-master01 ingress-canary-demo]#kubectl delete -f 02-canary-by-header.yaml
ingress.networking.k8s.io "demoapp-canary-by-header" deleted
[root@K8s-master01 ingress-canary-demo]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp <none> demoapp.meng.com 10.0.0.200 80 28m
Define the canary based on a custom header value
[root@K8s-master01 ingress-canary-demo]#cat 03-canary-by-header-value.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "IsVIP"
    nginx.ingress.kubernetes.io/canary-by-header-value: "false"
  name: demoapp-canary-by-header-value
spec:
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp-v11
            port:
              number: 80
        path: /
        pathType: Prefix
Create the Ingress and check it
[root@K8s-master01 ingress-canary-demo]#kubectl apply -f 03-canary-by-header-value.yaml
ingress.networking.k8s.io/demoapp-canary-by-header-value created
[root@K8s-master01 ingress-canary-demo]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp <none> demoapp.meng.com 10.0.0.200 80 30m
demoapp-canary-by-header-value <none> demoapp.meng.com 80 3s
Access with the custom header: if the header "IsVIP" has the value false, the request goes to v1.1; any other value (or no such header) goes to v1.0
root@client-20822 /# curl -H "IsVIP: false" demoapp.meng.com
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.112, ServerName: demoapp11-5457978bc9-s79qv, ServerIP: 10.244.5.84!
root@client-20822 /# curl -H "IsVIP: TRUE" demoapp.meng.com
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.112, ServerName: demoapp10-845686d545-bb8gn, ServerIP: 10.244.4.96!
Matching the header value against a regular-expression pattern
Delete the Ingress for the custom-header-value canary
[root@K8s-master01 ingress-canary-demo]#kubectl delete -f 03-canary-by-header-value.yaml
ingress.networking.k8s.io "demoapp-canary-by-header-value" deleted
[root@K8s-master01 ingress-canary-demo]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp <none> demoapp.meng.com 10.0.0.200 80 39m
Define the YAML that matches against a regular-expression pattern
[root@K8s-master01 ingress-canary-demo]#cat 04-canary-by-header-pattern.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "Username"
    nginx.ingress.kubernetes.io/canary-by-header-pattern: "(vip|VIP)_.*"
  name: demoapp-canary-by-header-pattern
spec:
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp-v11
            port:
              number: 80
        path: /
        pathType: Prefix
Create the Ingress
[root@K8s-master01 ingress-canary-demo]#kubectl apply -f 04-canary-by-header-pattern.yaml
ingress.networking.k8s.io/demoapp-canary-by-header-pattern created
Access with the header matched against the regular-expression pattern: any Username value starting with vip_ or VIP_ goes to v1.1
root@client-20822 /# curl -H "Username: vip_001" demoapp.meng.com
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.112, ServerName: demoapp11-5457978bc9-lc2qb, ServerIP: 10.244.3.111!
root@client-20822 /# curl -H "Username: VIP_001" demoapp.meng.com
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.112, ServerName: demoapp11-5457978bc9-s79qv, ServerIP: 10.244.5.84!
Weight-based traffic scheduling
Delete the Ingress for the regex-pattern canary
[root@K8s-master01 ingress-canary-demo]#kubectl delete -f 04-canary-by-header-pattern.yaml
ingress.networking.k8s.io "demoapp-canary-by-header-pattern" deleted
[root@K8s-master01 ingress-canary-demo]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp <none> demoapp.meng.com 10.0.0.200 80 62m
Define the YAML for the weight-based canary
[root@K8s-master01 ingress-canary-demo]#cat 05-canary-by-weight.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
  name: demoapp-canary-by-weight
spec:
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp-v11
            port:
              number: 80
        path: /
        pathType: Prefix
Create the new Ingress
[root@K8s-master01 ingress-canary-demo]#kubectl apply -f 05-canary-by-weight.yaml
ingress.networking.k8s.io/demoapp-canary-by-weight created
Access the service: about 10% of the traffic goes to v1.1 and the rest to v1.0 (a counting sketch follows the access loop below)
root@client-20822 /# while true; do curl demoapp.meng.com; sleep 1; done
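To verify the split roughly, count how many of a fixed number of requests land on v1.1; a sketch, assuming the same client and hosts entry as above:
root@client-20822 /# for i in $(seq 1 100); do curl -s demoapp.meng.com; done | grep -c 'demoapp v1.1'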
Cookie-based traffic scheduling
Delete the weight-based Ingress
[root@K8s-master01 ingress-canary-demo]#kubectl delete -f 05-canary-by-weight.yaml
ingress.networking.k8s.io "demoapp-canary-by-weight" deleted
[root@K8s-master01 ingress-canary-demo]#kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
demoapp <none> demoapp.meng.com 10.0.0.200 80 72m
Define the YAML for cookie-based traffic scheduling
[root@K8s-master01 ingress-canary-demo]#cat 06-canary-by-cookie.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "vip_user"
  name: demoapp-canary-by-cookie
spec:
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp-v11
            port:
              number: 80
        path: /
        pathType: Prefix
Create the new Ingress
[root@K8s-master01 ingress-canary-demo]#kubectl apply -f 06-canary-by-cookie.yaml
ingress.networking.k8s.io/demoapp-canary-by-cookie created
Access with a cookie: when the cookie vip_user has the value always, the request is scheduled to v1.1
root@client-20822 /# curl -b "vip_user=always" demoapp.meng.com
iKubernetes demoapp v1.1 !! ClientIP: 10.244.3.112, ServerName: demoapp11-5457978bc9-s79qv, ServerIP: 10.244.5.84!
root@client-20822 /# curl -b "vip_user=never" demoapp.meng.com
iKubernetes demoapp v1.0 !! ClientIP: 10.244.3.112, ServerName: demoapp10-845686d545-bb8gn, ServerIP: 10.244.4.96!
Multiple canary rules can be defined and used together, but they are evaluated in a specific order
Order: canary-by-header -> canary-by-cookie -> canary-weight (a combined sketch follows)
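As an illustration of that precedence, a single canary Ingress could carry both a header rule and a weight; this is a sketch rather than a manifest from the cloned repository. Requests with X-Canary: always go to the canary, those with never do not, and all remaining traffic falls through to the 10% weight split:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"
    nginx.ingress.kubernetes.io/canary-weight: "10"
  name: demoapp-canary-combined
spec:
  rules:
  - host: demoapp.meng.com
    http:
      paths:
      - backend:
          service:
            name: demoapp-v11
            port:
              number: 80
        path: /
        pathType: Prefix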