A customer recently needed to migrate to a domestically developed OS, which also involved rebuilding their Kubernetes environment, so I set up a lab to walk through the process and am recording it here.
The versions used in this setup are as follows:
- Kubernetes: v1.24.11
- OpenEuler: 20.03 SP3
- Kernel: 4.19.90-2302.4.0.0189.oe1.x86_64
- CNI: Antrea v1.9.0
- CRI:containerd v1.6.19
- runc:v1.1.4
The CNI is configured for IPv4/IPv6 dual stack, using Antrea's default GENEVE encapsulation mode, with Antrea Proxy, Antrea Policy, and related features enabled.
OpenEuler Basic Configuration
OS installation steps are omitted; after installation, run yum update to bring the system fully up to date.
NIC Configuration
Configure dual-stack addresses on the NIC as shown below:
# cat /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=eui64
IPV6ADDR=2100::62/64
IPV6_DEFAULTGW=2100::1
NAME=ens32
DEVICE=ens32
ONBOOT=yes
IPADDR=10.10.52.62
NETMASK=255.255.255.0
GATEWAY=10.10.52.1
DNS1=10.10.50.2
After configuring, restart the network service and verify the IP configuration is correct:
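A quick way to apply and verify the settings (assuming the interface is managed by NetworkManager; otherwise restart whichever network service you use):
nmcli connection reload && nmcli connection up ens32
ip addr show ens32        # should list both 10.10.52.62/24 and 2100::62/64
ip -6 route show default  # should show the default route via 2100::1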
Installing containerd
The containerd package in OpenEuler's official repositories is very old, so install the official binaries instead, following the containerd getting-started guide: https://github.com/containerd/containerd/blob/main/docs/getting-started.md
Installation steps:
# Download the binary release
wget https://github.com/containerd/containerd/releases/download/v1.6.19/containerd-1.6.19-linux-amd64.tar.gz
# Extract to /usr/local
tar Cxzvf /usr/local containerd-1.6.19-linux-amd64.tar.gz
# Create the systemd service file
cat <<EOF | sudo tee /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target
[Service]
Environment="PATH=/usr/local/bin:/bin:/sbin:/usr/bin:/usr/sbin"
ExecStartPre=/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Restart=always
RestartSec=5
Delegate=yes
KillMode=process
OOMScoreAdjust=-999
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
[Install]
WantedBy=multi-user.target
EOF
Configure containerd
# Create the containerd config file
mkdir /etc/containerd
containerd config default > /etc/containerd/config.toml
# Edit /etc/containerd/config.toml, changing sandbox_image to a mirror reachable from China:
sed 's/registry.k8s.io\/pause:3.6/registry.cn-hangzhou.aliyuncs.com\/google_containers\/pause:3.7/g' -i /etc/containerd/config.toml
Enable and check the containerd service:
systemctl daemon-reload
systemctl start containerd
systemctl enable containerd
systemctl status containerd
Create containerd.conf so the following kernel modules are loaded at boot:
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
Load the required modules immediately:
sudo modprobe overlay
sudo modprobe br_netfilter
Add a crictl configuration file so that it uses containerd by default:
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
#debug: true
EOF
Pull an image with crictl to verify that the CLI and containerd are working correctly:
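A minimal smoke test (the image name is just an example; any reachable image will do):
crictl pull docker.io/library/nginx:alpine
crictl images   # the pulled image should appear in the list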
Installing runc
Get the binary from:
https://github.com/opencontainers/runc/releases
Download and install the runc binary:
wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
install -m 755 runc.amd64 /usr/bin/runc
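Verify the binary works:
runc --version   # should report runc version 1.1.4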
Kernel Parameter Tuning
Adjust the following kernel and system settings:
# Temporarily disable swap
swapoff -a
# Edit fstab: comment out the swap entry by prefixing it with "#"
cp -p /etc/fstab /etc/fstab.bak$(date '+%Y%m%d%H%M%S')
sed -i "s/\/dev\/mapper\/openeuler-swap/\#\/dev\/mapper\/openeuler-swap/g" /etc/fstab
# Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i '/^SELINUX=/s/enforcing/disabled/' /etc/selinux/config
# Adjust kernel parameters
vi /etc/sysctl.conf
net.ipv6.conf.all.forwarding=1
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
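The file is only read at boot, so apply the new values immediately as well:
sysctl -p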
Installing the Kubernetes Packages
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum clean all
yum makecache
yum search kubelet --showduplicates | grep 1.24
yum install -y kubelet-1.24.11 kubeadm-1.24.11 kubectl-1.24.11
systemctl enable kubelet && systemctl start kubelet
Pin the package versions with versionlock:
yum install yum-plugin-versionlock -y
yum versionlock add kubectl kubeadm kubelet kubernetes-cni
Installing the Kubernetes Master
Reference: https://blog.51cto.com/sparkgo/5603075
Prepare the following kubeadm config file:
# vi kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: 10.20.0.0/16,2002::/64
  serviceSubnet: 10.96.0.0/16,2003::/110
controllerManager:
  extraArgs:
    "node-cidr-mask-size-ipv4": "25"
    "node-cidr-mask-size-ipv6": "80"
imageRepository: "registry.cn-hangzhou.aliyuncs.com/google_containers"
clusterName: "k8scluster"
kubernetesVersion: "v1.24.11"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: "10.10.52.61"
  bindPort: 6443
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 10.10.52.61,2100::61
Initialize the cluster:
kubeadm init --config=kubeadm-config.yaml
Follow the printed instructions to set up the .kube/config file for the current user.
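These are the commands kubeadm prints for a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config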
Installing the Kubernetes Workers
Record the join command printed by the Master:
kubeadm join 10.10.52.61:6443 --token bgumhm.ei9debwepjehprzt \
    --discovery-token-ca-cert-hash sha256:549cf6d509ffe47f3a1873bcbd01504453185c3ddc40ac93452b5f95013bfbd8
Create the following kubeadm-config.yaml on each Worker:
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 10.10.52.61:6443
    token: "bgumhm.ei9debwepjehprzt"
    caCertHashes:
    - "sha256:549cf6d509ffe47f3a1873bcbd01504453185c3ddc40ac93452b5f95013bfbd8"
    # change auth info above to match the actual token and CA certificate hash for your cluster
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 10.10.52.62,2100::62
Join the Worker to the cluster:
kubeadm join --config=kubeadm-config.yaml
Then check the nodes on the Master; Kubernetes is now installed correctly:
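For example:
kubectl get nodes -o wide
# both nodes should be listed; they will stay NotReady until the CNI is deployed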
Installing the CNI
Reference: https://github.com/antrea-io/antrea/blob/main/docs/getting-started.md
Download the installation YAML file:
wget https://github.com/antrea-io/antrea/releases/download/v1.9.0/antrea.yml
Edit this file to set transportInterfaceCIDRs so that Antrea correctly identifies the data NIC; a sketch of the change follows:
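A sketch of the edit in the antrea-agent.conf section of antrea.yml, assuming the v1.9 ConfigMap layout; the CIDRs below match the node network used in this lab:
antrea-agent.conf: |
  # Make the agent pick the NIC holding an address in these CIDRs
  transportInterfaceCIDRs: [10.10.52.0/24, 2100::/64]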
Apply the YAML file:
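Per the Antrea getting-started guide:
kubectl apply -f antrea.yml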
Once deployment completes, verify the nodes report Ready and all Pods are running normally:
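For example:
kubectl get nodes
kubectl get pods -n kube-system -o wide   # antrea-agent and antrea-controller should be Running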
Dual-Stack Pod Deployment Test
Deploy the following Deployment and Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: avi-demo
  name: avi-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: avi-demo
  template:
    metadata:
      labels:
        app: avi-demo
    spec:
      containers:
      - name: avi-demo
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: avi-demo-v6
spec:
  selector:
    app: avi-demo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: NodePort
  ipFamilyPolicy: RequireDualStack
Check the Pod information and IPs:
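For example (<pod-name> stands for one of the avi-demo Pods):
kubectl get pods -o wide
kubectl get pod <pod-name> -o jsonpath='{.status.podIPs}'   # should list one IPv4 and one IPv6 address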
Pod-to-Pod Access Test
Exec into one Pod and access another Pod directly:
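A sketch, substituting the Pod name and the addresses found in the previous step (nginx:alpine ships BusyBox wget; IPv6 literals must be bracketed):
kubectl exec -it <pod-name> -- wget -qO- http://<peer-ipv4>
kubectl exec -it <pod-name> -- wget -qO- 'http://[<peer-ipv6>]'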
NodePort Access Test from Outside the Cluster
Check the Service's NodePort:
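For example:
kubectl get svc avi-demo-v6   # the PORT(S) column shows the NodePort mapped to 80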
Access the service from an external PC via the NodePort:
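A sketch, where <node-port> is the port found in the previous step:
curl http://10.10.52.62:<node-port>
curl 'http://[2100::62]:<node-port>'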
ClusterIP Access Test Inside the Cluster
Check the ClusterIP:
Access the ClusterIP from a Worker node (testing east-west traffic):
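A sketch; the ClusterIPs come from the service subnets configured in kubeadm-config.yaml:
kubectl get svc avi-demo-v6 -o jsonpath='{.spec.clusterIPs}'
# then, from a Worker node:
curl http://<cluster-ip-v4>
curl 'http://[<cluster-ip-v6>]'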