[k8s Cluster Deployment] Deploying a Multi-Master, Highly Available Kubernetes Cluster on openEuler, Step by Step (v1.30)
- 1. Terminology
- 1.1 About Kubernetes
- 1.2 About Keepalived
- 1.3 About HAProxy
- 2. About This Walkthrough
- 2.1 Environment Planning
- 2.2 Scope of This Walkthrough
- 3. Base Configuration on All Nodes
- 3.1 Hostname Configuration
- 3.2 Disable the Firewall and SELinux
- 3.3 Disable Swap
- 3.4 Flush iptables
- 3.5 Configure Time Synchronization
- 3.6 Tune Kernel Parameters
- 3.7 Configure IPVS
- 3.8 Configure the hosts File
- 4. Configure Passwordless SSH
- 4.1 Generate the Key Pair
- 4.2 Copy the Public Key to the Remote Hosts
- 4.3 Verify Passwordless Login
- 5. Configure the Container Runtime
- 5.1 Configure the containerd yum Repository
- 5.2 Install containerd
- 5.3 Configure the containerd Config File
- 5.4 Restart the containerd Service
- 5.5 Check the containerd Service
- 5.6 Edit the /etc/crictl.yaml File
- 5.7 Check the ctr and crictl Versions
- 5.8 Install the nerdctl Tool
- 5.8.1 Download the nerdctl Package
- 5.8.2 Extract the Package
- 5.8.3 Check the nerdctl Version
- 5.8.4 Tab Completion for nerdctl
- 5.9 Configure nerdctl
- 5.10 Test Pulling an Image
- 6. Keepalived Configuration on Both Master Nodes
- 6.1 k8s-master01 Configuration
- 6.1.1 Install keepalived
- 6.1.2 Edit the Configuration File
- 6.1.3 Set Up the HAProxy Service
- 6.1.4 Edit the HAProxy Configuration File
- 6.1.5 Check the keepalived Service
- 6.2 k8s-master02 Configuration
- 6.2.1 Install keepalived
- 6.2.2 Edit the Configuration File
- 6.2.3 Set Up the HAProxy Service
- 6.2.4 Edit the HAProxy Configuration File
- 6.2.5 Check the keepalived Service
- 7. Install the Kubernetes Components
- 7.1 Configure the Kubernetes yum Repository
- 7.2 Clean Up Network Configuration
- 7.3 Install the Kubernetes Components
- 7.4 Start the kubelet Service
- 8. k8s-master01 Configuration
- 8.1 Generate the Configuration File
- 8.2 Edit the Kubernetes Configuration File
- 8.3 Pull the Images
- 8.4 Initialize the Cluster
- 8.5 Create the Kubernetes Client Config
- 8.6 Check the Current Cluster Node Status
- 8.7 Resetting the Cluster
- 9. Join the Remaining Nodes to the Cluster
- 9.1 Join k8s-master02 to the Cluster
- 9.2 Join the Two Worker Nodes to the Cluster
- 9.3 Check the Current Node Status
- 10. Configure the Calico Network
- 10.1 Download the Calico Manifest
- 10.2 Deploy the Calico Network
- 10.3 Restart the kubelet Service
- 10.4 Check the Worker Node Status
1. Terminology
1.1 About Kubernetes
Kubernetes (k8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications. Its container orchestration simplifies application rollout and improves portability and scalability. Kubernetes supports autoscaling and self-healing, which raises service reliability and efficiency, and it ships a rich set of APIs and tooling for developers and operators.
1.2 About Keepalived
Keepalived is software for building highly available Linux systems. It provides high availability and load balancing primarily via the VRRP protocol: it monitors the state of service nodes and fails over automatically when it detects a fault. It is commonly used to make LVS, Nginx, HAProxy, and similar services highly available.
1.3 About HAProxy
HAProxy is a high-performance load balancer and proxy server, used mainly to distribute requests across multiple backend servers for high availability and scalability.
2. About This Walkthrough
2.1 Environment Planning
- Environment used in this walkthrough

| hostname | IP address | Role | Container runtime | OS version | k8s version |
|---|---|---|---|---|---|
| k8s-master01 | 192.168.3.51 | master | containerd://1.6.32 | openEuler 24.03 (LTS) | v1.30.3 |
| k8s-master02 | 192.168.3.52 | master | containerd://1.6.32 | openEuler 24.03 (LTS) | v1.30.3 |
| k8s-node01 | 192.168.3.53 | worker | containerd://1.6.32 | openEuler 24.03 (LTS) | v1.30.3 |
| k8s-node02 | 192.168.3.54 | worker | containerd://1.6.32 | openEuler 24.03 (LTS) | v1.30.3 |

- Hardware of each node

| hostname | IP address | CPU | RAM | Disk | Network mode |
|---|---|---|---|---|---|
| k8s-master01 | 192.168.3.51 | 4 | 8 GB | 600 GB | PVE bridged |
| k8s-master02 | 192.168.3.52 | 4 | 8 GB | 600 GB | PVE bridged |
| k8s-node01 | 192.168.3.53 | 4 | 8 GB | 600 GB | PVE bridged |
| k8s-node02 | 192.168.3.54 | 4 | 8 GB | 600 GB | PVE bridged |

2.2 Scope of This Walkthrough
1. The environment consists of four virtual machines on a PVE (Proxmox VE) virtualization platform.
2. containerd is used as the container runtime.
3. High availability across the two master nodes is provided by keepalived + HAProxy.
4. The Kubernetes cluster is deployed on openEuler.
3. Base Configuration on All Nodes
3.1 Hostname Configuration
Set the hostname on every node:
```bash
# k8s-master01
hostnamectl set-hostname k8s-master01 --static
# k8s-master02
hostnamectl set-hostname k8s-master02 --static
# k8s-node01
hostnamectl set-hostname k8s-node01 --static
# k8s-node02
hostnamectl set-hostname k8s-node02 --static
```
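To confirm the change on each node, check the static hostname and reload the shell so the prompt picks it up:
```bash
hostnamectl status --static   # prints the static hostname just set
exec bash                     # reload the shell so the prompt reflects it
```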
3.2 Disable the Firewall and SELinux
Disable the firewall and SELinux on all nodes.
- Disable the firewall
```bash
systemctl disable firewalld && systemctl stop firewalld
```
- Disable SELinux
```bash
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
```
3.3 Disable Swap
Turn off the system swap:
```bash
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
swapoff -a
```
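A quick check that swap is fully off:
```bash
free -h | grep -i swap   # the Swap line should read 0B total and used
cat /proc/swaps          # should list no active swap devices
```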
3.4 Flush iptables
Flush the iptables rules:
```bash
iptables -F
iptables -t nat -F
```
3.5 Configure Time Synchronization
- Install chrony
```bash
yum install -y chrony
```
- Enable and start the service
```bash
systemctl enable --now chronyd
```
- Edit the configuration file and add the following line. (The Aliyun NTP server is assumed here; the 120.25.115.20 address in the sample output below belongs to it, but any reachable NTP server works.)
```bash
vim /etc/chrony.conf
```
```bash
server ntp.aliyun.com iburst
```
- Restart the service
```bash
systemctl restart chronyd
```
- Check synchronization manually
```bash
[root@k8s-master01 ~]# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 120.25.115.20                 2   6    47    10  +2657us[+2578us] +/-   18ms
```
- Schedule a periodic check
```bash
# cron entry: check the time sources every five minutes
echo "*/5 * * * * root /usr/bin/chronyc sources &>/dev/null" >> /etc/crontab
```
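Besides chronyc sources, chronyc tracking reports the current offset between the system clock and the reference source:
```bash
chronyc tracking | grep -E 'Reference ID|System time|Leap status'
```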
3.6 Tune Kernel Parameters
Adjust the kernel parameters by running the following commands in order. (The file names /etc/sysctl.d/k8s.conf and /etc/modules-load.d/k8s.conf and the vm.swappiness value follow the usual Kubernetes conventions.)
```bash
sed -i 's/net.ipv4.ip_forward=0/net.ipv4.ip_forward=1/g' /etc/sysctl.conf
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# have the br_netfilter and overlay modules loaded at boot
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
# load the br_netfilter and overlay modules now
modprobe br_netfilter
modprobe overlay
# confirm they are loaded
# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                151336  1 br_netfilter
# apply all sysctl configuration
sysctl --system
# apply the default configuration file
sysctl -p
# apply the newly added configuration file
sysctl -p /etc/sysctl.d/k8s.conf
```
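To spot-check that the three critical switches are on:
```bash
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward
# each line should end in "= 1"
```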
3.7 Configure IPVS
IPVS (IP Virtual Server) is a Linux kernel module for high-performance load balancing and service proxying. It can handle large numbers of concurrent connections and supports several balancing algorithms, such as round robin and least connections. IPVS is commonly used to build highly scalable, highly available network services.
- Install ipset and ipvsadm
```bash
yum -y install ipset ipvsadm
```
- Configure how the ipvs modules are loaded
```bash
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
```
- Make the script executable, run it, and verify the modules loaded. (On kernels 4.19 and later, including the 6.6 kernel of openEuler 24.03, nf_conntrack_ipv4 has been merged into nf_conntrack, so grep for the latter.)
```bash
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
# confirm the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack
```
3.8 Configure the hosts File
All four nodes need the following entries in their hosts file:
```bash
[root@k8s-master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost4 localhost4.localdomain4
::1         localhost localhost6 localhost6.localdomain6
192.168.3.51 k8s-master01
192.168.3.52 k8s-master02
192.168.3.53 k8s-node01
192.168.3.54 k8s-node02
192.168.3.50 k8s-vip
```
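As an optional sanity check, a small loop verifies that every name resolves and answers; note that k8s-vip will only respond once keepalived is running (Section 6):
```bash
for host in k8s-master01 k8s-master02 k8s-node01 k8s-node02 k8s-vip; do
  ping -c1 -W1 $host >/dev/null && echo "$host OK" || echo "$host UNREACHABLE"
done
```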
4. Configure Passwordless SSH
4.1 Generate the Key Pair
Configure passwordless SSH on the two master nodes.
- Generate the key pair
```bash
ssh-keygen -t rsa
```
4.2 Copy the Public Key to the Remote Hosts
The public key is appended to ~/.ssh/authorized_keys on each remote host.
- On k8s-master01
```bash
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.52
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.53
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.54
```
- On k8s-master02
```bash
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.51
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.53
ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.3.54
```
4.3 Verify Passwordless Login
- From k8s-master01
```bash
ssh root@192.168.3.52
ssh root@192.168.3.53
ssh root@192.168.3.54
```
- From k8s-master02
```bash
ssh root@192.168.3.51
ssh root@192.168.3.53
ssh root@192.168.3.54
```
5. Configure the Container Runtime
5.1 Configure the containerd yum Repository
The container runtime must be configured on all nodes. First, set up the yum repository that provides containerd. (The mirror prefix below assumes the Aliyun mirror, https://mirrors.aliyun.com; substitute your preferred Docker CE mirror if needed.)
```bash
vim /etc/yum.repos.d/docker-ce.repo
```
```
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/8/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg
```
Check the repository status:
```bash
yum repolist all
```
5.2 Install containerd
Install containerd on all four nodes as follows:
- Install containerd.io and cri-tools
```bash
yum install -y containerd.io cri-tools
```
Check the containerd version; the version installed here is 1.6.32.
```bash
[root@k8s-master01 ~]# containerd --version
containerd containerd.io 1.6.32 8b3b7ca2e5ce38e8f31a34f35b2b68ceb8470d89
```
5.3 Configure the containerd Config File
- Generate the default configuration file
```bash
containerd config default > /etc/containerd/config.toml
```
Edit config.toml to point the container registries at a mirror, and at the same time set SystemdCgroup = true and sandbox_image.
```bash
vim /etc/containerd/config.toml
```
The fragment below shows the relevant settings. The DaoCloud accelerator is listed purely as an example for docker.io; fill in whichever mirrors are reachable from your network.
```toml
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
SystemdCgroup = true  # on distros that use systemd as init, using systemd as the cgroup driver keeps nodes more stable under resource pressure

[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = [
      "https://docker.m.daocloud.io"  # example accelerator; add your own mirrors here
    ]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.k8s.io"]
    endpoint = ["https://registry.aliyuncs.com/k8sxio"]
```
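After restarting (next step), containerd config dump prints the merged effective configuration, which makes a quick sanity check:
```bash
containerd config dump | grep -E 'SystemdCgroup|sandbox_image'
```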
5.4 Restart the containerd Service
Enable and restart the containerd service:
```bash
systemctl enable containerd --now
systemctl restart containerd
```
5.5 Check the containerd Service
- Check the service status
```bash
systemctl status containerd
```
5.6 Edit the /etc/crictl.yaml File
Point crictl at the containerd socket:
```bash
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 0
debug: false
pull-image-on-create: false
EOF
```
5.7 Check the ctr and crictl Versions
Check the ctr and crictl versions; the version installed here is 1.6.32.
```bash
crictl version
ctr version
```
5.8 Install the nerdctl Tool
5.8.1 Download the nerdctl Package
- About nerdctl
nerdctl is a containerd management CLI developed under the containerd project. Its command line deliberately mirrors Docker's, which makes it fairly powerful and quick to pick up when switching from Docker to containerd.
- Download page: https://github.com/containerd/nerdctl
```bash
wget https://github.com/containerd/nerdctl/releases/download/v1.7.6/nerdctl-1.7.6-linux-amd64.tar.gz
```
5.8.2 Extract the Package
Extract the archive into /usr/local/bin:
```bash
tar -xzf nerdctl-1.7.6-linux-amd64.tar.gz -C /usr/local/bin/
```
5.8.3 Check the nerdctl Version
Check the nerdctl version; it should report 1.7.6. A warning that buildctl was not found can be ignored, or resolved by installing the buildkit tooling.
```bash
nerdctl version
```
5.8.4 Tab Completion for nerdctl
Enable tab completion for nerdctl with the following commands:
```bash
source <(nerdctl completion bash)
echo "source <(nerdctl completion bash)" >> ~/.bashrc
source ~/.bashrc
```
5.9 Configure nerdctl
Run the following to configure nerdctl. Setting the namespace to k8s.io makes nerdctl operate on the same containerd namespace that Kubernetes uses, so cluster images are visible to it.
```bash
mkdir -p /etc/nerdctl
cat > /etc/nerdctl/nerdctl.toml <<EOF
address = "unix:///var/run/containerd/containerd.sock"
namespace = "k8s.io"
EOF
```
5.10 Test Pulling an Image
Test that image pulls work:
```bash
nerdctl pull nginx:1.21
```
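To confirm the pull and tidy up:
```bash
nerdctl images | grep nginx   # the freshly pulled image should be listed
nerdctl rmi nginx:1.21        # optional: remove the test image
```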
6. Keepalived Configuration on Both Master Nodes
6.1 k8s-master01 Configuration
6.1.1 Install keepalived
Install keepalived directly with yum:
```bash
yum -y install keepalived
```
6.1.2 Edit the Configuration File
Copy the sample configuration into place (the openEuler package ships a keepalived.conf.sample):
```bash
cp /etc/keepalived/keepalived.conf.sample /etc/keepalived/keepalived.conf
```
Edit the file. The VIP is 192.168.3.50, so that address goes under virtual_ipaddress. Pay particular attention to interface, which must be your own NIC name, and unicast_peer, which holds the IP address of the other keepalived node.
```bash
vim /etc/keepalived/keepalived.conf
```
```
vrrp_instance VI_1 {
    state MASTER
    interface enp6s18    ## change to your machine's NIC; check with ifconfig or ip addr
    virtual_router_id 51
    priority 100
    advert_int 1
    unicast_peer {
        192.168.3.52
    }
    virtual_ipaddress {
        192.168.3.50
    }
}
```
Start the keepalived service:
```bash
systemctl start keepalived
systemctl enable keepalived
```
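At this point the VIP should be bound on the MASTER node's NIC (the interface name enp6s18 is this lab's; use your own):
```bash
ip -4 addr show enp6s18 | grep 192.168.3.50
```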
6.1.3 Set Up the HAProxy Service
- Install haproxy
```bash
yum -y install haproxy
```
6.1.4 Edit the HAProxy Configuration File
Edit the HAProxy configuration file:
```bash
vim /etc/haproxy/haproxy.cfg
```
```
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    user        haproxy
    group       haproxy
    daemon
    maxconn     4000
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    retries                 3
    timeout http-request    5s
    timeout queue           1m
    timeout connect         5s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 5s
    timeout check           5s
    maxconn                 3000
frontend https-in
    bind *:16443
    mode tcp
    option tcplog
    default_backend servers-backend
backend servers-backend
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master01 192.168.3.51:6443 check
    server k8s-master02 192.168.3.52:6443 check
```
- Start the service
```bash
systemctl start haproxy
systemctl enable haproxy
```
- Check that port 16443 is listening
```bash
[root@k8s-master01 ~]# ss -tunlp |grep 6443
tcp   LISTEN 0      3000        0.0.0.0:16443       0.0.0.0:*    users:(("haproxy",pid=3559,fd=7))
```
6.1.5 Check the keepalived Service
- Check the keepalived service
```bash
systemctl status keepalived
```
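Once Section 6.2 is finished on k8s-master02, an optional failover check confirms the VIP actually moves between the nodes:
```bash
# on k8s-master01: temporarily stop keepalived
systemctl stop keepalived
# on k8s-master02: the VIP should appear within a few seconds
ip -4 addr show enp6s18 | grep 192.168.3.50
# on k8s-master01: restart keepalived; with the higher priority (100) it reclaims the VIP
systemctl start keepalived
```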
6.2 k8s-master02 Configuration
6.2.1 Install keepalived
Install keepalived:
```bash
yum -y install keepalived
```
6.2.2 Edit the Configuration File
Copy the sample configuration into place:
```bash
cp /etc/keepalived/keepalived.conf.sample /etc/keepalived/keepalived.conf
```
Edit the file. The VIP is again 192.168.3.50 under virtual_ipaddress; interface must be this node's NIC name, and unicast_peer holds the other keepalived node's IP. This node is the BACKUP, with a lower priority.
```bash
vim /etc/keepalived/keepalived.conf
```
```
vrrp_instance VI_1 {
    state BACKUP
    interface enp6s18    ## change to your machine's NIC; check with ifconfig or ip addr
    virtual_router_id 51
    priority 90
    advert_int 1
    unicast_peer {
        192.168.3.51
    }
    virtual_ipaddress {
        192.168.3.50
    }
}
```
Start the keepalived service:
```bash
systemctl start keepalived
systemctl enable keepalived
```
6.2.3 Set Up the HAProxy Service
- Install haproxy
```bash
yum -y install haproxy
```
6.2.4 Edit the HAProxy Configuration File
Edit the HAProxy configuration file; the content is identical to k8s-master01's:
```bash
vim /etc/haproxy/haproxy.cfg
```
```
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    user        haproxy
    group       haproxy
    daemon
    maxconn     4000
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    retries                 3
    timeout http-request    5s
    timeout queue           1m
    timeout connect         5s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 5s
    timeout check           5s
    maxconn                 3000
frontend https-in
    bind *:16443
    mode tcp
    option tcplog
    default_backend servers-backend
backend servers-backend
    mode tcp
    balance roundrobin
    option tcp-check
    server k8s-master01 192.168.3.51:6443 check
    server k8s-master02 192.168.3.52:6443 check
```
- Start the service
```bash
systemctl start haproxy
systemctl enable haproxy
```
- Check that port 16443 is listening
```bash
[root@k8s-master02 ~]# ss -tunlp |grep 6443
tcp   LISTEN 0      3000        0.0.0.0:16443       0.0.0.0:*    users:(("haproxy",pid=3846,fd=7))
```
6.2.5 Check the keepalived Service
- Check the keepalived service
```bash
systemctl status keepalived
```
7. Install the Kubernetes Components
7.1 Configure the Kubernetes yum Repository
Note: perform this step on all four nodes. (The Aliyun mirror of the community-owned package repository is assumed below.)
```bash
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key
EOF
```
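Refresh the cache and confirm the v1.30 packages are visible before installing:
```bash
yum makecache
yum list kubelet --showduplicates | grep 1.30
```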
7.2 Clean Up Network Configuration
- Remove any containers left on the nodes
```bash
nerdctl rm -f $(nerdctl ps -aq)
```
- Clean out the CNI configuration
```bash
cd /etc/cni/net.d/ && rm -rf ./*
```
7.3 Install the Kubernetes Components
Install kubelet-1.30.3, kubeadm-1.30.3, and kubectl-1.30.3. The two worker nodes do not strictly need kubectl, but for convenience all three packages are installed everywhere in one go.
```bash
yum install -y kubelet-1.30.3 kubeadm-1.30.3 kubectl-1.30.3
```
7.4 Start the kubelet Service
Start the kubelet service on all nodes. (It will restart in a loop until the node is initialized or joined; that is expected at this stage.)
```bash
systemctl enable kubelet && systemctl start kubelet
```
8. k8s-master01 Configuration
8.1 Generate the Configuration File
On k8s-master01 and k8s-master02, generate the default kubeadm configuration file (saved here as kubeadm.yaml):
```bash
kubeadm config print init-defaults --component-configs KubeletConfiguration --component-configs KubeProxyConfiguration > kubeadm.yaml
```
8.2 Edit the Kubernetes Configuration File
Edit the configuration file; change the fields called out in the summary after the listing.
```bash
vim kubeadm.yaml
```
```yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.3.51
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master01
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.3.50:16443
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.30.3
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMaximumGCAge: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
    text:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
  qps: 0
clusterCIDR: ""
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpBeLiberal: false
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
  udpStreamTimeout: 0s
  udpTimeout: 0s
detectLocal:
  bridgeInterface: ""
  interfaceNamePrefix: ""
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  localhostNodePorts: null
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs: null
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
kind: KubeProxyConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
    text:
      infoBufferSize: "0"
  verbosity: 0
metricsBindAddress: ""
mode: "ipvs"
nftables:
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
winkernel:
  enableDSR: false
  forwardHealthCheckVip: false
  networkName: ""
  rootHnsEndpointName: ""
  sourceVip: ""
```
The main fields to change are:
◎ advertiseAddress: your master01 node's IP
◎ imageRepository: the image registry mirror
◎ name: your master01 node's name
◎ controlPlaneEndpoint: the VIP address and HAProxy port
◎ kubernetesVersion: the Kubernetes version
◎ podSubnet: the pod network CIDR
◎ serviceSubnet: the service network CIDR
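Recent kubeadm releases can sanity-check the edited file before anything is pulled or initialized:
```bash
kubeadm config validate --config kubeadm.yaml
```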
8.3 Pull the Images
Pull the images needed for the deployment:
```bash
[root@k8s-master01 ~]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.30.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.30.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.30.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.30.3
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.11.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.12-0
```
8.4 Initialize the Cluster
Initialize the Kubernetes cluster with:
```bash
kubeadm init --config kubeadm.yaml --upload-certs
```
8.5 Create the Kubernetes Client Config
Set up the kubectl client configuration:
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
```
8.6 Check the Current Cluster Node Status
Query the current cluster node status:
```bash
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES           AGE    VERSION
k8s-master01   NotReady   control-plane   3m7s   v1.30.3
```
8.7 Resetting the Cluster
If the deployment fails, you can reset the cluster state and redeploy from scratch:
```bash
kubeadm reset
rm -rf /etc/kubernetes /var/lib/kubelet /var/lib/etcd /var/lib/cni/
rm -rf $HOME/.kube
```
9. Join the Remaining Nodes to the Cluster
9.1 Join k8s-master02 to the Cluster
On k8s-master02, run the following (the token, hash, and certificate key come from your own kubeadm init output):
```bash
kubeadm join 192.168.3.50:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2e069e53b8c7191eab484cb80060d572f61e07f4f55e65838194685281fc3aee \
    --control-plane --certificate-key 86b69b17a9bdc87da483ccd53d28d032e36ebce917391800112e7190e7582d88
```
- Then set up the kubectl client configuration
```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
9.2 Join the Two Worker Nodes to the Cluster
On both worker nodes, run the following to join them to the cluster:
```bash
kubeadm join 192.168.3.50:16443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:2e069e53b8c7191eab484cb80060d572f61e07f4f55e65838194685281fc3aee
```
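The bootstrap token and certificate key printed by kubeadm init expire (after 24 hours and 2 hours respectively). If a node joins later than that, regenerate them on k8s-master01:
```bash
# print a fresh worker join command, including a new token
kubeadm token create --print-join-command
# re-upload the control-plane certificates and print a new certificate key
kubeadm init phase upload-certs --upload-certs
```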
9.3 Check the Current Node Status
Query the current node status; the nodes stay NotReady until a CNI plugin is deployed:
```bash
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS     ROLES           AGE    VERSION
k8s-master01   NotReady   control-plane   115m   v1.30.3
k8s-master02   NotReady   control-plane   112m   v1.30.3
k8s-node01     NotReady   <none>          39s    v1.30.3
k8s-node02     NotReady   <none>          33s    v1.30.3
```
10. Configure the Calico Network
10.1 Download the Calico Manifest
On k8s-master01, download the Calico deployment manifest:
```bash
curl https://raw.githubusercontent.com/projectcalico/calico/v3.28.0/manifests/calico.yaml -O
```
10.2 Deploy the Calico Network
Deploy the Calico network:
```bash
kubectl apply -f calico.yaml
```
10.3 Restart the kubelet Service
Run the following on all four nodes:
```bash
systemctl restart containerd && systemctl restart kubelet
```
10.4 Check the Worker Node Status
- Check the node status
```bash
[root@k8s-master01 ~]# kubectl get nodes -owide
NAME           STATUS   ROLES           AGE    VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                  CONTAINER-RUNTIME
k8s-master01   Ready    control-plane   129m   v1.30.3   192.168.3.51   <none>        openEuler 24.03 (LTS)   6.6.0-28.0.0.34.oe2403.x86_64   containerd://1.6.32
k8s-master02   Ready    control-plane   127m   v1.30.3   192.168.3.52   <none>        openEuler 24.03 (LTS)   6.6.0-28.0.0.34.oe2403.x86_64   containerd://1.6.32
k8s-node01     Ready    <none>          15m    v1.30.3   192.168.3.53   <none>        openEuler 24.03 (LTS)   6.6.0-28.0.0.34.oe2403.x86_64   containerd://1.6.32
k8s-node02     Ready    <none>          15m    v1.30.3   192.168.3.54   <none>        openEuler 24.03 (LTS)   6.6.0-28.0.0.34.oe2403.x86_64   containerd://1.6.32
```
- Check the system pod status
```bash
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-5b9b456c66-qk496   1/1     Running   1 (11m ago)   13m
calico-node-74v5x                          1/1     Running   0             13m
calico-node-dhlx7                          1/1     Running   0             13m
calico-node-jfzbn                          1/1     Running   0             13m
calico-node-k7tmq                          1/1     Running   0             13m
coredns-7b5944fdcf-4vh7m                   1/1     Running   0             129m
coredns-7b5944fdcf-ct4n9                   1/1     Running   0             129m
etcd-k8s-master01                          1/1     Running   0             129m
etcd-k8s-master02                          1/1     Running   0             127m
kube-apiserver-k8s-master01                1/1     Running   0             129m
kube-apiserver-k8s-master02                1/1     Running   0             127m
kube-controller-manager-k8s-master01       1/1     Running   0             129m
kube-controller-manager-k8s-master02       1/1     Running   0             127m
kube-proxy-2nwrv                           1/1     Running   0             129m
kube-proxy-8lrl4                           1/1     Running   0             15m
kube-proxy-9hm26                           1/1     Running   0             127m
kube-proxy-vhftx                           1/1     Running   0             15m
kube-scheduler-k8s-master01                1/1     Running   0             129m
kube-scheduler-k8s-master02                1/1     Running   0             127m
```
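As a final smoke test, a throwaway deployment plus NodePort service exercises scheduling, networking, and kube-proxy end to end (the name web-test and the nginx:1.21 image are arbitrary illustrative choices):
```bash
kubectl create deployment web-test --image=nginx:1.21 --replicas=2
kubectl expose deployment web-test --port=80 --type=NodePort
kubectl get pods -l app=web-test -owide              # pods should be Running on the worker nodes
NODE_PORT=$(kubectl get svc web-test -o jsonpath='{.spec.ports[0].nodePort}')
curl -s http://192.168.3.53:$NODE_PORT | head -n 4   # any node IP works for a NodePort service
kubectl delete svc,deployment web-test               # clean up
```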