Deploying a Kubernetes Cluster with kubeadm

Date: 2023-12-18 14:53:44

1. Overview

kubeadm is a toolbox for quickly creating a minimal, workable Kubernetes cluster that follows best practices.

This document describes how to quickly deploy a Kubernetes cluster with the kubeadm tool.

2. Host Plan and Environment Preparation

The host preparation in this section must be carried out on all nodes.

2.1 Host Plan

IP            Hostname          CPU/MEM  OS Version       Role         Data Disk Dir
172.20.58.83  nccztsjb-node-23  8c/16g   CentOS 7.5.1804  master       /data
172.20.58.65  nccztsjb-node-24  8c/16g   CentOS 7.5.1804  worker node  /data
172.20.58.18  nccztsjb-node-25  8c/16g   CentOS 7.5.1804  worker node  /data

This installation uses a 1-master, 2-worker architecture. Later documents will also cover, step by step, how to deploy a highly available multi-master cluster with kubeadm.

The data disk directory /data stores Docker images and container data.

2.2 MAC Address and product_uuid Uniqueness Check

Make sure every host's hostname, MAC address, and product_uuid are unique. Kubernetes uses these values to uniquely identify nodes in the cluster; if they are not unique, the installation will fail.

Check the hostname:

hostname

Check the MAC addresses:

ip link

Check the product_uuid:

cat /sys/class/dmi/id/product_uuid
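
If SSH access to all machines is available, a small sketch like the one below collects all three values per node for a side-by-side comparison (the IPs are the ones from the host plan above; adjust the user and host list to your environment):

for host in 172.20.58.83 172.20.58.65 172.20.58.18; do
  echo "== $host =="
  # hostname, MAC addresses, and product_uuid in one pass
  ssh root@$host 'hostname; ip link show | grep ether; cat /sys/class/dmi/id/product_uuid'
done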

2.3 Loading the br_netfilter Module

For iptables to see bridged traffic, the br_netfilter module must be loaded.

Check whether the br_netfilter module is loaded with:

lsmod | grep br_netfilter

If it is not, load it manually with:

modprobe br_netfilter

Once loaded, querying the module shows:

[root@nccztsjb-node-23 ~]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                146976  1 br_netfilter

Make the module load persist across reboots:

cat <<EOF | tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

With this in place, the module is loaded automatically after the host reboots.

Set the kernel parameters:

cat <<EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

These settings are requirements that the network plugin deployment places on the system.
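
You can confirm the values took effect with sysctl. In addition, kubeadm's preflight checks fail when swap is enabled, so it is worth disabling swap at this stage as well (the sed line assumes the swap entry lives in /etc/fstab):

# verify the bridge sysctls
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables

# disable swap now and keep it off across reboots
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab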

3. Deploying the Container Runtime (Docker)

To run containers in pods, a container runtime is required; this installation uses Docker. The kubelet integrates with Docker through its built-in dockershim CRI implementation. (Note that dockershim was removed in Kubernetes 1.24, so this approach applies to 1.23 and earlier.)

3.1 Installing Docker

# 1. Install some required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2

# 2. Add the repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# 3. Point the repo at the mirror
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo

# 4. Refresh the cache and install Docker CE
yum makecache fast
yum -y install docker-ce

# 5. Start the Docker service and enable it at boot
systemctl enable --now docker
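
The steps above install the latest docker-ce build. If a specific release is needed, the version can be pinned at install time; 20.10.12 below is only an example, so list the builds available in the repo first:

yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-20.10.12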

Check the Docker version and service status:

docker version

systemctl status docker

3.2 Configuring Docker

Configure Docker; the key settings are the cgroup driver and the data storage directory.

Set Docker's cgroup driver to systemd:

cat <<EOF | tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-level": "warn",
  "storage-driver": "overlay2",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "data-root": "/data/docker",
  "insecure-registries": ["0.0.0.0/0"],
  "features": {
    "buildkit": true
  }
}
EOF

Restart Docker and check its configuration:

systemctl restart docker

docker info |grep -iE "cgroup Driver|Docker Root"
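
With the daemon.json above, the two values reported should be along these lines:

Cgroup Driver: systemd
Docker Root Dir: /data/docker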

4. Installing kubeadm, kubelet, and kubectl

Run the following installation steps on all machines.

Install the kubeadm, kubelet, and kubectl commands via yum; they can also be installed by downloading from GitHub.

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Disable SELinux (set it to permissive)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Install kubeadm, kubelet, and kubectl
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes

# Start kubelet and enable it at boot
systemctl enable --now kubelet
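
The repo tracks the newest 1.x packages. To make sure the installed tools match the kubernetesVersion used in the kubeadm config below (v1.23.1 in this document), the versions can be pinned explicitly, for example:

yum install -y kubelet-1.23.1 kubeadm-1.23.1 kubectl-1.23.1 --disableexcludes=kubernetes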

Note: at this point the kubelet service is not healthy. It becomes healthy only after kubeadm bootstraps the cluster and provides the kubelet with its concrete configuration.

5. Creating the kubeadm Configuration File

When initializing the cluster with kubeadm, various parameters can be supplied to configure the cluster and its components.

cat <<EOF | tee kubeadm-config.yaml
kind: ClusterConfiguration
apiVersion: kubeadm.k8s.io/v1beta3
kubernetesVersion: v1.23.1 # the Kubernetes version to install
imageRepository: gotok8s # registry to pull images from; the default k8s.gcr.io may be unreachable, so use an alternative site that hosts the images
controlPlaneEndpoint: "172.20.58.83:6443" # the apiserver's external address, i.e. the master's IP:6443; needed later for a highly available cluster
networking:
  podSubnet: "172.39.0.0/16" # the subnet pods live in
---
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
cgroupDriver: systemd # the kubelet's cgroup driver; must match Docker's
EOF

6. Initializing and Configuring the Control Plane (master)

6.1 Initialization

Run the initialization on the master node:

kubeadm init --config kubeadm-config.yaml

Image pulls during installation are fairly slow; consider mirroring the required images into a local Harbor registry.
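
kubeadm can also pre-pull the images itself; before running init, you can list and fetch everything required using the same config file:

kubeadm config images list --config kubeadm-config.yaml
kubeadm config images pull --config kubeadm-config.yaml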

The installation output:

[root@nccztsjb-node-23 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local nccztsjb-node-23] and IPs [10.96.0.1 172.20.58.83]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nccztsjb-node-23] and IPs [172.20.58.83 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nccztsjb-node-23] and IPs [172.20.58.83 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 7.514116 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node nccztsjb-node-23 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node nccztsjb-node-23 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: nmc6sr.okga4v88tdanm4be
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 172.20.58.83:6443 --token nmc6sr.okga4v88tdanm4be \
        --discovery-token-ca-cert-hash sha256:53bb18482396f7f52e58061df6ce669169143f7e00b248e429f0ce2d7b1cc34e \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.20.58.83:6443 --token nmc6sr.okga4v88tdanm4be \
        --discovery-token-ca-cert-hash sha256:53bb18482396f7f52e58061df6ce669169143f7e00b248e429f0ce2d7b1cc34e

6.2 Configuring kubeconfig

kubectl needs a kubeconfig file to communicate with the cluster. Run the following commands to set it up:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
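
Optionally, kubectl shell completion can be enabled for convenience (this sketch assumes bash; the bash-completion package provides the supporting machinery):

yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc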

Verify by running kubectl commands:

[root@nccztsjb-node-23 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
nccztsjb-node-23 NotReady control-plane,master 2m36s v1.23.2
[root@nccztsjb-node-23 ~]#
[root@nccztsjb-node-23 ~]# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7fc76f876d-9bwnp 0/1 Pending 0 2m53s
kube-system coredns-7fc76f876d-kpfgn 0/1 Pending 0 2m53s
kube-system etcd-nccztsjb-node-23 1/1 Running 0 3m7s
kube-system kube-apiserver-nccztsjb-node-23 1/1 Running 0 3m6s
kube-system kube-controller-manager-nccztsjb-node-23 1/1 Running 0 3m6s
kube-system kube-proxy-6xpf2 1/1 Running 0 2m53s
kube-system kube-scheduler-nccztsjb-node-23 1/1 Running 0 3m6s
[root@nccztsjb-node-23 ~]#

All components are installed; the coredns pods will only become healthy once a network plugin has been installed.

7. Deploying the Container Network Plugin (Calico)

Install the Calico network plugin, which handles communication between containers:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml 
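
If you want to review or adjust the manifest first, it can be downloaded before applying. Its default IP pool (the commented-out CALICO_IPV4POOL_CIDR env var, 192.168.0.0/16) differs from the podSubnet configured above; recent Calico versions pick up the kubeadm podSubnet on their own, as the pod IPs later in this document show, but if pod IPs do not land in 172.39.0.0/16, set that variable to match before applying:

curl -O https://docs.projectcalico.org/manifests/calico.yaml
# if needed, uncomment CALICO_IPV4POOL_CIDR and set it to "172.39.0.0/16"
kubectl apply -f calico.yaml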

Wait until all pods are running normally:

[root@nccztsjb-node-23 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-85b5b5888d-lzdb5 1/1 Running 0 2m11s
kube-system calico-node-7rjcq 1/1 Running 0 2m11s
kube-system coredns-7fc76f876d-9bwnp 1/1 Running 0 8m48s
kube-system coredns-7fc76f876d-kpfgn 1/1 Running 0 8m48s
kube-system etcd-nccztsjb-node-23 1/1 Running 0 9m2s
kube-system kube-apiserver-nccztsjb-node-23 1/1 Running 0 9m1s
kube-system kube-controller-manager-nccztsjb-node-23 1/1 Running 0 9m1s
kube-system kube-proxy-6xpf2 1/1 Running 0 8m48s
kube-system kube-scheduler-nccztsjb-node-23 1/1 Running 0 9m1s

Check the node status:

[root@nccztsjb-node-23 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
nccztsjb-node-23 Ready control-plane,master 9m40s v1.23.2
[root@nccztsjb-node-23 ~]#

The node status is now normal. A Kubernetes cluster with a single master node is up and running.

Note: the master node carries a taint, so regular workloads cannot be scheduled onto it; this is also in line with best practices.

8. Joining the Worker Nodes

Join the other two machines to the cluster as worker nodes using the command printed during initialization:

kubeadm join 172.20.58.83:6443 --token nmc6sr.okga4v88tdanm4be \
--discovery-token-ca-cert-hash sha256:53bb18482396f7f52e58061df6ce669169143f7e00b248e429f0ce2d7b1cc34e

The join output:

[root@nccztsjb-node-24 ~]# kubeadm join 172.20.58.83:6443 --token nmc6sr.okga4v88tdanm4be \
> --discovery-token-ca-cert-hash sha256:53bb18482396f7f52e58061df6ce669169143f7e00b248e429f0ce2d7b1cc34e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@nccztsjb-node-24 ~]#

Note: joining a node also involves pulling images, so allow some time for it.
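
The bootstrap token printed by kubeadm init is valid for 24 hours by default. If it has expired by the time another node needs to join, generate a fresh token together with the full join command on the master:

kubeadm token create --print-join-command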

Check the node status on the master:

[root@nccztsjb-node-23 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
nccztsjb-node-23 Ready control-plane,master 14m v1.23.2
nccztsjb-node-24 Ready <none> 2m5s v1.23.2
nccztsjb-node-25 Ready <none> 87s v1.23.2
[root@nccztsjb-node-23 ~]#

Check the pod status:

[root@nccztsjb-node-23 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-85b5b5888d-lzdb5 1/1 Running 0 8m51s
kube-system calico-node-5ndlp 1/1 Running 0 3m2s
kube-system calico-node-7rjcq 1/1 Running 0 8m51s
kube-system calico-node-9hm4q 1/1 Running 0 2m24s
kube-system coredns-7fc76f876d-9bwnp 1/1 Running 0 15m
kube-system coredns-7fc76f876d-kpfgn 1/1 Running 0 15m
kube-system etcd-nccztsjb-node-23 1/1 Running 0 15m
kube-system kube-apiserver-nccztsjb-node-23 1/1 Running 0 15m
kube-system kube-controller-manager-nccztsjb-node-23 1/1 Running 0 15m
kube-system kube-proxy-6xpf2 1/1 Running 0 15m
kube-system kube-proxy-j6tr8 1/1 Running 0 2m24s
kube-system kube-proxy-kjv9w 1/1 Running 0 3m2s
kube-system kube-scheduler-nccztsjb-node-23 1/1 Running 0 15m
[root@nccztsjb-node-23 ~]#

calico-node pods have also started on the newly added nodes.

9. Deploying Pods and Testing Network Access Between Nodes

kubectl create deployment nginx-test --image=172.20.58.152/middleware/nginx:1.21.4 --replicas=4

This creates 4 replicas.

[root@nccztsjb-node-23 ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-test-b76c7cb54-28j4l 1/1 Running 0 28s 172.39.21.66 nccztsjb-node-25 <none> <none>
nginx-test-b76c7cb54-pkpw2 1/1 Running 0 28s 172.39.157.194 nccztsjb-node-24 <none> <none>
nginx-test-b76c7cb54-rbfz8 1/1 Running 0 28s 172.39.157.193 nccztsjb-node-24 <none> <none>
nginx-test-b76c7cb54-wtch5 1/1 Running 0 28s 172.39.21.65 nccztsjb-node-25 <none> <none>
[root@nccztsjb-node-23 ~]#

Test access from a host to the pods:

[root@nccztsjb-node-23 ~]# ping 172.39.21.66
PING 172.39.21.66 (172.39.21.66) 56(84) bytes of data.
64 bytes from 172.39.21.66: icmp_seq=1 ttl=63 time=0.652 ms
64 bytes from 172.39.21.66: icmp_seq=2 ttl=63 time=0.492 ms
^C
--- 172.39.21.66 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.492/0.572/0.652/0.080 ms
[root@nccztsjb-node-23 ~]# ping 172.39.157.194
PING 172.39.157.194 (172.39.157.194) 56(84) bytes of data.
64 bytes from 172.39.157.194: icmp_seq=1 ttl=63 time=0.557 ms
64 bytes from 172.39.157.194: icmp_seq=2 ttl=63 time=0.422 ms
^C
--- 172.39.157.194 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.422/0.489/0.557/0.071 ms
[root@nccztsjb-node-23 ~]#

Test access between containers:

[root@nccztsjb-node-23 ~]# kubectl exec -it nginx-test-b76c7cb54-28j4l -- bash
root@nginx-test-b76c7cb54-28j4l:/# wget 172.39.157.194
bash: wget: command not found
root@nginx-test-b76c7cb54-28j4l:/# curl 172.39.157.194
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The page is reachable via curl.

This shows that node-to-pod and pod-to-pod networking both work.
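
After verifying connectivity, the test deployment can be removed:

kubectl delete deployment nginx-test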

OK, the kubeadm-based Kubernetes cluster deployment is now complete.