1. Machine Environment
Three virtual machines, each with 4 vCPUs, 8 GB RAM, and a 50 GB disk
Role | IP | Hostname
master | 10.101.14.148 | k8s-master-10-101-14-148
node | 10.101.14.19 | node1-10-101-14-19
node | 10.101.14.192 | node2-10-101-14-192
2. VM OS Version and Kubernetes Version
OS / kernel version | AnliOS 7.9 / 4.19.91-25.3
kubernetes | v1.24.3
containerd | v1.6.7
The AnliOS 7.9 used here is the open-source release from the domestic OpenAnolis (龙蜥) community. Now that CentOS has stopped receiving updates, it is an effective replacement and is fully compatible with CentOS; it is worth a look if you are interested.
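To confirm that a node matches the versions above, a quick check of the OS release and kernel can be run on each machine:
#Show the OS release and kernel version
cat /etc/os-release
uname -r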
3. Cluster Deployment
3.1 Cluster environment preparation (all nodes)
3.1.1 Set hostnames
#Set the hostname on 10.101.14.148
hostnamectl set-hostname k8s-master-10-101-14-148
#Set the hostname on 10.101.14.19
hostnamectl set-hostname node1-10-101-14-19
#Set the hostname on 10.101.14.192
hostnamectl set-hostname node2-10-101-14-192
3.1.2 Disable the firewall and SELinux
Run on every node:
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
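A quick way to confirm both changes took effect:
systemctl is-active firewalld   # expected: inactive
getenforce                      # expected: Permissive (Disabled after a reboot)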
3.1.3 Configure /etc/hosts
Add the host entries on every node:
cat > /etc/hosts << EOF
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.101.14.148 k8s-master-10-101-14-148
10.101.14.19 node1-10-101-14-19
10.101.14.192 node2-10-101-14-192
EOF
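A quick resolution test for the new entries:
getent hosts k8s-master-10-101-14-148 node1-10-101-14-19 node2-10-101-14-192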
3.1.4 Load kernel modules and enable kernel parameters
Load the modules and apply the sysctl settings on every node:
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
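Confirm the module is loaded and the parameters are in effect:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward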
Load the IPVS modules (on the 4.19 kernel, nf_conntrack_ipv4 has been merged into nf_conntrack):
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
yum install ipset -y
yum install ipvsadm -y
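Confirm the IPVS modules are loaded:
lsmod | grep -e ip_vs -e nf_conntrack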
3.1.5 Synchronize server time
yum install chrony -y
systemctl enable chronyd
systemctl start chronyd
chronyc sources
3.2 Install containerd (all nodes)
3.2.1 Configure the yum repository and dependencies
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install containerd.io -y
3.2.2 Install, configure, and start containerd
Configure and start containerd:
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
#Replace the registries in the config file (the default registries are not reachable from mainland China)
sed -i "s#k8s.gcr.io#registry.cn-hangzhou.aliyuncs.com/google_containers#g" /etc/containerd/config.toml
sed -i "s#https://registry-1.docker.io#https://registry.cn-hangzhou.aliyuncs.com#g" /etc/containerd/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#g' /etc/containerd/config.toml
systemctl daemon-reload
systemctl enable containerd
systemctl restart containerd
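Before moving on, make sure containerd is actually up:
systemctl is-active containerd
ctr version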
3.3 Configure and install kubeadm (all nodes)
Once containerd is installed and running, kubeadm can be installed. Here we install it from the Alibaba Cloud yum mirror.
3.3.1 Configure the Alibaba Cloud Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
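Optionally, confirm that the 1.24.3 packages are available from this mirror before installing:
yum list kubeadm --showduplicates | grep 1.24.3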
3.3.2 Install kubeadm, kubelet, kubectl, and configure crictl
# Install the kubeadm, kubelet, and kubectl packages
yum install kubelet-1.24.3 kubeadm-1.24.3 kubectl-1.24.3 -y
#Generate the crictl configuration file so crictl connects to containerd
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
#Enable and start the kubelet service
systemctl daemon-reload
systemctl enable kubelet && systemctl start kubelet
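kubelet will keep restarting until kubeadm init/join hands it a configuration; that is expected at this stage. crictl, however, can already talk to containerd:
crictl info                            # prints the containerd runtime status as JSON
systemctl status kubelet --no-pager    # an activating/auto-restart loop is normal before kubeadm init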
4. Initialize the Cluster
4.1.1 Run on the master node
kubeadm config print init-defaults > kubeadm.yaml
Adjust the configuration to your needs, including the imageRepository value, the kube-proxy mode, and so on. Since containerd is the runtime, cgroupDriver must be set to systemd when initializing the node.
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.101.14.148 # set to the master's address
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: k8s-master-10-101-14-148 # set to the master's hostname
  taints: # taint the master so application pods are not scheduled onto it
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs # set the kube-proxy mode to ipvs
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers # use the Aliyun image mirror
kind: ClusterConfiguration
kubernetesVersion: 1.24.3 # specify the Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/16 # specify the pod CIDR
  serviceSubnet: 10.96.0.0/12 # specify the service CIDR
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd # set cgroupDriver to systemd
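Optionally, the control-plane images can be pulled in advance (the same command the preflight output below points to):
kubeadm config images pull --config kubeadm.yaml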
Initialize the cluster with the configuration file generated above:
kubeadm init --config=kubeadm.yaml
[init] Using Kubernetes version: v1.24.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.5]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[apiclient] All control plane components are healthy after 56.001862 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.101.14.148:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:55c5c0593f8e07a51aaa6dac6c0187a289dee30998dc9cf58516a2195cde8fb1
Copy the kubeconfig file as instructed in the output above:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
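kubectl can now reach the API server. The master will report NotReady until the Calico network plugin from section 4.1.3 is installed:
kubectl get nodes
kubectl cluster-info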
4.1.2 Run on each node
kubeadm join 10.101.14.148:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:55c5c0593f8e07a51aaa6dac6c0187a289dee30998dc9cf58516a2195cde8fb1
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
4.1.3 Install the Calico network plugin on the master node
Install Calico as follows:
kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.24.3/manifests/tigera-operator.yaml
#Download the custom resources manifest so the pod CIDR can be adjusted
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.3/manifests/custom-resources.yaml
Edit custom-resources.yaml and change the cidr so that it matches the pod CIDR in kubeadm.yaml:
# This section includes base Calico installation configuration.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 172.16.0.0/16 # change this address
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
---
# This section configures the Calico API server.
# For more information, see: https://projectcalico.docs.tigera.io/v3.23/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
Apply custom-resources.yaml:
kubectl create -f custom-resources.yaml
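The Calico components start in the calico-system namespace; they can be watched until every pod is Running (this is the check the Calico quickstart recommends):
watch kubectl get pods -n calico-system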
The installation is now essentially complete. Wait a few minutes, then check the status with the commands below.
#Check the cluster component status
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
As you can see, the scheduler, controller-manager, and etcd are all healthy, and the Calico pods are also in the Running state, so the Kubernetes cluster is now up. The core components in kube-system look like this:
kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7f74c56694-2hsxl 1/1 Running 0 77d
coredns-7f74c56694-rssvf 1/1 Running 0 77d
etcd-k8s-master-10-101-14-148 1/1 Running 0 77d
kube-apiserver-k8s-master-10-101-14-148 1/1 Running 0 77d
kube-controller-manager-k8s-master-10-101-14-148 1/1 Running 0 77d
kube-proxy-d2896 1/1 Running 0 77d
kube-proxy-tlqrt 1/1 Running 0 77d
kube-proxy-wvsmg 1/1 Running 0 77d
kube-scheduler-k8s-master-10-101-14-148 1/1 Running 0 77d
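As a final check, all three nodes should now report Ready:
kubectl get nodes -o wide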