Setting Up Kubernetes on CentOS 8

Date: 2022-11-04 16:01:51

Hostname     IP                  Components
k8s-master   192.168.40.128/24   kubeadm, kubelet, kubectl, docker-ce
k8s-node1    192.168.40.129/24   kubeadm, kubelet, kubectl, docker-ce
k8s-node2    192.168.40.130/24   kubeadm, kubelet, kubectl, docker-ce

OS: CentOS 8

Specs: master: 8 cores, 4 GB RAM; node1: 8 cores, 2 GB RAM; node2: 8 cores, 2 GB RAM

Network: IPv4, IPv6, VPN

Because the environment has VPN access, some of the steps below pull resources directly from the official upstream repositories.

 

1. Update packages

[root@localhost ~]# dnf update


 

2. Install Docker on all three machines

 

[root@localhost ~]# vim /etc/hosts
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.40.128 k8s-master master
192.168.40.129 k8s-node1 node1
192.168.40.130 k8s-node2 node2
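
The /etc/hosts entries only map names to IPs; the machines themselves keep the hostname localhost.localdomain, which is why the nodes later register under that name. If you prefer the nodes to register as k8s-master / k8s-node1 / k8s-node2, you could also set the hostname on each machine. This is an extra step, not part of the original article:

[root@localhost ~]# hostnamectl set-hostname k8s-master    # on the master; use k8s-node1 / k8s-node2 on the workers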

[root@localhost ~]# dnf install yum-utils device-mapper-persistent-data lvm2
[root@localhost ~]# dnf remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost ~]# wget https://download.docker.com/linux/centos/docker-ce.repo
[root@localhost ~]# dnf update
[root@localhost ~]# dnf install docker-ce --nobest


 

3. Enable Docker at boot and start it

[root@localhost ~]# systemctl enable docker
[root@localhost ~]# systemctl start docker
[root@localhost ~]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-12-18 10:25:46 EST; 6s ago
     Docs: https://docs.docker.com
 Main PID: 73169 (dockerd)
    Tasks: 32 (limit: 5935)
   Memory: 105.9M
   CGroup: /system.slice/docker.service
           ├─73169 /usr/bin/dockerd
           └─73180 docker-containerd --config /var/run/docker/containerd/containerd.toml
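
The node join output in step 13 warns that Docker is using the cgroupfs cgroup driver while kubelet recommends systemd. Optionally, you could switch Docker to the systemd driver before installing Kubernetes. A minimal sketch using Docker's standard daemon.json exec-opts key; this was not configured in the original setup:

[root@localhost ~]# cat <<EOF > /etc/docker/daemon.json
> {
>   "exec-opts": ["native.cgroupdriver=systemd"]
> }
> EOF
[root@localhost ~]# systemctl restart docker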


4. Configure kernel parameters

[root@localhost ~]# vim /etc/sysctl.d/k8s.conf

[root@localhost ~]# cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0

[root@localhost ~]# sysctl --system
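
The two bridge-nf-call sysctls only exist once the br_netfilter module is loaded; if sysctl --system reports that those keys are missing, loading the module first and re-applying should help (an extra step, not in the original):

[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# sysctl --system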


 

5. Load the required kernel modules

[root@localhost ~]# vim /etc/sysconfig/modules/ipvs.modules
[root@localhost ~]# cat /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
[root@localhost ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@localhost ~]# ./etc/sysconfig/modules/ipvs.modules
-bash: ./etc/sysconfig/modules/ipvs.modules: No such file or directory
[root@localhost ~]# /etc/sysconfig/modules/ipvs.modules
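
To confirm that the IPVS-related modules actually loaded, you can check lsmod; a quick optional verification:

[root@localhost ~]# lsmod | grep -e ip_vs -e nf_conntrack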


 

6. Add the kubeadm yum repository

[root@localhost ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
> exclude=kube*
> EOF
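
Because the repo sets exclude=kube*, any install or update of the Kubernetes packages must pass --disableexcludes=kubernetes, as step 7 does. A quick optional check that the repo is usable (not part of the original steps):

[root@localhost ~]# dnf makecache
[root@localhost ~]# dnf list --disableexcludes=kubernetes kubelet kubeadm kubectl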


 

7. Configure the master node

[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost ~]# mv docker-ce.repo{,.back}
[root@localhost ~]# dnf update --nobest
[root@localhost ~]# dnf install ipvsadm
[root@localhost ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
[root@localhost ~]# systemctl enable kubelet && systemctl start kubelet

[root@localhost ~]# kubeadm config print init-defaults > kubeadm-init.yaml


[root@localhost ~]# vim kubeadm-init.yaml
[root@localhost ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: localhost.localdomain
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
Edit the file: change advertiseAddress to the master's IP (192.168.40.128) and append a KubeProxyConfiguration section setting the proxy mode to ipvs:
[root@localhost ~]# vim kubeadm-init.yaml
[root@localhost ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.40.128
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: localhost.localdomain
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
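
Note that step 10 below runs kubeadm init with only --pod-network-cidr and does not reference this file; for the advertiseAddress and IPVS kube-proxy mode edited above to actually take effect, kubeadm would presumably need to be pointed at the file explicitly (with the pod network CIDR moved into networking.podSubnet in the ClusterConfiguration), for example:

[root@localhost ~]# kubeadm init --config kubeadm-init.yaml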


 

8. Pull the images

[root@localhost ~]# kubeadm config images pull --config kubeadm-init.yaml
W1218 10:48:44.641505   75319 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1218 10:48:44.641691   75319 validation.go:28] Cannot validate kubelet config - no validator is available
[config/images] Pulled k8s.gcr.io/kube-apiserver:v1.17.0
[config/images] Pulled k8s.gcr.io/kube-controller-manager:v1.17.0
[config/images] Pulled k8s.gcr.io/kube-scheduler:v1.17.0
[config/images] Pulled k8s.gcr.io/kube-proxy:v1.17.0
[config/images] Pulled k8s.gcr.io/pause:3.1
[config/images] Pulled k8s.gcr.io/etcd:3.4.3-0
[config/images] Pulled k8s.gcr.io/coredns:1.6.5

[root@localhost ~]# docker image ls
REPOSITORY                           TAG       IMAGE ID       CREATED         SIZE
k8s.gcr.io/kube-proxy                v1.17.0   7d54289267dc   10 days ago     116MB
k8s.gcr.io/kube-apiserver            v1.17.0   0cae8d5cc64c   10 days ago     171MB
k8s.gcr.io/kube-controller-manager   v1.17.0   5eb3b7486872   10 days ago     161MB
k8s.gcr.io/kube-scheduler            v1.17.0   78c190f736b1   10 days ago     94.4MB
k8s.gcr.io/coredns                   1.6.5     70f311871ae1   6 weeks ago     41.6MB
k8s.gcr.io/etcd                      3.4.3-0   303ce5db0e90   7 weeks ago     288MB
k8s.gcr.io/pause                     3.1       da86e6ba6ca1   24 months ago   742kB
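
These images come straight from k8s.gcr.io, which relies on the VPN mentioned at the top. Without VPN access, a common workaround (not used in this article) is to switch imageRepository in kubeadm-init.yaml to a public mirror such as registry.aliyuncs.com/google_containers, then pull again:

[root@localhost ~]# sed -i 's#imageRepository: k8s.gcr.io#imageRepository: registry.aliyuncs.com/google_containers#' kubeadm-init.yaml
[root@localhost ~]# kubeadm config images pull --config kubeadm-init.yaml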


 

9. Enable kubelet at boot and start it

[root@localhost ~]# systemctl enable kubelet
[root@localhost ~]# systemctl start kubelet
If kubelet fails to start, it may be because swap has not been turned off:
[root@localhost ~]# swapoff -a
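
swapoff -a only disables swap until the next reboot; to make it permanent you would also comment out the swap entry in /etc/fstab, for example (an extra step, not in the original):

[root@localhost ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab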


 

10. Initialize the master node

[root@localhost ~]# kubeadm init --pod-network-cidr=10.244.0.0/16

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.40.128:6443 --token eitxr6.l7que99ui33phdts \
    --discovery-token-ca-cert-hash sha256:2b65bf29e32c1906391b66796f3cd5cf79bce239b43ff82fefb73ace984ac294
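
The init output asks for a pod network add-on, and --pod-network-cidr=10.244.0.0/16 matches flannel's default subnet. Deploying flannel would look roughly like the following; the manifest URL is the one flannel documented around this era and may have moved since (the project now lives under the flannel-io organization):

[root@localhost ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml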


 

11. Prepare the kubeconfig file as instructed

[root@localhost ~]# mkdir -p $HOME/.kube
[root@localhost ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@localhost ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
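
If you are working as root, an alternative that kubeadm also mentions is simply exporting the admin kubeconfig for the current shell:

[root@localhost ~]# export KUBECONFIG=/etc/kubernetes/admin.conf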


 

12. Check that the master components are healthy

[root@localhost ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@localhost ~]# kubectl get pods -A
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-6955765f44-q42ch                        1/1     Running   0          3m11s
kube-system   coredns-6955765f44-xld2q                        1/1     Running   0          3m11s
kube-system   etcd-localhost.localdomain                      1/1     Running   0          3m27s
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          3m27s
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          3m27s
kube-system   kube-proxy-zb4dq                                1/1     Running   0          3m11s
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          3m26s

[root@localhost ~]# kubectl get nodes
NAME                    STATUS   ROLES    AGE     VERSION
localhost.localdomain   Ready    master   4m38s   v1.17.0


 

13. Configure the two worker nodes

This join command is the one printed at the end of kubeadm init on the master node:
[root@localhost ~]# kubeadm join 192.168.40.128:6443 --token eitxr6.l7que99ui33phdts \
>     --discovery-token-ca-cert-hash sha256:2b65bf29e32c1906391b66796f3cd5cf79bce239b43ff82fefb73ace984ac294
W1218 23:48:20.344418    4134 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


On the master node, check whether the worker nodes now show up:

[root@localhost ~]# kubectl get nodes
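
The join token above is only valid for 24 hours (the ttl set in kubeadm-init.yaml); if it has expired by the time a node joins, a fresh join command can be generated on the master, for example:

[root@localhost ~]# kubeadm token create --print-join-command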