[Kubernetes Installation] Deploying a Kubernetes v1.28.2 Cluster from Scratch with kubeadm

Date: 2024-07-09 06:58:54

Table of Contents

    • 1. Virtual machine configuration
    • 2. About Kubernetes v1.28.2
    • 3. CentOS 7.9 system initialization
      • 3.1 Basic CentOS environment setup
        • 3.1.1 Configure hosts
        • 3.1.2 Permanently disable SELinux
        • 3.1.3 Disable swap
        • 3.1.4 Disable the firewall on all nodes
        • 3.1.5 Configure NTP time synchronization
        • 3.1.6 Add the Kubernetes package mirror
      • 3.2 Kernel upgrade
        • 3.2.1 Check the current kernel version
        • 3.2.2 Import the ELRepo repository public key
        • 3.2.3 Install the ELRepo yum repository
        • 3.2.4 Install the lt (long-term) kernel
        • 3.2.5 Set the default boot kernel
        • 3.2.6 Reboot
        • 3.2.7 Verify the kernel version
      • 3.3 Kubernetes runtime environment
        • 3.3.1 Configure kernel parameters
        • 3.3.2 Configure kernel modules
        • 3.3.3 Reboot and verify
        • 3.3.4 Install containerd
        • 3.3.5 Configure containerd
    • 4. Installing the Kubernetes v1.28.2 cluster
      • 4.1 Install kubeadm, kubelet, and kubectl
      • 4.2 Deploy the control-plane node
      • 4.3 Deploy Calico
        • 4.3.1 Download the Calico manifest
        • 4.3.2 Set the Calico IP pool CIDR
        • 4.3.3 Pin the network interface
        • 4.3.4 Apply the Calico manifest
      • 4.4 Join the worker nodes
      • 4.5 Install the Kubernetes dashboard
    • 5. Deploying the metrics-server add-on
      • 5.1 Download the yaml file
      • 5.2 Edit the yaml file
      • 5.3 Apply the yaml file
      • 5.4 Verify
    • 6. References

1. Virtual machine configuration

host       hostname   os          role           hardware
11.0.1.10  master01   CentOS 7.9  control-plane  CPU: 2C, RAM: 8 GB, Disk: 50 GB
11.0.1.11  node01     CentOS 7.9  worker         CPU: 2C, RAM: 8 GB, Disk: 50 GB

2. About Kubernetes v1.28.2

  1. Resource management: v1.28 adds new resource-management features, such as node-level resource limits and quota management, giving administrators finer control over cluster resource usage and improving stability and performance.
  2. Networking: network policy handling was improved for better performance and security, and newer plugin integrations support more network topologies.
  3. Security hardening: improvements to authentication and authorization, and to key and certificate management, help keep unauthorized access out of the cluster.
  4. Simpler operations: new tooling streamlines deployment and day-to-day management, including better multi-cluster support and improved self-healing behavior.
  5. Extensibility: enhancements such as improved custom-resource support and richer event data make the cluster easier to extend.

3. CentOS 7.9 system initialization

[root@master01 ~]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
[root@master01 ~]# uname -a
Linux master01 5.4.277-1.el7.elrepo.x86_64 #1 SMP Sun May 26 13:12:21 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux

3.1 Basic CentOS environment setup

Run every step in this section on all nodes.

3.1.1 Configure hosts
[root@master01 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
11.0.1.10       master01
11.0.1.11       node01
3.1.2 Permanently disable SELinux
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

setenforce 0
3.1.3 Disable swap
sed -i '/swap/s/^/#/g' /etc/fstab

swapoff -a 
3.1.4 Disable the firewall on all nodes
systemctl stop firewalld
systemctl disable firewalld
systemctl is-enabled firewalld
3.1.5 Configure NTP time synchronization
ntpdate ntp1.aliyun.com
vi /etc/crontab
1 * * * * root /usr/sbin/ntpdate ntp1.aliyun.com && /sbin/hwclock -w
3.1.6 Add the Kubernetes package mirror
cat >/etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
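A typo inside the heredoc (a stray space in the section header, a truncated baseurl) only surfaces on the next yum run. One defensive option, sketched below, is to stage the file in a temp path and check the fields yum needs before copying it to /etc/yum.repos.d/kubernetes.repo:

```shell
# Stage the repo definition and verify the essential fields before installing it.
tmp_repo=$(mktemp)
cat > "$tmp_repo" <<'EOF'
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
for pattern in '^\[kubernetes\]$' '^baseurl=https://' '^enabled=1$'; do
  grep -Eq "$pattern" "$tmp_repo" || { echo "missing: $pattern" >&2; exit 1; }
done
echo "repo file OK"
```

If the checks pass, copy the staged file into place and run `yum makecache` to confirm the mirror is reachable.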

3.2 Kernel upgrade

3.2.1 Check the current kernel version
[root@master01 ~]# uname -a
Linux localhost.localdomain 3.10.0-957.el7.x86_64 #1 SMP Thu Nov 8 23:39:32 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
3.2.2 Import the ELRepo repository public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
3.2.3 Install the ELRepo yum repository
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
3.2.4 Install the lt (long-term) kernel
yum -y --enablerepo=elrepo-kernel install kernel-lt
Type  Version
ml    mainline (latest stable)
lt    long-term maintenance
3.2.5 Set the default boot kernel
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
grub2-set-default 0
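The awk one-liner lists each top-level menuentry title together with its index; `grub2-set-default` then takes that index, and a freshly installed kernel normally sits at index 0. The same awk run against a stripped-down sample grub.cfg shows what the mapping looks like:

```shell
# Demonstrate the menuentry-listing awk against a minimal sample grub.cfg.
sample=$(mktemp)
cat > "$sample" <<'EOF'
menuentry 'CentOS Linux (5.4.277-1.el7.elrepo.x86_64) 7 (Core)' --class centos {
}
menuentry 'CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)' --class centos {
}
EOF
awk -F\' '$1=="menuentry " {print i++ " : " $2}' "$sample"
```

This prints `0 : CentOS Linux (5.4.277-1.el7.elrepo.x86_64) 7 (Core)` and `1 : CentOS Linux (3.10.0-957.el7.x86_64) 7 (Core)`, so index 0 is the newly installed lt kernel.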
3.2.6 Reboot
reboot
3.2.7 Verify the kernel version
[root@master01 ~]# uname -a
Linux master01 5.4.277-1.el7.elrepo.x86_64 #1 SMP Sun May 26 13:12:21 EDT 2024 x86_64 x86_64 x86_64 GNU/Linux

3.3 Kubernetes runtime environment

Note: run everything below on all nodes, and repeat these steps on any node added to the cluster later.

3.3.1 Configure kernel parameters
cat > /etc/sysctl.d/Kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
EOF
# Apply the settings
sysctl --system
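Whether the settings are live can be checked straight from /proc. Note that the two bridge keys only exist once the br_netfilter module is loaded (done in 3.3.4), so this sketch tolerates their absence:

```shell
# ip_forward always exists; the bridge-nf keys appear only after
# br_netfilter is loaded.
cat /proc/sys/net/ipv4/ip_forward
for f in /proc/sys/net/bridge/bridge-nf-call-iptables \
         /proc/sys/net/bridge/bridge-nf-call-ip6tables; do
  if [ -r "$f" ]; then
    echo "$f = $(cat "$f")"
  else
    echo "$f missing (load br_netfilter first)"
  fi
done
```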
3.3.2 Configure kernel modules
yum -y install conntrack ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
# Kernel modules required for IPVS
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
# Load the modules now (systemd-modules-load is a static unit and runs again at every boot)
systemctl restart systemd-modules-load
3.3.3 Reboot and verify
[root@master01 ~]# reboot
# Verify the modules loaded (on kernels >= 4.19, nf_conntrack_ipv4 is merged into nf_conntrack)
lsmod |egrep "ip_vs|nf_conntrack"
nf_conntrack_ipv4      15053  26
nf_defrag_ipv4         12729  1 nf_conntrack_ipv4
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  10 ip_vs,nf_nat,nf_nat_ipv4,nf_nat_ipv6,xt_conntrack,nf_nat_masquerade_ipv4,nf_nat_masquerade_ipv6,nf_conntrack_netlink,nf_conntrack_ipv4,nf_conntrack_ipv6
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack
3.3.4 Install containerd
yum -y install yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun Docker CE repo (containerd.io is published there)
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Modules required by containerd
cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
# Load the overlay module now
modprobe overlay
# Load the br_netfilter module now
modprobe br_netfilter
# Install containerd
yum install containerd.io -y
3.3.5 Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
# Manage cgroups with systemd
sed -i '/SystemdCgroup/s/false/true/g' /etc/containerd/config.toml
# Pull the sandbox (pause) image from Aliyun, using the 3.9 tag that kubeadm 1.28 expects
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
# Start containerd
systemctl enable containerd
systemctl start containerd
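Both sed edits fail silently if the pattern drifts (a different containerd release may ship another default pause tag, for example), so it is worth seeing the substitutions work. Here are the same expressions run against a two-line sample of containerd 1.6's defaults:

```shell
# Run the config.toml substitutions against a minimal sample and show the result.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
    sandbox_image = "registry.k8s.io/pause:3.6"
            SystemdCgroup = false
EOF
sed -i '/SystemdCgroup/s/false/true/g' "$cfg"
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' "$cfg"
cat "$cfg"
```

After editing the real file, `grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml` should show `true` and the Aliyun pause:3.9 image; if either is unchanged, check the patterns against your config.toml.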

4. Installing the Kubernetes v1.28.2 cluster

4.1 Install kubeadm, kubelet, and kubectl

# List available versions
yum list kubelet --showduplicates |grep 1.28

# Install version 1.28.2
yum -y install kubectl-1.28.2 kubelet-1.28.2 kubeadm-1.28.2

# Enable kubelet (it will restart in a loop until kubeadm init/join runs; that is expected)
systemctl enable kubelet
systemctl start kubelet

4.2 Deploy the control-plane node

Run the following on the control-plane node only; the cluster is initialized with kubeadm.

# List the images kubeadm needs
[root@master01 ~]# kubeadm config images list --kubernetes-version=v1.28.2
registry.k8s.io/kube-apiserver:v1.28.2
registry.k8s.io/kube-controller-manager:v1.28.2
registry.k8s.io/kube-scheduler:v1.28.2
registry.k8s.io/kube-proxy:v1.28.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/coredns/coredns:v1.10.1

# Initialize the cluster; the advertise address must be master01's own IP (11.0.1.10 per section 1)
[root@master01 ~]# kubeadm init --kubernetes-version=1.28.2 \
--apiserver-advertise-address=11.0.1.10 \
--image-repository  registry.aliyuncs.com/google_containers \
--pod-network-cidr=172.16.0.0/16
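`kubeadm init` finishes by printing the commands that give kubectl access to the new cluster. They are wrapped here in a small function so the paths (kubeadm's defaults are assumed) can be overridden:

```shell
# kubeadm init suggests these steps; defaults follow kubeadm's conventions.
setup_kubeconfig() {
  local src="${1:-/etc/kubernetes/admin.conf}"
  local dest_dir="${2:-$HOME/.kube}"
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/config"
  chown "$(id -u):$(id -g)" "$dest_dir/config"
}
# On the control-plane node, simply run: setup_kubeconfig
```

Once the kubeconfig is in place, `kubectl get nodes` should list master01 (NotReady until the CNI plugin from 4.3 is installed).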

4.3 Deploy Calico

4.3.1 Download the Calico manifest
wget https://docs.projectcalico.org/manifests/calico.yaml
4.3.2 Set the Calico IP pool CIDR
# Uncomment CALICO_IPV4POOL_CIDR and set it to the value passed to --pod-network-cidr
- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"
4.3.3 Pin the network interface
# Cluster type to identify the deployment type
  - name: CLUSTER_TYPE
    value: "k8s,bgp"
  - name: IP_AUTODETECTION_METHOD
    value: "interface=ens32" # ens32 is this host's NIC name; adjust to your environment
4.3.4 Apply the Calico manifest
kubectl apply -f calico.yaml

[root@master01 ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-658d97c59c-fzmmj   1/1     Running   5 (31m ago)   24d
calico-node-5fdwc                          1/1     Running   5 (31m ago)   24d
calico-node-k5smq                          1/1     Running   5 (31m ago)   24d
coredns-66f779496c-5qxnw                   1/1     Running   5 (31m ago)   24d
coredns-66f779496c-tg9vb                   1/1     Running   5 (31m ago)   24d
etcd-master01                              1/1     Running   5 (31m ago)   24d
kube-apiserver-master01                    1/1     Running   5 (31m ago)   24d
kube-controller-manager-master01           1/1     Running   6 (30m ago)   24d
kube-proxy-thm89                           1/1     Running   5 (31m ago)   24d
kube-proxy-tpcdh                           1/1     Running   5 (31m ago)   24d
kube-scheduler-master01                    1/1     Running   5 (31m ago)   24d
metrics-server-84989b68d9-98bqn            1/1     Running   5 (31m ago)   24d

4.4 Join the worker nodes

# Run on every worker node
kubeadm join 11.0.1.10:6443 --token l906wz.0fydt3hcfbogwlo9 \
        --discovery-token-ca-cert-hash sha256:2604d3aab372a483b26bcbdafdb54d7746226975c3a317db07d94eccdfca51be
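The token above is specific to this cluster and expires after 24 hours by default; a fresh join command can be printed on the control-plane node with `kubeadm token create --print-join-command`. The `--discovery-token-ca-cert-hash` value is just a SHA-256 digest of the cluster CA's public key, so it can also be recomputed from ca.crt. A sketch, assuming an RSA CA key (kubeadm's default):

```shell
# Compute the kubeadm discovery hash from a CA certificate:
# extract the public key, DER-encode it, and take its SHA-256 digest.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 \
    | awk '{print $NF}'
}
# On the control-plane node: ca_cert_hash /etc/kubernetes/pki/ca.crt
```

Prefix the result with `sha256:` when passing it to `kubeadm join`.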

# Check node status on the control-plane node
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master01   Ready    control-plane   24d   v1.28.2
node01     Ready    <none>          24d   v1.28.2

4.5 Install the Kubernetes dashboard

Install kubectl command completion:

yum -y install bash-completion
echo "source <(kubectl completion bash)" >> /etc/profile
source /etc/profile

Download the kubernetes-dashboard manifest:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v3.0.0-alpha0/charts/kubernetes-dashboard.yaml -O recommended.yaml

Edit the Service definition so the dashboard is exposed on a NodePort:

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort       # added: expose the Service via NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000  # added: fixed port (NodePort range is 30000-32767)
  selector:
    k8s-app: kubernetes-dashboard

# Apply the edited manifest
kubectl apply -f recommended.yaml

# Watch the rollout
[root@master01 ~]# kubectl get all -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS      AGE
pod/dashboard-metrics-scraper-856cb79ffb-5txjq   1/1     Running   4 (41m ago)   24d
pod/kubernetes-dashboard-8ff59f48c-svt95         1/1     Running   3 (22d ago)   24d

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.99.106.70    <none>        8000/TCP        24d
service/kubernetes-dashboard        NodePort    10.98.125.159   <none>        443:30000/TCP   24d

NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dashboard-metrics-scraper   1/1     1            1           24d
deployment.apps/kubernetes-dashboard        1/1     1            1           24d

NAME                                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dashboard-metrics-scraper-856cb79ffb   1         1         1       24d
replicaset.apps/kubernetes-dashboard-8ff59f48c         1         1         1       24d

Create an admin user (a ServiceAccount bound to cluster-admin, plus a long-lived token Secret):

[root@master01 ~]# vim admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard  
---
apiVersion: v1
kind: Secret
metadata:
  name: kubernetes-dashboard-admin
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin"
type: kubernetes.io/service-account-token


# Create a short-lived token for the admin user
[root@master01 ~]# kubectl -n kubernetes-dashboard create token admin
eyJhbGciOiJSUzI1NiIsImtpZCI6IlgxVzZreFRlRXZldHNpMGoxUU9lTGxaQ050QnNDZHlmX1RpSk5Xc25BdjgifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzIwNDMwNjc2LCJpYXQiOjE3MjA0MjcwNzYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbiIsInVpZCI6ImU1NzgxNTI3LWU1MDktNGYwZC05NmU5LTk0OTRjNmJjODQ4ZiJ9fSwibmJmIjoxNzIwNDI3MDc2LCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4ifQ.cXKn-qz3QvQdPuxUgrfKws-GY0f958r_Lv_4XoFXhZkdROAErfETM2TuzU7mEOhAdc7f09t_D4HWETfLFoj-adbsD9sq4KSAVPjcFm-pFS1F_qPovIe1F3CyYCWBnGRWlxlSyvvtktgoUK3Zh8SHsniea5CMbBLDxd8PrH_KARu8jIoCXsqE2VHQE77r_EdPRIqLFgntpBoh4M9Xgg7lTEo1D9NoWmIjpJv67GJyD1jvORadB6H2Eq9M-Q6pH4qqFWnEeJUbG3ZabvyMVfnlE3vOpsaXMB2QfqFMrpj2WEmaalAVHxz74lEQLVapUDehCxNnYR9NgP3Wm4ZftNIQtQ

# Or read the long-lived token from the Secret created above
Token=$(kubectl -n kubernetes-dashboard get secret |awk '/kubernetes-dashboard-admin/ {print $1}')
kubectl describe secrets -n kubernetes-dashboard ${Token} |grep token |awk 'NR==NF {print $2}'
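The grep/awk pipeline above works only by coincidence (`NR==NF` happens to be true on the line that carries the token). Because the Secret in admin.yaml carries the service-account-token annotation, the token can instead be read directly with jsonpath; a sketch, assuming the Secret name from admin.yaml:

```shell
# Read the long-lived admin token straight from the Secret.
# jsonpath returns it base64-encoded, hence the decode step.
dashboard_token() {
  kubectl -n kubernetes-dashboard get secret kubernetes-dashboard-admin \
    -o jsonpath='{.data.token}' | base64 -d
}
# Usage (on the control-plane node): dashboard_token
```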

Finally, log in with the token; the URL is https://<any-cluster-node-IP>:30000.


5. Deploying the metrics-server add-on

metrics-server supplies the container resource metrics used by Kubernetes autoscaling. It scrapes resource metrics from each node's kubelet and exposes them through the Metrics API on the Kubernetes API server, where HPA and VPA consume them.

5.1 Download the yaml file

wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml -O metrics-server.yaml

5.2 Edit the yaml file

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  # ...
  template:
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls     # add this line (skip kubelet serving-certificate verification)
        image: registry.cn-hangzhou.aliyuncs.com/rainux/metrics-server:v0.6.4

5.3 Apply the yaml file

# Apply the manifest
[root@master01 ~]# kubectl apply -f metrics-server.yaml

# Check that it is running
[root@master01 ~]# kubectl get pods -n kube-system | grep metric
metrics-server-84989b68d9-98bqn            1/1     Running   5 (52m ago)   24d

# Query the Metrics API directly
[root@master01 ~]# kubectl get --raw /apis/metrics.k8s.io/v1beta1
{"kind":"APIResourceList","apiVersion":"v1","groupVersion":"metrics.k8s.io/v1beta1","resources":[{"name":"nodes","singularName":"","namespaced":false,"kind":"NodeMetrics","verbs":["get","list"]},{"name":"pods","singularName":"","namespaced":true,"kind":"PodMetrics","verbs":["get","list"]}]}

5.4 Verify

[root@master01 ~]# kubectl top nodes
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master01   232m         11%    1258Mi          16%
node01     93m          4%     940Mi           12%

6. References

Kubernetes 部署集群1.28.2版本(无坑) ("Deploying a Kubernetes 1.28.2 cluster, pitfall-free") - smx886 - 博客园 (cnblogs.com)

K8s持久化存储PV和PVC(通俗易懂) ("K8s persistent storage: PV and PVC, plainly explained") - ****博客