Deploying K8S with kubeadm

Date: 2024-03-03 16:32:55

1 Building K8S with kubeadm

1.1 kubeadm build steps

kubeadm init

When installing a K8S cluster with kubeadm, you can quickly initialize a K8S master control plane from an initialization configuration file or from command-line options

kubeadm join

Using the information printed by kubeadm init, quickly join a worker node (or another master node) to the K8S cluster

1) Initialize all nodes and install the container engine, kubeadm, and kubelet
2) Run kubeadm config print init-defaults to generate the K8S cluster initialization configuration file, then edit it
3) Run kubeadm init --config with that configuration file to initialize the cluster and create the master control-plane node
4) On the other nodes, run kubeadm join to add worker nodes (or additional master nodes) to the cluster
5) Install a CNI network plugin (flannel or calico)
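A condensed command sketch of these five steps (the version number, config path, and join parameters are illustrative placeholders):
yum install -y docker-ce kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15    #step 1: container engine + kubeadm/kubelet/kubectl on every node
kubeadm config print init-defaults > /opt/kubeadm-config.yaml               #step 2: generate the init config, then edit it
kubeadm init --config /opt/kubeadm-config.yaml --upload-certs               #step 3: initialize the first master
kubeadm join <apiserver-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash <hash>    #step 4: run on each remaining node
kubectl apply -f kube-flannel.yml                                           #step 5: install the CNI plugin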

1.2 How to renew the certificates of a K8S cluster built with kubeadm

1) Back up the certificates and the kubeconfig files

mkdir /root/k8s.bak
cp -r /etc/kubernetes/pki /root/k8s.bak/
cp /etc/kubernetes/*.yaml /root/k8s.bak/

2) Renew the certificates

kubeadm alpha certs renew all --config=/opt/k8s/kubeadm-config.yaml

3) Regenerate the kubeconfig files

kubeadm init phase kubeconfig all --config=/opt/k8s/kubeadm-config.yaml

4) Restart the kubelet process and the Pods of the other K8S components

systemctl restart kubelet
mv /etc/kubernetes/manifests/*.yaml /tmp
mv /tmp/*.yaml /etc/kubernetes/manifests/

5) Check the certificate validity

kubeadm alpha certs check-expiration
openssl x509 -noout -dates -in /etc/kubernetes/pki/XXX.crt
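For example, to check one specific certificate (apiserver.crt is one of the files kubeadm generates under /etc/kubernetes/pki):
openssl x509 -noout -dates -in /etc/kubernetes/pki/apiserver.crt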

2 Hands-on deployment of a highly available K8S cluster with kubeadm

Kubeadm - K8S 1.20 - highly available cluster deployment
Notes:
The master nodes need more than 2 CPU cores
●The newest release is not necessarily the best choice: its core features are stable relative to older releases, but new features and interfaces are comparatively unstable
●Once you have learned the HA deployment for one version, other versions work much the same way
●Upgrade the hosts to CentOS 7.9 where possible
●Upgrade the kernel to a stable 4.19+ kernel
●When picking a K8S release, prefer patch versions such as 1.xx.5 or later (these are usually more stable)

2.1 Prepare the initial environment

On all nodes: disable the firewall, SELinux, and swap
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab

Set the hostname on each node and edit the hosts file on all nodes
hostnamectl set-hostname master01
hostnamectl set-hostname master02
hostnamectl set-hostname master03
hostnamectl set-hostname node01
hostnamectl set-hostname node02
hostnamectl set-hostname node03
vim /etc/hosts
192.168.111.5 master01
192.168.111.6 master02
192.168.111.7 master03
192.168.111.8 node01
192.168.111.9 node02
192.168.111.55 node03

Synchronize the time on all nodes
yum -y install ntpdate
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
ntpdate time2.aliyun.com
systemctl enable --now crond
crontab -e
*/30 * * * * /usr/sbin/ntpdate time2.aliyun.com

Raise the Linux resource limits on all nodes
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

Upgrade the kernel on all nodes
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm -O /opt/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
cd /opt/
yum localinstall -y kernel-ml*

Make the new kernel the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
grubby --default-kernel
reboot
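After the reboot, confirm that the new kernel is in use:
uname -r
#expected output: 4.19.12-1.el7.elrepo.x86_64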

Tune the kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
#apply the parameters
sysctl --system  
Load the ip_vs modules
for i in $(ls /usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs|grep -o "^[^.]*");do echo $i; /sbin/modinfo -F filename $i >/dev/null 2>&1 && /sbin/modprobe $i;done
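A minimal sketch to make the module loading persistent across reboots (assuming kernel 4.19+, where the conntrack module is named nf_conntrack):
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl enable --now systemd-modules-load.service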

2.2 Install docker on all nodes

yum install -y yum-utils device-mapper-persistent-data lvm2 
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 
yum install -y docker-ce docker-ce-cli containerd.io
mkdir /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "500m", "max-file": "3"
  }
}
EOF
systemctl daemon-reload
systemctl restart docker.service
systemctl enable docker.service 
docker info | grep "Cgroup Driver"
Cgroup Driver: systemd

2.3 Install kubeadm, kubelet and kubectl on all nodes

Define the kubernetes yum repository
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubelet-1.20.15 kubeadm-1.20.15 kubectl-1.20.15

Configure kubelet to use the Aliyun pause image
cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"
EOF


Enable kubelet to start at boot
systemctl enable --now kubelet

2.4 Install and configure nginx + keepalived as a highly available layer-4 proxy

Install and configure nginx layer-4 proxying on the two high-availability nodes
yum -y install nginx

vim /etc/nginx/nginx.conf

user  nginx;
worker_processes  auto;

events {
    worker_connections  1024;
}
stream {
     upstream k8s-apiservers {
      server 192.168.111.5:6443;
      server 192.168.111.6:6443;
      server 192.168.111.7:6443;
}
     server {
        listen 6443;
        proxy_pass k8s-apiservers;
 }
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

systemctl enable --now nginx
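A quick check that the layer-4 proxy is up and listening on 6443:
nginx -t
ss -lntp | grep 6443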

Install and configure keepalived for high-availability failover
yum -y install keepalived

Write the nginx health-check script
vim check_nginx.sh 

#!/bin/bash
if ! killall -0 nginx
  then
  systemctl stop keepalived
fi
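Assuming the script was created in the current directory, make it executable and put it where the keepalived configuration below expects it (/etc/keepalived/check_nginx.sh):
chmod +x check_nginx.sh
cp check_nginx.sh /etc/keepalived/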

Edit the keepalived configuration file (on the BACKUP node use state BACKUP, a lower priority, and its own router_id)
vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_02
}
vrrp_script check_nginx {
   interval 2
   script "/etc/keepalived/check_nginx.sh"
}  
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }   
    virtual_ipaddress {
        192.168.111.100
    }   
    track_script {
        check_nginx
    }
}
Use ip a to check whether the VIP has been created on the MASTER node
ip a
Stop the nginx service on the MASTER node and check whether the VIP fails over to the BACKUP node
systemctl stop nginx
Check on the BACKUP node
ip a
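To move the VIP back after the test (the health-check script stopped keepalived when nginx went down), restart both services on the original MASTER node:
systemctl start nginx
systemctl start keepalived
ip a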

2.5 Deploy the K8S cluster

Create the cluster initialization configuration file on the master01 node
kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml
......
11 localAPIEndpoint:
12   advertiseAddress: 192.168.111.5		#IP address of the current master node
13   bindPort: 6443

21 apiServer:
22   certSANs:								#add a certSANs list under apiServer with the IPs of all master nodes and the cluster VIP
23   - 192.168.111.100
24   - 192.168.111.5
25   - 192.168.111.6
26   - 192.168.111.7

30 clusterName: kubernetes
31 controlPlaneEndpoint: "192.168.111.100:6443"		#the cluster VIP address
32 controllerManager: {}

38 imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers			#image repository to pull from
39 kind: ClusterConfiguration
40 kubernetesVersion: v1.20.15				#kubernetes version
41 networking:
42   dnsDomain: cluster.local
43   podSubnet: "10.244.0.0/16"				#pod network CIDR; 10.244.0.0/16 matches the flannel default
44   serviceSubnet: 10.96.0.0/16			#service network CIDR
45 scheduler: {}
#append the following at the end of the file
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs									#change the default kube-proxy mode to ipvs

Supplement: you can instead point the cluster at an external etcd (internal and external etcd are mutually exclusive):
etcd:
  external:
    endpoints:
    - https://192.168.80.10:2379
    - https://192.168.80.11:2379
    - https://192.168.80.12:2379
    caFile: /opt/etcd/ssl/ca.pem
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem

Pull the images on the master nodes
for i in master02 master03; do scp /opt/kubeadm-config.yaml $i:/opt/; done
#the worker nodes will pull their images automatically after joining the cluster
kubeadm config images pull --config /opt/kubeadm-config.yaml
docker images

Initialize on the master01 node
kubeadm init --config kubeadm-config.yaml --upload-certs | tee kubeadm-init.log
[init] Using Kubernetes version: v1.20.15
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 25.0.3. Latest validated version: 19.03
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master01] and IPs [10.96.0.1 192.168.111.5 192.168.111.100 192.168.111.6 192.168.111.7]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master01] and IPs [192.168.111.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master01] and IPs [192.168.111.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.801874 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4e923d767813d1f53185722eca61fb44ee2ba4336c372c9cae4f96b2c3a66e7c
[mark-control-plane] Marking the node master01 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.111.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4f6e7fe908d8dcda265a26a95a5db9f0c8d64d04bd3a31b54964b3488daa8320 \
    --control-plane --certificate-key 4e923d767813d1f53185722eca61fb44ee2ba4336c372c9cae4f96b2c3a66e7c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.111.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4f6e7fe908d8dcda265a26a95a5db9f0c8d64d04bd3a31b54964b3488daa8320
#if the initialization fails, run the following
kubeadm reset -f
ipvsadm --clear 
rm -rf ~/.kube
then run the initialization again

Configure the environment on the master01 node
Set up kubectl
kubectl must be authenticated and authorized by the API server before it can perform management operations. The kubeadm-deployed cluster generates an admin kubeconfig file, /etc/kubernetes/admin.conf, with administrator privileges, which kubectl loads from the default path "$HOME/.kube/config".
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Modify the controller-manager and scheduler manifests (for K8S 1.20 this usually means commenting out the --port=0 flag so that kubectl get cs reports both components as Healthy; see the sketch below)
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
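A minimal non-interactive sketch of that change, assuming the goal is to comment out the --port=0 flag in both manifests:
sed -i '/- --port=0/ s/^/#/' /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i '/- --port=0/ s/^/#/' /etc/kubernetes/manifests/kube-controller-manager.yaml
systemctl restart kubelet
kubectl get cs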

Join the remaining nodes to the cluster
Join the other master nodes
kubeadm join 192.168.111.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4f6e7fe908d8dcda265a26a95a5db9f0c8d64d04bd3a31b54964b3488daa8320 \
    --control-plane --certificate-key 4e923d767813d1f53185722eca61fb44ee2ba4336c372c9cae4f96b2c3a66e7c

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
#join the worker (node) nodes
kubeadm join 192.168.111.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:4f6e7fe908d8dcda265a26a95a5db9f0c8d64d04bd3a31b54964b3488daa8320

Check the cluster information on master01
kubectl get nodes

Deploy the flannel CNI network plugin
Upload the flannel image flannel.tar and the CNI plugins package cni-plugins-linux-amd64-v0.8.6.tgz to the /opt directory on all nodes, and upload the kube-flannel.yml file to the master nodes
cd /opt
docker load < flannel.tar

mv /opt/cni /opt/cni_bak
mkdir -p /opt/cni/bin
tar zxvf cni-plugins-linux-amd64-v0.8.6.tgz -C /opt/cni/bin

kubectl apply -f kube-flannel.yml 

kubectl get pods -A    #wait until every component is Running, then check the node status again

Test creating a pod resource
kubectl create deployment nginx --image=nginx
kubectl get pods -o wide

Expose a port to provide the service
kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get svc

Test access
curl http://node01:32698

Scale out to 3 replicas and test load balancing (see the check below the commands)
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide
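A simple way to observe the load balancing (32698 is the NodePort from the earlier example; substitute the port shown by kubectl get svc). The requests are spread across the three pods, which you can confirm in each pod's access log via kubectl logs:
for i in $(seq 1 6); do curl -s -o /dev/null -w "%{http_code}\n" http://node01:32698; done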

2.6 kubeadm installation supplement

Alternative way to initialize with kubeadm (command-line flags)
kubeadm init \
--apiserver-advertise-address=192.168.111.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.20.15 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16 \
--token-ttl=0
--------------------------------------------------------------------------------------------
Initializing the cluster is done with the kubeadm init command; you can either pass the parameters on the command line or point it at a configuration file.
Optional parameters:
--apiserver-advertise-address: the IP address the apiserver advertises to the other components, normally the master node's IP used for intra-cluster communication; 0.0.0.0 means all addresses available on the node
--apiserver-bind-port: the apiserver listening port, default 6443
--cert-dir: directory for the SSL certificates, default /etc/kubernetes/pki
--control-plane-endpoint: the shared endpoint of the control plane, either a load-balancer IP or a DNS name; required for a highly available cluster
--image-repository: the registry to pull images from, default k8s.gcr.io
--kubernetes-version: the kubernetes version to install
--pod-network-cidr: the pod network CIDR, which must match the network plugin's setting; flannel defaults to 10.244.0.0/16, Calico defaults to 192.168.0.0/16
--service-cidr: the service network CIDR
--service-dns-domain: the suffix of service FQDNs, default cluster.local
--token-ttl: the token is valid for 24 hours by default; pass --token-ttl=0 if you do not want it to expire

After initializing with method two, you also need to edit the kube-proxy configmap to enable ipvs (a non-interactive sketch follows the command)
kubectl edit configmap kube-proxy -n kube-system
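A sketch of the same change without an interactive editor (assuming the configmap still has the default mode: ""), followed by recreating the kube-proxy pods so the new mode takes effect:
kubectl -n kube-system get configmap kube-proxy -o yaml | sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -
kubectl -n kube-system delete pods -l k8s-app=kube-proxy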

2.7 Common K8S issues

1) The token for joining the cluster has expired

Note: the token generated at cluster initialization is valid for 24 hours. After it expires, generate a new token and join the cluster again; the newly generated token is valid for 2 hours.

1.1 Generate a token for a node to join the cluster

kubeadm token create --print-join-command
kubeadm join 192.168.80.100:16443 --token menw99.1hbsurvl5fiz119n --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98

1.2 Generate the --certificate-key for a master node to join the cluster

kubeadm init phase upload-certs --upload-certs
I1105 12:33:08.201601   93226 version.go:254] remote version is much newer: v1.22.3; falling back to: stable-1.20
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb

#command for a master node to join the cluster
kubeadm join 192.168.80.100:16443 --token menw99.1hbsurvl5fiz119n --discovery-token-ca-cert-hash sha256:e76e4525ca29a9ccd5c24142a724bdb6ab86512420215242c4313fb830a4eb98 \
    --control-plane --certificate-key 38dba94af7a38700c3698b8acdf8e23f273be07877f5c86f4977dc023e333deb

2) The master nodes cannot run non-system Pods

Explanation: the master nodes carry a taint that prevents non-system Pods from being scheduled on them. In a test environment the taint can be removed to free up the resources and improve utilization.

2.1 View the taints

kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule
Taints:             node-role.kubernetes.io/master:NoSchedule

2.2 Remove the taints

kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master:NoSchedule-
node/master01 untainted
node/master02 untainted
node/master03 untainted

kubectl describe node -l node-role.kubernetes.io/master= | grep Taints
Taints:             <none>
Taints:             <none>
Taints:             <none>
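If the taint needs to be restored later (for example outside of a test environment), a sketch:
kubectl taint node -l node-role.kubernetes.io/master node-role.kubernetes.io/master=:NoSchedule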

3) Change the default NodePort range

Background: by default K8S allocates external (NodePort) ports from a range starting at around 30000. You can change this default range by modifying the apiserver configuration file.

Error message

The Service "nginx-svc" is invalid: spec.ports[0].nodePort: Invalid value: 80: provided port is not in the valid range. The range of valid ports is 30000-32767

[root@k8s-master1 ~]# vim /etc/kubernetes/manifests/kube-apiserver.yaml

    - --service-cluster-ip-range=10.96.0.0/16
    - --service-node-port-range=1-65535		#find the line above and add this flag right after it

No restart is needed; K8S picks up the change automatically (the kubelet re-creates the apiserver static Pod when its manifest changes)
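With the range widened, the Service from the error message above can now use a low NodePort. A hedged example (the name nginx-svc and the app=nginx selector are assumptions matching the test deployment created earlier):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
    nodePort: 80      #allowed now that --service-node-port-range=1-65535
EOF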
4) Deployment configuration with an external etcd

kubeadm config print init-defaults > /opt/kubeadm-config.yaml

cd /opt/
vim kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.80.14
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 10.96.0.1
  - 127.0.0.1
  - localhost
  - kubernetes
  - kubernetes.default
  - kubernetes.default.svc
  - kubernetes.default.svc.cluster.local
  - 192.168.80.100
  - 192.168.80.10
  - 192.168.80.11
  - 192.168.80.12
  - master01
  - master02
  - master03
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.80.100:16443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  external:                                #use an external etcd cluster
    endpoints:
    - https://192.168.80.10:2379
    - https://192.168.80.11:2379
    - https://192.168.80.12:2379
    caFile: /opt/etcd/ssl/ca.pem           #copy the etcd certificates to all master nodes
    certFile: /opt/etcd/ssl/server.pem
    keyFile: /opt/etcd/ssl/server-key.pem
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.15
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
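Initialization with this external-etcd configuration then proceeds exactly as before:
kubeadm init --config /opt/kubeadm-config.yaml --upload-certs | tee kubeadm-init.log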

3 K8S graphical interface

3.1 Installing the Dashboard

Operate on the master01 node

#upload the recommended.yaml file to the /opt/k8s directory
cd /opt/k8s
vim recommended.yaml
#By default the Dashboard can only be reached from inside the cluster; change the Service to NodePort to expose it externally:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001     #add
  type: NodePort          #add
  selector:
    k8s-app: kubernetes-dashboard
	
kubectl apply -f recommended.yaml

kubectl get pods -A

Create a service account and bind it to the default cluster-admin administrator cluster role
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
kubectl describe secrets -n kube-system $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}')

Test access from a browser
If the page will not open, try 360 Browser (many browsers reject the Dashboard's self-signed certificate)
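Access example (assuming the NodePort 30001 set above and node01's IP from the hosts file; sign in with the token printed by the kubectl describe secrets command):
#extract just the token
kubectl -n kube-system describe secrets $(kubectl -n kube-system get secret | awk '/dashboard-admin/{print $1}') | awk '/^token/{print $2}'
#then open https://192.168.111.8:30001 in the browser and log in with that token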