5. Deploying a Kubernetes (k8s) Cluster from Binaries

Date: 2022-05-02 22:54:56

1 Kubernetes Components

1.1 Kubernetes Cluster Diagram

Official cluster architecture diagram:


1.2 Components and Their Functions

1.2.1 Control Plane Components

The control plane components make global decisions about the cluster (for example, scheduling) and detect and respond to cluster events.

For example, when the control plane detects that a Deployment's actual Pod count no longer satisfies its replicas field, it starts a new Pod.
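
To make this concrete, here is a minimal, hypothetical sketch (the demo-nginx name and the nginx:1.21 image are placeholders and not part of this deployment): a Deployment declaring replicas: 3 will have a deleted Pod replaced automatically.

# Create a Deployment that asks for 3 replicas
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-nginx
  template:
    metadata:
      labels:
        app: demo-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
EOF

# Delete one of the Pods; the controller notices fewer than 3 replicas and starts a replacement
kubectl delete pod $(kubectl get pod -l app=demo-nginx -o name | head -n 1)
kubectl get pod -l app=demo-nginx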

kube-apiserver

The k8s API Server exposes HTTP REST interfaces for creating, reading, updating, deleting, and watching all kinds of Kubernetes resource objects (Pod, RC, Service, and so on); it is the data bus and data hub of the whole system.

Functions of the Kubernetes API Server (a short REST example follows the list below):

  • Provides the REST API for cluster management (including authentication and authorization, data validation, and cluster state changes);
  • Serves as the hub for data exchange and communication between the other components (they query or modify data through the API Server; only the API Server talks to etcd directly);
  • Is the entry point for resource quota control;
  • Provides a complete cluster security mechanism.
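
To illustrate the "REST interface" point, kubectl can issue raw HTTP requests against the apiserver (a small sketch; kubectl get --raw simply sends an authenticated GET to the given path):

# List namespaces through the raw REST path instead of 'kubectl get ns'
kubectl get --raw /api/v1/namespaces

# A watch is also just an HTTP request with ?watch=true
kubectl get --raw '/api/v1/namespaces/default/pods?watch=true&timeoutSeconds=5'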

etcd

etcd is a consistent and highly available key-value store used as the backing store for all Kubernetes cluster data.
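
Once the cluster built later in this document is running, the objects the apiserver persists can be seen directly under etcd's /registry prefix (a sketch that reuses the endpoint and certificate paths from the etcd health check in section 3.3.3; run it on an etcd node):

# List a few of the keys the apiserver stores in etcd
ETCDCTL_API=3 /usr/local/bin/etcdctl \
  --endpoints=https://192.168.2.14:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  get /registry --prefix --keys-only | head -n 20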

kube-scheduler

From all nodes in the cluster, the scheduler first selects, according to its scheduling algorithm, the nodes on which the Pod can run, and then picks the optimal node from that set as the final result.

The scheduler runs on the master nodes. Its core job is to watch the apiserver for Pods whose PodSpec.NodeName is empty, create a binding for each such Pod indicating which node it should run on, and write the scheduling result back to the apiserver.
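
In other words, an unscheduled Pod is simply one whose spec.nodeName is still empty; a hedged sketch of checking this with kubectl (net-test1 is the test pod created later in section 3):

# Pods that have not been scheduled yet (spec.nodeName is still empty)
kubectl get pods --all-namespaces --field-selector spec.nodeName=

# After scheduling, the chosen node appears in the Pod spec
kubectl get pod net-test1 -o jsonpath='{.spec.nodeName}'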

kube-controller-manager

As the management and control center inside the cluster, it is responsible for managing Nodes, Pod replicas, service Endpoints, Namespaces, ServiceAccounts, and ResourceQuotas. When a Node unexpectedly goes down, the Controller Manager notices promptly and runs the automated repair flow, keeping the cluster in its desired state.

cloud-controller-manager

This is the key component that connects Kubernetes to the capabilities offered by cloud providers, also known as the Kubernetes cloud provider. Through it, creating a Service of type LoadBalancer can automatically provision, for example, an Alibaba Cloud SLB for the user, dynamically bind and unbind SLB backends, and expose rich configuration options for customizing the generated LoadBalancer.

If you run Kubernetes in your own environment, or run a learning environment on a local machine, the deployed cluster does not need this component.

1.2.2 Node Components

Node components run on every node, maintaining running Pods and providing the Kubernetes runtime environment.

kubelet

An agent that runs on every node in the cluster. It makes sure that containers are running in Pods.

The kubelet receives a set of PodSpecs provided through various mechanisms and ensures that the containers described in those PodSpecs are running and healthy. The kubelet does not manage containers that were not created by Kubernetes.

kube-proxy

kube-proxy is a network proxy that runs on each node in the cluster and implements part of the Kubernetes Service concept.

kube-proxy maintains network rules on the node. These rules allow network communication with Pods from sessions inside or outside the cluster.

If the operating system provides a packet filtering layer and it is available, kube-proxy uses it to implement the network rules; otherwise, kube-proxy forwards the traffic itself.
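
For example, with the ipvs proxy mode configured later in this deployment, the rules kube-proxy programs can be inspected on any node (a sketch; ipvsadm must be installed on the node and the output depends on which Services exist):

# iptables mode: count the per-Service chains created by kube-proxy
iptables-save | grep -c KUBE-SVC

# ipvs mode: one virtual server per Service IP:port, real servers are the Pod endpoints
ipvsadm -Ln | head -n 20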

Container runtime

The container runtime is the software responsible for running containers.

Kubernetes supports several container runtimes: Docker, containerd, CRI-O, and any implementation of the Kubernetes CRI (Container Runtime Interface).

1.2.3 Addons

Addons use Kubernetes resources (DaemonSet, Deployment, and so on) to implement cluster features. Because they provide cluster-level functionality, namespaced addon resources live in the kube-system namespace.

DNS

Although the other addons are not strictly required, almost every Kubernetes cluster should have cluster DNS, since many examples depend on a DNS service.

Cluster DNS is a DNS server that works alongside the other DNS servers in your environment and serves DNS records for Kubernetes Services.

Containers started by Kubernetes automatically include this DNS server in their DNS search list.

For example: CoreDNS.
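
The injected DNS configuration can be seen from inside any running container (a sketch assuming the net-test1 pod created later in section 3; the nameserver is the kube-dns Service IP inside SERVICE_CIDR and the search domains use the fx.local cluster domain configured for this cluster):

kubectl exec net-test1 -- cat /etc/resolv.conf
# expected to look roughly like:
#   nameserver 10.0.0.2
#   search default.svc.fx.local svc.fx.local fx.local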

Web UI (Dashboard)

Dashboard is a general-purpose, web-based user interface for Kubernetes clusters. It offers basic cluster management and configuration, plus a view of the cluster's running state.

Container Resource Monitoring

Container resource monitoring records common time-series metrics about containers in a central database and provides a UI for browsing that data.

Cluster-level Logging

A cluster-level logging mechanism saves container log data to a central log store that provides search and browsing interfaces.

2 The Kubernetes Pod Creation Workflow

  1. kubectl sends a create pod request to the k8s api server (i.e. this is what happens when we run a kubectl create pod command).

  2. When the k8s api server receives the pod creation request, it does not create the pod directly;

    instead it generates a YAML object containing the creation information.

  3. The apiserver writes that YAML information into the etcd database.

    At this point only a record has been added to etcd; nothing substantive has happened yet.

  4. The scheduler watches the k8s api, which acts like a notification mechanism.

    It first checks: pod.spec.Node == null? If it is null, this Pod request is new and needs to be scheduled;

    so it runs the scheduling computation and finds the most suitable (least loaded) node.

    It then updates the assignment result in etcd: pod.spec.Node = nodeA (a concrete node).

    As before, all of the information from these operations is written to etcd.

  5. The kubelet, watching the apiserver (which in turn reads the records stored in etcd), notices that a Pod record now has a Node assigned;

    if the Node in that record matches its own (i.e. the scheduler assigned this Pod to it),

    it calls the node's docker API and creates the container (the whole sequence can also be observed from the event stream, as sketched after the references below).

    Source: https://www.cnblogs.com/chaojiyingxiong/p/14146431.html

    In addition, the article "kube-scheduler原理介绍及分析" (an introduction to and analysis of how kube-scheduler works) is very well written and is recorded here for future reference: https://blog.csdn.net/li_101357/article/details/89980217
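
Once the cluster is up, the same flow can be observed from the event stream (a small sketch; flow-test is a throwaway pod name and nginx:1.21 a placeholder image):

# Create a pod and watch the scheduler/kubelet events appear in order
kubectl run flow-test --image=nginx:1.21 --restart=Never
kubectl get events --sort-by=.metadata.creationTimestamp | grep flow-test
# typical sequence: Scheduled -> Pulling -> Pulled -> Created -> Started
kubectl delete pod flow-test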

3 Deploying a Kubernetes Cluster from Binaries

The deployment below uses the kubeasz tool and follows its documentation. The project is hosted on GitHub: https://github.com/easzlab/kubeasz

3.1 Server Planning and Initial Configuration

3.1.1 Cluster Plan

| Role | Count | Description |
|--|--|--|
| deploy node | 1 | runs the ansible/ezctl commands |
| master node | 3 | a highly available cluster needs at least 2 master nodes |
| node (worker) | 3 | runs application workloads; raise machine specs or add nodes as needed |
| etcd node | 3 | the etcd cluster needs 1, 3, 5, ... (an odd number of) nodes and is often co-located on the masters; dedicated nodes are used here |

3.1.2 Server Plan

| IP | hostname |
|--|--|
| 192.168.2.10 | k8s-deploy |
| 192.168.2.11 | k8s-master1 |
| 192.168.2.12 | k8s-master2 |
| 192.168.2.13 | k8s-master3 |
| 192.168.2.14 | k8s-etcd1 |
| 192.168.2.15 | k8s-etcd2 |
| 192.168.2.16 | k8s-etcd3 |
| 192.168.2.17 | k8s-node1 |
| 192.168.2.18 | k8s-node2 |
| 192.168.2.19 | k8s-node3 |

3.1.3 Hardware Specifications

Master nodes: 4 CPU cores / 8 GB RAM / 100 GB disk

Worker nodes: 8 CPU cores / 16 GB RAM / 100 GB disk recommended

Note: with the default configuration, containers and the kubelet use disk space under /var. If your disk partitioning is unusual, you can set the container/kubelet data directories in config.yml: CONTAINERD_STORAGE_DIR, DOCKER_STORAGE_DIR, KUBELET_ROOT_DIR.

3.1.4 Generate an SSH key on the deploy node and distribute it to the other nodes, so the deploy node can log in to them via SSH without a password.

root@k8-deploy:~# apt install sshpass -y
root@k8s-deploy:~# for i in `seq 11 19`;do sshpass -p yan123.. ssh-copy-id 192.168.2.$i -o StrictHostKeyChecking=no; done
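
The loop above assumes an SSH key pair already exists on the deploy node; if it does not, one can be generated non-interactively first (a hedged sketch using the default key path):

root@k8s-deploy:~# ssh-keygen -t rsa -b 2048 -N "" -f /root/.ssh/id_rsa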

3.1.5 Install pip3, then use pip3 to install ansible

root@k8s-deploy:~# apt install python3-pip -y
root@k8s-deploy:~# pip3 install ansible -i https://mirrors.aliyun.com/pypi/simple/
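
A quick sanity check that ansible landed in the PATH (the exact version depends on what pip resolves at install time):

root@k8s-deploy:~# ansible --version
root@k8s-deploy:~# python3 -c 'import ansible; print(ansible.__version__)'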

3.2 Configuring the kubeasz Tool

3.2.1 Download the kubeasz ezdown script

root@k8s-deploy:~# export release=3.1.0
root@k8s-deploy:~# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
root@k8s-deploy:~# chmod +x ./ezdown

3.2.2 Set the docker and k8s versions in the ezdown script

root@k8s-deploy:~# vim ezdown
DOCKER_VER=19.03.15
K8S_BIN_VER=v1.21.0

3.2.3 Download the project source, binaries, and offline images

root@k8-deploy:~# ./ezdown -D

3.3 Cluster Installation

3.3.1 Create a cluster configuration instance

root@k8-deploy:~# cd /etc/kubeasz/
root@k8-deploy:/etc/kubeasz# ./ezctl new k8s-fx01
2021-09-18 15:37:02 DEBUG generate custom cluster files in /etc/kubeasz/clusters/k8s-fx01
2021-09-18 15:37:02 DEBUG set version of common plugins
2021-09-18 15:37:03 DEBUG cluster k8s-fx01: files successfully created.
2021-09-18 15:37:03 INFO next steps 1: to config '/etc/kubeasz/clusters/k8s-fx01/hosts'
2021-09-18 15:37:03 INFO next steps 2: to config '/etc/kubeasz/clusters/k8s-fx01/config.yml'

3.3.2 Modify the cluster configuration files

After the cluster instance is created, a directory named after the cluster is created under /etc/kubeasz/clusters/, containing two configuration files.

The hosts file is modified as shown below. One master and one node are intentionally held back (not deployed) for now, so that adding nodes individually can be tested later.

[etcd]
192.168.2.14
192.168.2.15
192.168.2.16

# master node(s)
[kube_master]
192.168.2.11
192.168.2.12

# work node(s)
[kube_node]
192.168.2.17
192.168.2.18

# 192.168.1.8 and 192.168.1.170 are the two harbor servers; 192.168.1.110 is the harbor proxy VIP
[ex_lb]
192.168.1.8 LB_ROLE=backup EX_APISERVER_VIP=192.168.1.110 EX_APISERVER_PORT=8443
192.168.1.170 LB_ROLE=master EX_APISERVER_VIP=192.168.1.110 EX_APISERVER_PORT=8443

CLUSTER_NETWORK="calico"

# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.0.0.0/16"

# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.100.0.0/16"

NODE_PORT_RANGE="30000-65000"
CLUSTER_DNS_DOMAIN="fx.local"
bin_dir="/usr/local/bin"

The config.yml file is modified as follows:

# [containerd] base container (pause) image
SANDBOX_IMAGE: "192.168.1.110/k8s/easzlab-pause-amd64:3.4.1"

# [docker] insecure (HTTP) registries to trust
INSECURE_REG: '["127.0.0.1/8","192.168.1.110"]'

# maximum number of pods per node
MAX_PODS: 300

# automatic coredns installation
dns_install: "no"
ENABLE_LOCAL_DNS_CACHE: false

# automatic metrics-server installation
metricsserver_install: "no"

# automatic dashboard installation
dashboard_install: "no"

The easzlab-pause-amd64 image, which the configuration would otherwise pull from the official site, can be downloaded ahead of time, pushed to the local intranet harbor, and the configuration changed to point at the local harbor address:

docker pull easzlab/pause-amd64:3.4.1
docker tag easzlab/pause-amd64:3.4.1 192.168.1.110/k8s/easzlab-pause-amd64:3.4.1
docker push 192.168.1.110/k8s/easzlab-pause-amd64:3.4.1

3.3.3 Begin the step-by-step installation

View the ezctl step-by-step installation help:

root@k8-deploy:/etc/kubeasz# ./ezctl help setup
Usage: ezctl setup <cluster> <step>
available steps:
01 prepare to prepare CA/certs & kubeconfig & other system settings
02 etcd to setup the etcd cluster
03 container-runtime to setup the container runtime(docker or containerd)
04 kube-master to setup the master nodes
05 kube-node to setup the worker nodes
06 network to setup the network plugin
07 cluster-addon to setup other useful plugins
90 all to run 01~07 all at once
10 ex-lb to install external loadbalance for accessing k8s from outside
11 harbor to install a new harbor server or to integrate with an existed one

examples: ./ezctl setup test-k8s 01 (or ./ezctl setup test-k8s prepare)
./ezctl setup test-k8s 02 (or ./ezctl setup test-k8s etcd)
./ezctl setup test-k8s all
./ezctl setup test-k8s 04 -t restart_master

Modify the 01.prepare.yml playbook

vim playbooks/01.prepare.yml
# delete the following two lines:
- ex_lb
- chrony

01 - Create certificates and prepare the environment

This step mainly completes:

  • (optional) role:os-harden, optional system hardening to meet the Linux security baseline; see the upstream role for details
  • (optional) role:chrony, optional time synchronization across cluster nodes
  • role:deploy, creates the CA certificate and the kubeconfig files the cluster components use to access the apiserver
  • role:prepare, base system configuration, CA certificate distribution, and kubectl client installation

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 01

02 - Install the etcd cluster

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 02

After etcd is installed, check whether its status is healthy:

root@k8-etcd1:~# for i in `seq 14 16`;do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://192.168.2.${i}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health;done
https://192.168.2.14:2379 is healthy: successfully committed proposal: took = 23.942932ms
https://192.168.2.15:2379 is healthy: successfully committed proposal: took = 38.030463ms
https://192.168.2.16:2379 is healthy: successfully committed proposal: took = 25.813005ms

03 - Install the container runtime (docker)

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 03

04 - Install the kube_master nodes

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 04

Run kubectl get componentstatus to verify the main components on the master nodes:

root@k8-deploy:/etc/kubeasz# kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health":"true"}
etcd-0 Healthy {"health":"true"}
etcd-1 Healthy {"health":"true"}

05 - Install the kube_node nodes

kube_node nodes run the workloads in the cluster. The kube_master nodes must be deployed first. Each kube_node needs the following components:

  • kubelet: the main component on a kube_node
  • kube-proxy: service exposure and load balancing
  • haproxy: forwards requests to the multiple apiservers; see the HA-2x architecture for details
  • calico: configures the container network (or another network plugin)

Modify the template to set the kube-proxy proxy mode to ipvs:

vim /etc/kubeasz/roles/kube-node/templates/kube-proxy-config.yaml.j2
...
mode: "{{ PROXY_MODE }}"
ipvs:
  scheduler: rr

Install the kube_node nodes:

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 05

Verify the node status:

root@k8-deploy:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.2.11 Ready,SchedulingDisabled master 58m v1.21.0
192.168.2.12 Ready,SchedulingDisabled master 58m v1.21.0
192.168.2.17 Ready node 43s v1.21.0
192.168.2.18 Ready node 43s v1.21.0

06 - Install the network plugin

First manually download the images the network plugin needs, re-tag them, and push them to the local intranet harbor:

root@k8s-deploy:/etc/kubeasz# docker pull calico/cni:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker tag calico/cni:v3.15.3 192.168.1.110/k8s/calico-cni:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker push 192.168.1.110/k8s/calico-cni:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker pull calico/pod2daemon-flexvol:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker tag calico/pod2daemon-flexvol:v3.15.3 192.168.1.110/k8s/calico-pod2daemon-flexvol:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker push 192.168.1.110/k8s/calico-pod2daemon-flexvol:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker pull calico/node:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker tag calico/node:v3.15.3 192.168.1.110/k8s/calico-node:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker push 192.168.1.110/k8s/calico-node:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker pull calico/kube-controllers:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker tag calico/kube-controllers:v3.15.3 192.168.1.110/k8s/calico-kube-controllers:v3.15.3
root@k8s-deploy:/etc/kubeasz# docker push 192.168.1.110/k8s/calico-kube-controllers:v3.15.3

Edit the calico template so the image addresses point to the local intranet harbor; after the change:

root@k8-deploy:/etc/kubeasz# grep image roles/calico/templates/calico-v3.15.yaml.j2 -n
212: image: 192.168.1.110/k8s/calico-cni:v3.15.3
251: image: 192.168.1.110/k8s/calico-pod2daemon-flexvol:v3.15.3
262: image: 192.168.1.110/k8s/calico-node:v3.15.3
488: image: 192.168.1.110/k8s/calico-kube-controllers:v3.15.3

Run the installation:

root@k8-deploy:/etc/kubeasz# ./ezctl setup k8s-fx01 06

Verify after the installation completes:

root@k8-node1:~# calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 192.168.2.11 | node-to-node mesh | up | 10:39:20 | Established |
| 192.168.2.18 | node-to-node mesh | up | 10:39:23 | Established |
| 192.168.2.12 | node-to-node mesh | up | 10:39:25 | Established |
+--------------+-------------------+-------+----------+-------------+

IPv6 BGP status
No IPv6 peers found.

Manually create three pods to test the network:

root@k8-deploy:~# kubectl run net-test1 --image 192.168.1.110/test/alpine:v1 sleep 30000
pod/net-test1 created
root@k8-deploy:~# kubectl run net-test2 --image 192.168.1.110/test/alpine:v1 sleep 30000
pod/net-test2 created
root@k8-deploy:~# kubectl run net-test3 --image 192.168.1.110/test/alpine:v1 sleep 30000
pod/net-test3 created

root@k8-deploy:~# kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default net-test1 1/1 Running 0 25s 10.100.224.65 192.168.2.18 <none> <none>
default net-test2 1/1 Running 0 18s 10.100.172.193 192.168.2.17 <none> <none>
default net-test3 1/1 Running 0 14s 10.100.224.66 192.168.2.18 <none> <none>
kube-system calico-kube-controllers-85f8dc6778-4cdk4 1/1 Running 0 3d19h 192.168.2.17 192.168.2.17 <none> <none>
kube-system calico-node-6zb7v 1/1 Running 0 3d19h 192.168.2.18 192.168.2.18 <none> <none>
kube-system calico-node-ffmv2 1/1 Running 0 3d19h 192.168.2.11 192.168.2.11 <none> <none>
kube-system calico-node-m4npt 1/1 Running 0 3d19h 192.168.2.12 192.168.2.12 <none> <none>
kube-system calico-node-qx9lf 1/1 Running 0 3d19h 192.168.2.17 192.168.2.17 <none> <none>

root@k8-deploy:~# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 10.100.172.193
PING 10.100.172.193 (10.100.172.193): 56 data bytes
64 bytes from 10.100.172.193: seq=0 ttl=62 time=1.447 ms
64 bytes from 10.100.172.193: seq=1 ttl=62 time=1.234 ms
^C
--- 10.100.172.193 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.234/1.340/1.447 ms
/ # ping 10.100.224.66
PING 10.100.224.66 (10.100.224.66): 56 data bytes
64 bytes from 10.100.224.66: seq=0 ttl=63 time=0.310 ms
64 bytes from 10.100.224.66: seq=1 ttl=63 time=0.258 ms
^C
--- 10.100.224.66 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.258/0.284/0.310 ms

Install CoreDNS

Manually download the CoreDNS image:

root@k8-deploy:~# docker pull coredns/coredns:1.8.3

If the pull fails, save the image to a file by some other means first and then import it onto the cluster servers.

Download link for version 1.8.3: https://hub.docker.com/layers/coredns/coredns/1.8.3/images/sha256-95552cb6e83c78034bf6112a8e014932fb58e617aacf602997b10e80228fd697?context=explore

root@k8-deploy:~# docker load -i coredns-image-v1.8.3.tar.gz
85c53e1bd74e: Loading layer [==================================================>] 43.29MB/43.29MB
Loaded image: k8s.gcr.io/coredns/coredns:v1.8.3

Then re-tag the official image and push it to the local harbor registry:

root@k8s-deploy:~/k8s# docker tag k8s.gcr.io/coredns/coredns:v1.8.3 192.168.1.110/k8s/coredns:v1.8.3
root@k8s-deploy:~/k8s# docker push 192.168.1.110/k8s/coredns:v1.8.3

Prepare the coredns.yaml file:

root@k8-deploy:~# wget https://dl.k8s.io/v1.21.4/kubernetes.tar.gz

root@k8-deploy:~# tar xf kubernetes.tar.gz

root@k8-deploy:~# cd ~/kubernetes/cluster/addons/dns/coredns

root@k8-deploy:~/kubernetes/cluster/addons/dns/coredns# cp coredns.yaml.base coredns.yaml

Modify the following settings in coredns.yaml (the numbers are line numbers in the file):

63         kubernetes fx.local in-addr.arpa ip6.arpa {
67 forward . 223.5.5.5 {
120 image: 192.168.1.110/k8s/coredns:v1.8.3
124 memory: 256Mi
187 type: NodePort
201 targetPort: 9153
202 nodePort: 30009

Install CoreDNS:

root@k8-deploy:~# kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created

root@k8-deploy:~# kubectl get pod -A -o wide
kube-system coredns-778bbd987f-g42q8 1/1 Running 0 2m26s 10.100.172.194 192.168.2.17 <none> <none>

Exec into a pod and test name resolution:

root@k8-deploy:~# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping www.baidu.com
PING www.baidu.com (110.242.68.3): 56 data bytes
64 bytes from 110.242.68.3: seq=0 ttl=52 time=17.069 ms
64 bytes from 110.242.68.3: seq=1 ttl=52 time=17.331 ms
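
Resolving an external name shows the forwarder works; in-cluster Service names can be checked from the same pod as well (a sketch using the fx.local cluster domain set in the hosts file; busybox nslookup output varies by alpine version):

/ # nslookup kubernetes.default.svc.fx.local
/ # nslookup kube-dns.kube-system.svc.fx.local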

Install the Dashboard

First manually download the required images and push them to the local harbor:

root@k8s-deploy:~/k8s# docker pull kubernetesui/dashboard:v2.3.1
root@k8s-deploy:~/k8s# docker tag kubernetesui/dashboard:v2.3.1 192.168.1.110/k8s/kubernetesui-dashboard:v2.3.1
root@k8s-deploy:~/k8s# docker push 192.168.1.110/k8s/kubernetesui-dashboard:v2.3.1
root@k8s-master1:~# docker pull kubernetesui/metrics-scraper:v1.0.6
root@k8s-master1:~# docker tag kubernetesui/metrics-scraper:v1.0.6 192.168.1.110/k8s/kubernetesui-metrics-scraper:v1.0.6
root@k8s-master1:~# docker push 192.168.1.110/k8s/kubernetesui-metrics-scraper:v1.0.6

Modify the configuration file:

# If the download has problems, open the file in a browser and copy its contents into a file on the server.
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
mv recommended.yaml dashboard-v2.3.1.yaml

vim dashboard-v2.3.1.yaml
40 type: NodePort
41 ports:
42 - port: 443
43 targetPort: 8443
44 nodePort: 30002
192 image: 192.168.1.110/k8s/kubernetesui-dashboard:v2.3.1
277 image: 192.168.1.110/k8s/kubernetesui-metrics-scraper:v1.0.6

Install the dashboard:

root@k8-deploy:~# kubectl apply -f dashboard-v2.3.1.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

root@k8-deploy:~# kubectl get pod -A -o wide |grep dash
kubernetes-dashboard dashboard-metrics-scraper-7459c89f54-g27ls 1/1 Running 0 33s 10.100.224.69 192.168.2.18 <none> <none>
kubernetes-dashboard kubernetes-dashboard-dfcb6dcdb-2dzxd 1/1 Running 0 33s 10.100.224.68 192.168.2.18 <none> <none>

Prepare the account and RBAC configuration file used to obtain a token for logging in to the dashboard web UI:

root@k8-deploy:~# cat admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Create the account:

root@k8-deploy:~# kubectl apply -f admin-user.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Get the account's token:

root@k8-deploy:~# kubectl get secrets -A |grep admin
kubernetes-dashboard admin-user-token-7zrzk kubernetes.io/service-account-token 3 2m11s
root@k8-deploy:~# kubectl describe secrets admin-user-token-7zrzk -n kubernetes-dashboard
Name: admin-user-token-7zrzk
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: ebcd7707-19bf-45e5-96d4-d49c5fa4ac93

Type: kubernetes.io/service-account-token

Data
====
ca.crt: 1350 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjhtdjZFNVdBVlZZWEJyODE0bDdZYy1hb1BJNUxldVNzWG9haVZIQXZraDAifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTd6cnprIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlYmNkNzcwNy0xOWJmLTQ1ZTUtOTZkNC1kNDljNWZhNGFjOTMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.qjhBK2i-d-IRktF-w4q5xLgmyVPe2qCiMHJ09nBPJLnUZyYUSQdhYogSi7Gdc3M5NoRspoLRX-fxwYEsYxK3bZYoePk8zEbx8_WS87H9KncjUCRLrxGXjwiVkbVg4DJc1ewziRaEFUKIPCneuVksHDAEu3CBkqYMCYROIj7MLHIJKT1EzrzVG5IWoov0t6exNJKFkpxRovF1WvpDU2qXbFgkCjf_alm7PdoxeU-ACwqjVc_-5eXqOwKPh1MKHQT2Z7ZzvrKZhSlyDWXLAryPw2klpjZezxo5-Q0JFBtCqCRSl2pLvnLPBN6NfdT32Ej139_cXrqgFG5h4k8FvpGgUg
root@k8-deploy:~#
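
The two lookups above can be collapsed into a single command that extracts and base64-decodes the token (a hedged sketch; it relies on the secret that Kubernetes 1.21 still auto-creates for every ServiceAccount):

root@k8-deploy:~# kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d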

Log in to the dashboard web page with the token.


3.3.4 Add a master node

Check the current cluster node status:

root@k8-deploy:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.2.11 Ready,SchedulingDisabled master 8d v1.21.0
192.168.2.12 Ready,SchedulingDisabled master 8d v1.21.0
192.168.2.17 Ready node 8d v1.21.0
192.168.2.18 Ready node 8d v1.21.0

Add the master node:

root@k8-deploy:/etc/kubeasz# ./ezctl add-master k8s-fx01 192.168.2.13

Check the cluster node status again to verify the node was added successfully:

root@k8-deploy:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.2.11 Ready,SchedulingDisabled master 8d v1.21.0
192.168.2.12 Ready,SchedulingDisabled master 8d v1.21.0
192.168.2.13 Ready,SchedulingDisabled master 6m4s v1.21.0
192.168.2.17 Ready node 8d v1.21.0
192.168.2.18 Ready node 8d v1.21.0

3.3.5 Add a node

Command to add the node:

root@k8-deploy:/etc/kubeasz# ./ezctl add-node k8s-fx01 192.168.2.19

Check the cluster node status to verify the node was added successfully:

root@k8-deploy:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.2.11 Ready,SchedulingDisabled master 8d v1.21.0
192.168.2.12 Ready,SchedulingDisabled master 8d v1.21.0
192.168.2.13 Ready,SchedulingDisabled master 20m v1.21.0
192.168.2.17 Ready node 8d v1.21.0
192.168.2.18 Ready node 8d v1.21.0
192.168.2.19 Ready node 4m21s v1.21.0