Deploying a two-node Kubernetes v1.13.3 cluster with kubeadm + calico v3.3.4

Date: 2022-02-05 01:40:43

I. Deployment environment

  • VMware Workstation 10
  • CentOS 7

II. Host configuration (do this on every host)

Hostname    IP               CPU    RAM
master      192.168.137.10   -      3G
node1       192.168.137.11   -      3G

1. Add the following entries to /etc/hosts on every host:

192.168.137.10 master
192.168.137.11 node1

2. Disable the firewall, SELinux, and swap

systemctl stop firewalld
systemctl disable firewalld

Edit /etc/selinux/config and set:

SELINUX=disabled

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
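A quick way to confirm swap is fully off:

free -m               # the Swap line should show 0 total / 0 used / 0 free
grep swap /etc/fstab  # the swap entry should now be commented out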

3. Set up passwordless SSH between the two hosts

1) Make sure key-based SSH login is enabled: uncomment the following line in /etc/ssh/sshd_config. Do this on every server.

#PubkeyAuthentication yes

Then restart the ssh service:

systemctl restart sshd

2) Run ssh-keygen -t rsa in /root and press Enter through every prompt. Do this on both machines.

[root@master ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:aMUO8b/EkylqTMb9+71ePnQv0CWQohsaMeAbMH+t87M root@master
The key's randomart image is:
+---[RSA 2048]----+
| o ... . |
| = o= . o |
| + oo=. . . |
| =.Boo o . .|
| . OoSoB . o |
| =.+.+ o. ...|
| + o o .. +|
| . o . ..+.|
| E ....+oo|
+----[SHA256]-----+

3) On master, append the public key to the authorized_keys file:

[root@master ~]# cd /root/.ssh/
[root@master .ssh]# cat id_rsa.pub>> authorized_keys

4) Copy master's authorized_keys to node1:

scp /root/.ssh/authorized_keys root@192.168.137.11:/root/.ssh/

Test it: from master you can now log in by IP without a password; when using the hostname you still have to type yes once to accept the host key, after which it is no longer asked.

[root@master]# ssh master
The authenticity of host 'master (192.168.137.10)' can't be established.
ECDSA key fingerprint is 5c:c6::::::7c:d0:c6::8d:ff:bd:5f:ef.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.137.10' (ECDSA) to the list of known hosts.
Last login: Mon Dec :: from 192.168.137.1
[root@master]# ssh node1
The authenticity of host 'node1 (192.168.137.11)' can't be established.
ECDSA key fingerprint is 8f:::db:d8:3e:9e:::ba::7a:6b:aa:5e:e2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node1' (ECDSA) to the list of known hosts.
Last login: Mon Dec :: from master

4. Load the bridge kernel module

modprobe bridge
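On some CentOS 7 kernels the net.bridge.bridge-nf-call-* keys set in the next step only exist once br_netfilter is loaded. If sysctl -p complains about unknown keys, loading it as below is a reasonable fix (a sketch; this step is not part of the original procedure):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # make it persistent across reboots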

5. Configure kernel parameters

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF

Apply the file:

sysctl -p /etc/sysctl.d/k8s.conf

6. Raise the resource limits: the per-user ulimit open-file/process limits and the defaults for services managed by systemd

echo "* soft nofile 655360" >> /etc/security/limits.conf
echo "* hard nofile 655360" >> /etc/security/limits.conf
echo "* soft nproc 655360" >> /etc/security/limits.conf
echo "* hard nproc 655360" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf
echo "DefaultLimitNOFILE=1024000" >> /etc/systemd/system.conf
echo "DefaultLimitNPROC=1024000" >> /etc/systemd/system.conf

Background on the two kinds of limits (the original notes reference AIX, but the idea is the same on Linux): a hard limit is set by the system administrator; only privileged users can raise it, while an ordinary user may lower it, and such a change is lost when the user logs out. A soft limit is the ceiling the kernel actually enforces on a process; anyone may change it, but never above the hard limit, and only an administrator can make the change permanent.

1) soft nofile and hard nofile are the per-user soft and hard limits on open files; for example, a soft limit of 1000 and a hard limit of 1200 mean a single user can have at most 1000 files open, no matter how many shells it starts.

2) soft nproc and hard nproc are the soft and hard limits on the number of processes a single user may run.

3) memlock is the maximum amount of physical memory a task may lock (set to unlimited here).
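Note that limits.conf only applies to new login sessions, and the systemd defaults need the manager to be re-executed (or a reboot). A minimal check, assuming you have logged in again afterwards:

systemctl daemon-reexec   # pick up DefaultLimitNOFILE / DefaultLimitNPROC
ulimit -n                 # expect 655360
ulimit -u                 # expect 655360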

7. Configure domestic (Aliyun) mirrors for the base yum repo, the EPEL repo, and the Kubernetes repo

cp -r /etc/yum.repos.d/ /etc/yum-repos-d-bak
yum install -y wget
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
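Before installing anything you can confirm the new repos are usable (a simple sanity check):

yum repolist enabled | grep -i kubernetes
yum list kubeadm --showduplicates | tail -n 5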

8. Install the remaining dependency packages

yum install  -y conntrack ipvsadm ipset jq sysstat curl iptables libseccomp bash-completion yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools vim libtool-ltdl

9. Configure time synchronization

yum install chrony -y

Edit /etc/chrony.conf:

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.137.10  iburst

Comment out the original server entries and point the node at the master for time synchronization instead.
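The change above is the client side (node1 pointing at master). For master to actually serve time to the node, its own /etc/chrony.conf normally also needs an allow directive for the lab subnet, roughly like this (hedged example; adjust the subnet to your network):

echo "allow 192.168.137.0/24" >> /etc/chrony.conf   # master only
systemctl restart chronyd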

rm -rf /etc/localtime
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
echo 'Asia/Shanghai' >/etc/timezone
systemctl enable chronyd.service
systemctl start chronyd.service
chronyc sources

III. Install Docker (both hosts)

1. Remove any old Docker installation

1) Check which docker packages are installed:

yum list installed | grep docker

2) If any are listed, remove them with yum remove.

3) Delete the old Docker data:

rm -rf /var/lib/docker

2. Add the Docker yum repository

yum-config-manager  --add-repo  https://download.docker.com/linux/centos/docker-ce.repo

3. List the available versions

yum list docker-ce --showduplicates | sort -r

4. Install version 18.06.1 (note: do not install the newest release, in particular 18.06.3, which causes errors later when initializing the master)

yum install -y docker-ce-18.06.1.ce-3.el7

5. Configure a registry mirror (accelerator) and the Docker data directory

Create /etc/docker/daemon.json:

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["https://s5klxlmp.mirror.aliyuncs.com"],
"graph": "/home/docker-data"
}
EOF

Note: https://s5klxlmp.mirror.aliyuncs.com is the accelerator address obtained after logging in to the Aliyun console; substitute your own.
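After Docker has been restarted in the next step, you can confirm both settings took effect; docker info reports the data directory and the registry mirrors:

docker info | grep -A1 -E 'Docker Root Dir|Registry Mirrors'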


6. Start Docker

systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl status docker

If you see an error like the following:

[root@node1 ~]# journalctl -xe
...
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.629882317+08:00" level=warning msg="Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man dockerd` to refer to dm.thinpooldev section." storage-driver=devicemapper
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.775919807+08:00" level=info msg="Creating filesystem xfs on device docker-253:1-201421627-base, mkfs args: [-m crc=0,finobt=0 /dev/mapper/docker-253:1-201421627-base]" storage-driver=devicemapper
Mar 04 21:22:21 node1 dockerd[3925]: time="2019-03-04T21:22:21.776837868+08:00" level=info msg="Error while creating filesystem xfs on device docker-253:1-201421627-base: exit status 1" storage-driver=devicemapper
Mar 04 21:22:21 node1 dockerd[3925]: Error starting daemon: error initializing graphdriver: exit status 1
Mar 04 21:22:21 node1 systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Mar 04 21:22:21 node1 systemd[1]: Failed to start Docker Application Container Engine.
Mar 04 21:22:21 node1 systemd[1]: Unit docker.service entered failed state.
Mar 04 21:22:21 node1 systemd[1]: docker.service failed.

then run the following:

yum update xfsprogs -y
systemctl start docker.service
systemctl enable docker.service
systemctl status docker.service

IV. Install kubeadm, kubelet, and kubectl (both hosts)

yum install -y kubelet-1.13.3 kubeadm-1.13.3 kubectl-1.13.3 --disableexcludes=kubernetes

(--disableexcludes=kubernetes tells yum to ignore any exclude rules configured for that repo.)

Edit the kubelet configuration:

sed -i "s/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS=\"--fail-swap-on=false\"/" /etc/sysconfig/kubelet

Enable and start kubelet:

systemctl enable kubelet
systemctl start kubelet


The kubelet service will not start successfully yet; ignore that for now.
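If you want to see why, the kubelet logs simply show it waiting for the configuration that kubeadm init (or kubeadm join) will write later, so the failure is harmless at this point:

systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20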

V. Download the images (master only)

1. Generate the default configuration

kubeadm config print init-defaults > /root/kubeadm.conf

2. Edit /root/kubeadm.conf and switch imageRepository to the Aliyun mirror registry.aliyuncs.com/google_containers

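The screenshot that was here showed the edited fields. A hedged sketch of the same edits, assuming the default v1beta1 layout printed by kubeadm config print init-defaults in v1.13 (the podSubnet value is the pod network used for calico later):

sed -i 's|imageRepository: .*|imageRepository: registry.aliyuncs.com/google_containers|' /root/kubeadm.conf
sed -i 's|kubernetesVersion: .*|kubernetesVersion: v1.13.3|' /root/kubeadm.conf
sed -i 's|podSubnet: .*|podSubnet: "10.244.0.0/16"|' /root/kubeadm.conf   # optional; can also be passed to kubeadm init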

3. Pull the images

kubeadm config images pull --config /root/kubeadm.conf
[root@master ~]# docker images | grep ali
registry.aliyuncs.com/google_containers/kube-proxy                 v1.13.3   8fa56d18961f
registry.aliyuncs.com/google_containers/kube-scheduler             v1.13.3   9508b7d8008d
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.13.3   d82530ead066
registry.aliyuncs.com/google_containers/kube-apiserver             v1.13.3   f1ff9b7e3d6e
registry.aliyuncs.com/google_containers/coredns                    1.2.6     f59dcacceff4
registry.aliyuncs.com/google_containers/etcd                       3.2.24    3cab8e1b9802
registry.aliyuncs.com/google_containers/pause                      3.1       da86e6ba6ca1

Tag the images with their k8s.gcr.io names:

docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.13.3 k8s.gcr.io/kube-controller-manager:v1.13.3
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.13.3 k8s.gcr.io/kube-apiserver:v1.13.3
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.13.3 k8s.gcr.io/kube-scheduler:v1.13.3
docker tag registry.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi -f registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/kube-controller-manager:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/kube-apiserver:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/kube-scheduler:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/coredns:1.2.6
docker rmi -f registry.aliyuncs.com/google_containers/etcd:3.2.24
docker rmi -f registry.aliyuncs.com/google_containers/pause:3.1

VI. Deploy the master (master only)

1. Initialize the master node

kubeadm init --kubernetes-version=v1.13.3 --pod-network-cidr=10.244.0.0/16


If the command finishes without errors, the control plane has been deployed successfully.

2. To use kubectl as a regular user, run the following:

 mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Note the last line of the init output; it is needed later when joining the node to the master:

kubeadm join 192.168.137.10:6443 --token v6zife.f06w6ub82vsmi0ql --discovery-token-ca-cert-hash sha256:29a613c18f8f9aa655de7f59149757b0ee844ae1a3650e9cdf4875fddc080c76

You do not strictly need to save that line; the token and hash can also be recovered later as follows.

1) Get the token

[root@master ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
v6zife.f06w6ub82vsmi0ql 23h --12T20::26Z authentication,signing The default bootstrap token generated by 'kubeadm init'. system:bootstrappers:kubeadm:default-node-token

By default a token expires after 24 hours. Once it has expired, generate a new one with:

kubeadm token create

2) Get the CA certificate hash

[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der >/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
29a613c18f8f9aa655de7f59149757b0ee844ae1a3650e9cdf4875fddc080c76
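In v1.13 there is also a one-liner that prints a complete, ready-to-use join command, which avoids assembling the token and hash by hand:

kubeadm token create --print-join-command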

4. Verify

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   coredns-78d4cf999f-99fpq         0/1     Pending   0          22m
kube-system   coredns-78d4cf999f-cz8b6         0/1     Pending   0          22m
kube-system   etcd-master                      1/1     Running   0          21m
kube-system   kube-apiserver-master            1/1     Running   0          21m
kube-system   kube-controller-manager-master   1/1     Running   0          21m
kube-system   kube-proxy-56pxn                 1/1     Running   0          22m
kube-system   kube-scheduler-master            1/1     Running   0          21m

The coredns pods are stuck in Pending; ignore that for now (they are waiting for the pod network).


VII. Deploy the calico network (master only)

1. Download the manifests

1) Download rbac-kdd.yaml and apply it

curl https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml -O

The file downloaded this way may be the latest version and is not necessarily compatible with the versions installed here; make sure the manifest you apply matches calico v3.3.

Then run:

kubectl apply -f rbac-kdd.yaml

2) Download calico.yaml, modify it, and deploy it

curl https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml -O

Change typha_service_name (a consolidated sketch of this and the next two edits follows below).


By default calico runs in IPIP mode: a tunl0 interface is created on every node and the tunnel links the container networks of all nodes; the official docs recommend this when nodes sit on different IP subnets, for example hosts in different AWS regions.

Here we switch to BGP mode. Calico is installed as a DaemonSet on every node, each host runs a bird (BGP client) that advertises the IP ranges allocated to the nodes in the calico network to the other hosts, and traffic is forwarded directly through the host NIC (eth0 or ens160).

Change the calico-typha replicas count.


Change the pod CIDR so it matches the pod network used when initializing the master (podSubnet / --pod-network-cidr, 10.244.0.0/16).

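A consolidated, hedged sketch of the three calico.yaml edits described above (the sed patterns assume the stock v3.3 manifest; grep the file first and adjust if your copy differs):

sed -i 's|typha_service_name: "none"|typha_service_name: "calico-typha"|' calico.yaml   # enable typha
# set the calico-typha Deployment to one replica, i.e. make its line read:  replicas: 1
sed -i 's|value: "Always"|value: "Never"|' calico.yaml     # CALICO_IPV4POOL_IPIP: use BGP instead of the IPIP overlay
sed -i 's|192.168.0.0/16|10.244.0.0/16|' calico.yaml       # CALICO_IPV4POOL_CIDR: match the kubeadm pod network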

2. Pull the docker images calico needs; the exact versions can be read from calico.yaml


docker pull calico/node:v3.3.4
docker pull calico/cni:v3.3.4
docker pull calico/typha:v3.3.4

3. Apply calico.yaml

kubectl apply -f calico.yaml
[root@master ~]# kubectl get po --all-namespaces
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE
kube-system   calico-node-mnzxh                2/2     Running   0          5m51s
kube-system   calico-typha-64f566d6c9-j4rwc    0/1     Pending   0          5m51s
kube-system   coredns-86c58d9df4-67xbh         1/1     Running   0          36m
kube-system   coredns-86c58d9df4-t9xgt         1/1     Running   0          36m
kube-system   etcd-master                      1/1     Running   0          35m
kube-system   kube-apiserver-master            1/1     Running   0          35m
kube-system   kube-controller-manager-master   1/1     Running   0          35m
kube-system   kube-proxy-8xg28                 1/1     Running   0          36m
kube-system   kube-scheduler-master            1/1     Running   0          35m

calico-typha is not running yet because the worker node has not joined; ignore it for now.

VIII. Deploy the node (node only)

1. Pull the images the node needs

docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3
docker pull registry.aliyuncs.com/google_containers/pause:3.1
docker pull calico/node:v3.3.4
docker pull calico/cni:v3.3.4
docker pull calico/typha:v3.3.4

2. Re-tag the images

docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3 k8s.gcr.io/kube-proxy:v1.13.3
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker rmi -f registry.aliyuncs.com/google_containers/kube-proxy:v1.13.3
docker rmi -f registry.aliyuncs.com/google_containers/pause:3.1

3. Join the node to the cluster (the command comes from section VI, subsection 3)

kubeadm join 192.168.137.10:6443 --token v6zife.f06w6ub82vsmi0ql --discovery-token-ca-cert-hash sha256:29a613c18f8f9aa655de7f59149757b0ee844ae1a3650e9cdf4875fddc080c76
[root@node1 ~]# kubeadm join 192.168.137.10:6443 --token v6zife.f06w6ub82vsmi0ql --discovery-token-ca-cert-hash sha256:29a613c18f8f9aa655de7f59149757b0ee844ae1a3650e9cdf4875fddc080c76
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.137.10:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.137.10:6443"
[discovery] Requesting info from "https://192.168.137.10:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.137.10:6443"
[discovery] Successfully established connection with API Server "192.168.137.10:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node1" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Output like the above means the node joined the cluster successfully. On the master, run:

kubectl get nodes

Both nodes report Ready, so the cluster has been deployed successfully.

IX. Deploy the Dashboard (master only)

Since version 1.7 the Dashboard no longer ships with full admin privileges by default; all permissions are revoked and only the minimum required for the Dashboard to work is granted.

1. Before deploying the dashboard we need to generate certificates; otherwise HTTPS access/login will fail later.

mkdir -p /etc/kubernetes/certs
cd /etc/kubernetes/certs
[root@master certs]# openssl genrsa -des3 -passout pass:x -out dashboard.pass.key 2048
Generating RSA private key, 2048 bit long modulus
......+++
............+++
e is (0x10001)
[root@master certs]# openssl rsa -passin pass:x -in dashboard.pass.key -out dashboard.key
writing RSA key

For the next step just press Enter through all the prompts:

[root@master certs]# openssl req -new -key dashboard.key -out dashboard.csr
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:
An optional company name []:
[root@master certs]# openssl x509 -req -sha256 -days 365 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/C=XX/L=Default City/O=Default Company Ltd
Getting Private key

2. Create the secret

kubectl create secret generic kubernetes-dashboard-certs --from-file=/etc/kubernetes/certs -n kube-system
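A quick check that the secret landed in the namespace the dashboard expects:

kubectl -n kube-system get secret kubernetes-dashboard-certs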

3. Download kubernetes-dashboard.yaml

curl https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml -O

4. Comment out the Secret section in kubernetes-dashboard.yaml; we created our own above, so the bundled one is not needed


5. Change the image in the yaml so it is pulled from the Aliyun registry

Image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1


6. Change the Service in the yaml to type NodePort

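If you prefer not to edit the Service section by hand, an equivalent alternative (a hedged sketch, run only after the yaml from step 7 has been applied) is to patch the Service afterwards:

kubectl -n kube-system patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'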

7. Apply kubernetes-dashboard.yaml

kubectl apply -f kubernetes-dashboard.yaml

Check that it deployed successfully:

kubectl get pods -n kube-system

Check the Service:

kubectl get svc -n kube-system

8. Open the dashboard in Chrome at https://<NodeIP>:<NodePort> (use the NodePort shown by the svc output above).


The Dashboard supports two sign-in methods, Kubeconfig and Token. We use Token login here; to make that possible we first create a service account called admin-user.

1) On the master node, create dashboard-adminuser.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

Then run:

kubectl create -f dashboard-adminuser.yaml

Note: this creates a service account named admin-user in the kube-system namespace and binds the cluster-admin ClusterRole to it, so the admin-user account has administrator privileges. The cluster-admin role already exists by default in a kubeadm cluster, so binding to it is all that is needed.

2) Retrieve the admin-user token

kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')


Paste the token into the "Token" field in the browser and log in.


Note: for security reasons, Kubernetes does not schedule Pods onto the master node by default. If you want to use the master as a worker node as well, run:

kubectl taint node master node-role.kubernetes.io/master-

To restore the master-only behaviour, run:

kubectl taint node master node-role.kubernetes.io/master="":NoSchedule
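Either way, the current state is easy to confirm from the node's taints:

kubectl describe node master | grep -i taint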