Deploying Kubernetes 1.7.5 and Installing kubernetes-dashboard 1.7

Date: 2021-01-24 16:58:50

Deploying Kubernetes 1.7.5

Environment

  • OS: CentOS 7.2
  • CPU: 8 cores
  • Memory: 16G / 64G / 64G (one value per host)
  • Three hosts, with hostnames hd-22, hd-26, and hd-28; hd-22 is the master node

Preparation

System settings

  • Disable the firewall: systemctl stop firewalld && systemctl disable firewalld
  • Disable swap: swapoff -a (only lasts until reboot; see the sketch below)
  • Disable SELinux: setenforce 0 (only lasts until reboot; see the sketch below)
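
Note that swapoff -a and setenforce 0 only affect the running system. A minimal sketch for making both changes survive a reboot, assuming the default CentOS locations of /etc/selinux/config and a standard swap entry in /etc/fstab:

## make SELinux permissive permanently (assumes SELINUX=enforcing in the default config)
# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
## comment out the swap entry so it is not mounted again after reboot
# sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /etc/fstab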

Configure the yum repositories

docker

# cat > /etc/yum.repos.d/docker-main.repo <<EOF
[docker-main]
name=Docker main Repository
baseurl=https://get.daocloud.io/docker/yum-repo/main/centos/7
enabled=1
gpgcheck=1
gpgkey=https://get.daocloud.io/docker/yum/gpg
EOF

kubernetes

# cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
EOF
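
Optionally refresh the yum metadata so both new repos are picked up right away (routine yum housekeeping, not specific to this setup):

# yum clean all && yum makecache
# yum repolist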

Install Docker and Kubernetes

Install Docker

# yum -y install docker-engine

# mkdir -p /etc/docker

# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": ["http://docker.mirrors.ustc.edu.cn"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF

# systemctl enable docker && systemctl start docker
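
The cgroup driver set in daemon.json has to match the one kubelet uses, otherwise pods will fail to start later. A quick check after Docker is up:

# docker info 2>/dev/null | grep -i 'cgroup driver'
## expected output: Cgroup Driver: systemd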

Install Kubernetes

A k8s cluster consists of master nodes and worker nodes; the master is responsible for managing the cluster. There are generally two ways to install k8s: with the official kubeadm tool, or from binaries. This guide covers the first.

Download the images

Note: kubeadm pulls several Google-hosted images during installation, but gcr.io is not reachable from mainland China, so the images have to be pulled manually and retagged with docker tag. The k8s images I have prepared here are for v1.7.5.

# docker pull alleyj/k8s-dns-dnsmasq-nanny-amd64:1.14.4
# docker pull alleyj/k8s-dns-kube-dns-amd64:1.14.4
# docker pull alleyj/k8s-dns-sidecar-amd64:1.14.4
# docker pull alleyj/kube-controller-manager-amd64:v1.7.5
# docker pull alleyj/kube-apiserver-amd64:v1.7.5
# docker pull alleyj/kube-scheduler-amd64:v1.7.5
# docker pull alleyj/kube-proxy-amd64:v1.7.5
# docker pull alleyj/kube-discovery-amd64:1.0
# docker pull alleyj/dnsmasq-metrics-amd64:1.0
# docker pull alleyj/etcd-amd64:3.0.17
# docker pull alleyj/exechealthz-amd64:1.2
# docker pull alleyj/k8s-dns-dnsmasq-nanny-amd64:1.14.1
# docker pull alleyj/k8s-dns-kube-dns-amd64:1.14.1
# docker pull alleyj/k8s-dns-sidecar-amd64:1.14.1
# docker pull alleyj/pause-amd64:3.0

# After pulling, the images can be pushed to a private registry so the other nodes can reuse them (omitted here)


# docker tag alleyj/k8s-dns-dnsmasq-nanny-amd64:1.14.4 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
# docker tag alleyj/k8s-dns-kube-dns-amd64:1.14.4 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
# docker tag alleyj/k8s-dns-sidecar-amd64:1.14.4 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
# docker tag alleyj/kube-controller-manager-amd64:v1.7.5 gcr.io/google_containers/kube-controller-manager-amd64:v1.7.5
# docker tag alleyj/kube-apiserver-amd64:v1.7.5 gcr.io/google_containers/kube-apiserver-amd64:v1.7.5
# docker tag alleyj/kube-scheduler-amd64:v1.7.5 gcr.io/google_containers/kube-scheduler-amd64:v1.7.5
# docker tag alleyj/kube-proxy-amd64:v1.7.5 gcr.io/google_containers/kube-proxy-amd64:v1.7.5
# docker tag alleyj/kube-discovery-amd64:1.0 gcr.io/google_containers/kube-discovery-amd64:1.0
# docker tag alleyj/dnsmasq-metrics-amd64:1.0 gcr.io/google_containers/dnsmasq-metrics-amd64:1.0
# docker tag alleyj/etcd-amd64:3.0.17 gcr.io/google_containers/etcd-amd64:3.0.17
# docker tag alleyj/exechealthz-amd64:1.2 gcr.io/google_containers/exechealthz-amd64:1.2
# docker tag alleyj/k8s-dns-dnsmasq-nanny-amd64:1.14.1 gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.1
# docker tag alleyj/k8s-dns-kube-dns-amd64:1.14.1 gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.1
# docker tag alleyj/k8s-dns-sidecar-amd64:1.14.1 gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.1
# docker tag alleyj/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
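
Pulling and retagging every image by hand is tedious. Below is a sketch of a small script that does the same thing for a batch of them; the image names are copied from the list above and the list can be extended as needed:

## pull-and-retag.sh -- sketch only
for img in kube-apiserver-amd64:v1.7.5 kube-scheduler-amd64:v1.7.5 \
           kube-proxy-amd64:v1.7.5 etcd-amd64:3.0.17 pause-amd64:3.0; do
  docker pull "alleyj/${img}"                                      ## pull from the Docker Hub mirror
  docker tag  "alleyj/${img}" "gcr.io/google_containers/${img}"    ## retag under the name kubeadm expects
done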

Install kubeadm and related packages

# yum -y install kubectl kubeadm kubelet kubernetes-cni

# systemctl enable kubelet && systemctl start kubelet

These packages must be installed on every node. After installation, start kubelet; at this point its status will only show as loaded (see the check below).
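
To see the state described above, check the unit with systemctl. It is normal for kubelet to keep restarting until kubeadm init (or kubeadm join) has run, since it has no cluster configuration yet:

# systemctl status kubelet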

Install the master node

# kubeadm init --kubernetes-version=v1.7.5

This command performs the Kubernetes installation. If it appears stuck at one step, check the underlying error with journalctl -xeu kubelet. When it finally prints something like kubeadm join --token fa1219.d7b8db5b25685776 10.8.177.22:6443, the installation is complete. Record this command; it is the instruction the other nodes will use later to join the cluster and be managed by the master.
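
If the join token gets lost, it can usually be listed again on the master. kubeadm has had this subcommand since around 1.6, so it should apply to 1.7.5, but treat it as an assumption:

# kubeadm token list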

Then run:

# mkdir -p $HOME/.kube
# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# chown $(id -u):$(id -g) $HOME/.kube/config

# Note: the same steps can be used to let a regular (non-root) user run kubectl
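
For root, an alternative to copying the file is to point KUBECONFIG directly at the admin config:

# export KUBECONFIG=/etc/kubernetes/admin.conf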

Now run kubectl get nodes:

# kubectl get nodes
NAME      STATUS     AGE       VERSION
hd-22     NotReady   1d        v1.7.5

The node shows NotReady because no network plugin has been installed yet. Weave is used here; run the following on the master:

# curl -L -o weave-daemonset-k8s-1.6.yaml https://git.io/weave-kube-1.6

# kubectl apply -f weave-daemonset-k8s-1.6.yaml
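
Once the Weave manifest is applied, the master should move from NotReady to Ready within a minute or two. You can watch the transition and check that the weave-net pods come up:

# kubectl get nodes -w
# kubectl get pods -n kube-system -o wide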

Install the nodes and join the cluster

# yum -y install kubectl kubeadm kubelet kubernetes-cni

# systemctl enable kubelet && systemctl start kubelet

# kubeadm join --token fa1219.d7b8db5b25685776 10.8.177.22:6443

With that, the installation is complete.

Verification

Check the node status:

# kubectl get nodes
NAME      STATUS    AGE       VERSION
hd-22     Ready     1d        v1.7.5
hd-26     Ready     23h       v1.7.5
hd-28     Ready     23h       v1.7.5

Check the pod status:

# kubectl get po -n=kube-system
NAME                            READY     STATUS    RESTARTS   AGE
etcd-hd-22                      1/1       Running   0          1d
kube-apiserver-hd-22            1/1       Running   0          1d
kube-controller-manager-hd-22   1/1       Running   0          1d
kube-dns-2425271678-nsfts       3/3       Running   0          1d
kube-proxy-0f4nd                1/1       Running   0          23h
kube-proxy-1q518                1/1       Running   0          23h
kube-proxy-943l5                1/1       Running   0          1d
kube-scheduler-hd-22            1/1       Running   0          1d
weave-net-f0nrb                 2/2       Running   0          1d
weave-net-fxzl0                 2/2       Running   0          23h
weave-net-lp448                 2/2       Running   0          23h

If any pod is not in the Running state, run kubectl describe pod <podName> -n kube-system to see the detailed error.

Installing kubernetes-dashboard 1.7.1

kubernetes-dashboard 1.7.x differs considerably from 1.6.x and earlier; the main change is the added HTTPS authentication.

Prepare the images

# docker pull alleyj/kubernetes-dashboard-init-amd64:V1.0.1
# docker pull alleyj/kubernetes-dashboard-amd64:v1.7.1
# docker pull alleyj/heapster-influxdb-amd64:v1.3.3
# docker pull alleyj/heapster-grafana-amd64:v4.4.3
# docker pull alleyj/heapster-amd64:v1.4.0

# After pulling, the images can be pushed to a private registry so the other nodes can reuse them (omitted here)

# docker tag alleyj/kubernetes-dashboard-init-amd64:V1.0.1 gcr.io/google_containers/kubernetes-dashboard-init-amd64:v1.0.1
# docker tag alleyj/kubernetes-dashboard-amd64:v1.7.1 gcr.io/google_containers/kubernetes-dashboard-amd64:v1.7.1
# docker tag alleyj/heapster-influxdb-amd64:v1.3.3 gcr.io/google_containers/heapster-influxdb-amd64:v1.3.3
# docker tag alleyj/heapster-grafana-amd64:v4.4.3 gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
# docker tag alleyj/heapster-amd64:v1.4.0 gcr.io/google_containers/heapster-amd64:v1.4.0

Prepare the certificates

# openssl req -newkey rsa:4096 -nodes -sha256 -keyout dashboard.key -x509 -days 365 -out dashboard.crt

Answer the prompts (the last field, Common Name, must be the master's hostname):

Generating a 4096 bit RSA private key
............................................................................................................................................................................................++
......................................................................................++
writing new private key to 'dashboard.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:86
State or Province Name (full name) []:Beijing
Locality Name (eg, city) [Default City]:Beijing
Organization Name (eg, company) [Default Company Ltd]:hollycrm
Organizational Unit Name (eg, section) []:td
Common Name (eg, your name or your server's hostname) []:hd-22

This produces two files:

# ll
-rw-r--r--. 1 root root 2086 Nov 14 09:59 dashboard.crt
-rw-r--r--. 1 root root 3272 Nov 14 09:59 dashboard.key

# pwd
/root/k8s/dash-certs

## This command must be run
# kubectl create secret generic kubernetes-dashboard-certs --from-file=/root/k8s/dash-certs -n kube-system
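
Before deploying the dashboard, you can verify that the secret was created and contains both files:

# kubectl describe secret kubernetes-dashboard-certs -n kube-system
## the Data section should list dashboard.crt and dashboard.key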

Dashboard YAML file

# curl -O https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Start the dashboard

# kubectl apply -f kubernetes-dashboard.yaml
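
Check that the dashboard pod actually comes up before trying to reach it:

# kubectl get pods -n kube-system | grep kubernetes-dashboard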

Once started, the dashboard cannot be reached directly. If a browser can be opened on the master itself, run:

# kubectl proxy
Starting to serve on 127.0.0.1:8001

Locally it can then be accessed at http://127.0.0.1:8001. To access it from other machines, run the following instead:

# kubectl -n kube-system edit service kubernetes-dashboard

Change ClusterIP in the line type: ClusterIP to NodePort, then check which port was exposed:

NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   10.96.144.60   <nodes>       443:30686/TCP   8h

The mapped port here is 30686, so the dashboard can now be reached at https://10.8.177.22:30686/.
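
If you prefer not to edit the service interactively, the same change can be made with a one-line patch and then confirmed; this is an equivalent alternative, not what the original steps used:

# kubectl -n kube-system patch service kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
# kubectl -n kube-system get service kubernetes-dashboard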

Postscript

The dashboard login screen asks for either a kubeconfig file or a token, which normally means a fairly tedious round of certificate and authorization setup. For a test environment you can instead do the following:

# cat > dashboard-admin.yaml <<EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
EOF

# kubectl create -f dashboard-admin.yaml

Then simply click Skip on the login screen and all cluster information will be visible.
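
If you would rather log in with a token than skip authentication entirely, the kubernetes-dashboard service account bound above has an auto-generated token secret that can be read out. This is a sketch; the secret name ends in a random suffix, shown here as a placeholder:

# kubectl -n kube-system get secret | grep kubernetes-dashboard-token
# kubectl -n kube-system describe secret kubernetes-dashboard-token-<xxxxx>
## paste the value of the "token:" field into the dashboard login screen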

Summary

All kinds of errors can show up during cluster installation, so check the logs promptly and fix problems as they appear. Useful commands:

  • journalctl -xeu kubelet: diagnose installation problems from the kubelet logs
  • kubectl describe pod <podName> -n kube-system: see why a system pod failed to start
  • kubectl logs -f <podName> -n kube-system: follow the logs inside a container

Life lies in tinkering

As long as there is life, the tinkering never stops