Docker and etcd were deployed earlier. Each Node now needs two more services: kubelet and kube-proxy.
Deploying kubelet (steps on the Master node)
1. Prepare the binaries
[root@k8s-master bin]# cd /usr/local/src/kubernetes/server/bin/
[root@k8s-master bin]# cp kubelet kube-proxy /opt/kubernetes/bin/
[root@k8s-master bin]# scp kubelet kube-proxy 10.0.3.226:/opt/kubernetes/bin/
[root@k8s-master bin]# scp kubelet kube-proxy 10.0.3.227:/opt/kubernetes/bin/
2. Create the role binding
[root@k8s-master bin]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created
3. Create the kubelet bootstrapping kubeconfig file: set the cluster parameters
[root@k8s-master bin]# cd /usr/local/src/ssl/
[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=https://10.0.3.225:6443 \
>   --kubeconfig=bootstrap.kubeconfig
Cluster "kubernetes" set.
4. Set the client authentication parameters
[root@k8s-master ssl]# kubectl config set-credentials kubelet-bootstrap \
>   --token=4c7d89749d1e1a15e5fe55eb5e8446ec \
>   --kubeconfig=bootstrap.kubeconfig
User "kubelet-bootstrap" set.
Note: this token was generated back when the API Server was deployed.
[root@k8s-master ssl]# grep 'token' /usr/lib/systemd/system/kube-apiserver.service
  --enable-bootstrap-token-auth \
  --token-auth-file=/opt/kubernetes/ssl/bootstrap-token.csv \
[root@k8s-master ssl]# cat /opt/kubernetes/ssl/bootstrap-token.csv
4c7d89749d1e1a15e5fe55eb5e8446ec,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
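The token value itself is arbitrary. A common way to generate one like the 32-hex-character value above, and the CSV row kube-apiserver reads via --token-auth-file, is sketched below (writing to /tmp so it does not touch the real file):

```shell
# Sketch: generate 16 random bytes, hex-encode them (32 characters),
# and write a token CSV row in the token,user,uid,"group" format shown above.
# The /tmp path is illustrative only.
TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' \n')
echo "${TOKEN},kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > /tmp/bootstrap-token.csv
cat /tmp/bootstrap-token.csv
```

If you regenerate the token this way, remember to use the new value in the `kubectl config set-credentials` step as well.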
5. Set the context parameters
[root@k8s-master ssl]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user=kubelet-bootstrap \
>   --kubeconfig=bootstrap.kubeconfig
Context "default" created.
6. Select the default context
[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
Switched to context "default".
All of these commands exist solely to generate the bootstrap.kubeconfig file; every node you add to the cluster needs a copy of it.
[root@k8s-master ssl]# cat bootstrap.kubeconfig
[root@k8s-master ssl]# cp bootstrap.kubeconfig /opt/kubernetes/cfg
[root@k8s-master ssl]# scp bootstrap.kubeconfig 10.0.3.226:/opt/kubernetes/cfg
[root@k8s-master ssl]# scp bootstrap.kubeconfig 10.0.3.227:/opt/kubernetes/cfg
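The repeated cp/scp commands throughout this guide generalize to a simple loop over the node IPs (10.0.3.226 and 10.0.3.227 here). Shown as a dry run with echo so it can be checked without live hosts; drop the echo to actually copy:

```shell
# Dry run: print the scp command that would be run for each node.
# Remove "echo" to perform the real copy.
for node in 10.0.3.226 10.0.3.227; do
  echo scp bootstrap.kubeconfig "${node}:/opt/kubernetes/cfg/"
done
```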
Deploying kubelet (steps on the Node nodes)
1. Set up CNI support
[root@k8s-node1 ~]# mkdir -p /etc/cni/net.d
[root@k8s-node1 ~]# vim /etc/cni/net.d/10-default.conf
{
    "name": "flannel",
    "type": "flannel",
    "delegate": {
        "bridge": "docker0",
        "isDefaultGateway": true,
        "mtu": 1400
    }
}
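A malformed file in /etc/cni/net.d breaks pod networking with unhelpful errors, so it is worth validating the JSON before starting kubelet. A sketch, assuming a python3 interpreter is available on the node (the /tmp path is illustrative):

```shell
# Write the flannel CNI config to a scratch location and check that it
# parses as JSON; python3 -m json.tool exits non-zero on a parse error.
mkdir -p /tmp/cni-check
cat > /tmp/cni-check/10-default.conf <<'EOF'
{
    "name": "flannel",
    "type": "flannel",
    "delegate": {
        "bridge": "docker0",
        "isDefaultGateway": true,
        "mtu": 1400
    }
}
EOF
python3 -m json.tool /tmp/cni-check/10-default.conf > /dev/null && echo "valid JSON"
```

Run the same `python3 -m json.tool` check against the real /etc/cni/net.d/10-default.conf after editing it.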
2. Create the kubelet data directory
[root@k8s-node1 ~]# mkdir /var/lib/kubelet
3. Create the kubelet service unit
Note: change the --address and --hostname-override values to each node's own IP (systemd does not support inline comments on continuation lines, so keep the note out of the unit file itself).

[root@k8s-node1 ~]# vim /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \
  --address=10.0.3.226 \
  --hostname-override=10.0.3.226 \
  --pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.0 \
  --experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
  --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --network-plugin=cni \
  --cni-conf-dir=/etc/cni/net.d \
  --cni-bin-dir=/opt/kubernetes/bin/cni \
  --cluster-dns=10.1.0.2 \
  --cluster-domain=cluster.local. \
  --hairpin-mode hairpin-veth \
  --allow-privileged=true \
  --fail-swap-on=false \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
4. Start the kubelet
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl enable kubelet
[root@k8s-node1 ~]# systemctl start kubelet
# Check the service status
[root@k8s-node1 ~]# systemctl status kubelet
# If startup fails, run journalctl -xefu kubelet to inspect the logs.
5. Check the CSR requests (note: run this on the Master).
[root@k8s-master ssl]# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-ZIu6TBO8uO4jf7siY840IaGWk5lPrgRBZvZz5vz2-OM 15m kubelet-bootstrap Pending
6. Approve the kubelet TLS certificate requests on the Master
[root@k8s-master ~]# kubectl get csr|grep 'Pending' | awk 'NR>0{print $1}'| xargs kubectl certificate approve
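An offline sketch of how that pipeline works: grep keeps only the Pending rows, awk prints column 1 (the CSR name), and xargs then feeds each name to `kubectl certificate approve`. (The NR>0 condition is a no-op, since NR is always at least 1; a plain '{print $1}' behaves the same.) Sample text stands in for real `kubectl get csr` output:

```shell
# Simulated `kubectl get csr` output piped through the same filters.
printf '%s\n' \
  'node-csr-aaa   15m   kubelet-bootstrap   Pending' \
  'node-csr-bbb   20m   kubelet-bootstrap   Approved,Issued' \
  | grep 'Pending' | awk 'NR>0{print $1}'
# prints: node-csr-aaa
```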
Once the requests are approved, check the node status.
[root@k8s-master ~]# kubectl get node
NAME         STATUS    ROLES     AGE       VERSION
10.0.3.226   Ready     <none>    4h        v1.10.1
10.0.3.227   Ready     <none>    22s       v1.10.1
Certificate files are generated automatically on the Node nodes:
[root@k8s-node2 cfg]# ls -l /opt/kubernetes/ssl/kubelet*
-rw-r--r-- 1 root root 1042 Nov 14 15:34 /opt/kubernetes/ssl/kubelet-client.crt
-rw------- 1 root root  227 Nov 14 15:31 /opt/kubernetes/ssl/kubelet-client.key
-rw-r--r-- 1 root root 2169 Nov 14 15:31 /opt/kubernetes/ssl/kubelet.crt
-rw------- 1 root root 1679 Nov 14 15:31 /opt/kubernetes/ssl/kubelet.key
Deploying kube-proxy
1. Configure kube-proxy to use LVS (install on every Node)
[root@k8s-node1 ssl]# yum install -y ipvsadm ipset conntrack
2. Create the kube-proxy certificate signing request (on the Master)
[root@k8s-master ~]# cd /usr/local/src/ssl/
[root@k8s-master ssl]# vim kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
3. Generate the certificate
[root@k8s-master ssl]# cfssl gencert -ca=/opt/kubernetes/ssl/ca.pem \
>   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
>   -config=/opt/kubernetes/ssl/ca-config.json \
>   -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
4. Distribute the certificates to all Node nodes
[root@k8s-master ssl]# cp kube-proxy*.pem /opt/kubernetes/ssl/
[root@k8s-master ssl]# scp kube-proxy*.pem 10.0.3.226:/opt/kubernetes/ssl/
[root@k8s-master ssl]# scp kube-proxy*.pem 10.0.3.227:/opt/kubernetes/ssl/
5. Create the kube-proxy kubeconfig file (on the Master, then distribute to the Nodes)
[root@k8s-master ssl]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=https://10.0.3.225:6443 \
>   --kubeconfig=kube-proxy.kubeconfig
Cluster "kubernetes" set.
[root@k8s-master ssl]# kubectl config set-credentials kube-proxy \
>   --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
>   --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
>   --embed-certs=true \
>   --kubeconfig=kube-proxy.kubeconfig
User "kube-proxy" set.
[root@k8s-master ssl]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user=kube-proxy \
>   --kubeconfig=kube-proxy.kubeconfig
Context "default" created.
[root@k8s-master ssl]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Switched to context "default".
6. Distribute the kubeconfig file
[root@k8s-master ssl]# cp kube-proxy.kubeconfig /opt/kubernetes/cfg/
[root@k8s-master ssl]# scp kube-proxy.kubeconfig 10.0.3.226:/opt/kubernetes/cfg/
[root@k8s-master ssl]# scp kube-proxy.kubeconfig 10.0.3.227:/opt/kubernetes/cfg/
7. Create the kube-proxy service unit (on the Node nodes)
As with the kubelet unit, change --bind-address and --hostname-override to each node's own IP.

[root@k8s-node1 ~]# mkdir /var/lib/kube-proxy
[root@k8s-node1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \
  --bind-address=10.0.3.226 \
  --hostname-override=10.0.3.226 \
  --kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig \
  --masquerade-all \
  --feature-gates=SupportIPVSProxyMode=true \
  --proxy-mode=ipvs \
  --ipvs-min-sync-period=5s \
  --ipvs-sync-period=5s \
  --ipvs-scheduler=rr \
  --v=2 \
  --logtostderr=false \
  --log-dir=/opt/kubernetes/log
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
8. Start kube-proxy
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl enable kube-proxy
[root@k8s-node1 ~]# systemctl start kube-proxy
# Check the service status; ipvsadm confirms that kube-proxy has programmed IPVS rules.
[root@k8s-node1 ~]# systemctl status kube-proxy
[root@k8s-node1 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.1.0.1:443 rr persistent 10800
  -> 10.0.3.225:6443              Masq    1      0          0
With kubelet and kube-proxy both running normally on the Node nodes, the core K8S cluster is deployed. The next step is setting up the Flannel network.