Preface
I went down a lot of dead ends reading docs and articles while setting up Kubernetes, so I have written up the process that finally worked, for the record. The guides online always seem to leave out a step or two, and a beginner like me could never get the deployment to succeed (^_^).
Prepare two or more virtual machines running CentOS 7. This article uses just two (my laptop fan was already spinning flat out).
Pay close attention to the highlighted code and text. ------qingfeng
I was on Kubernetes 1.13 when I started writing this up, but by the time I published, the Aliyun mirror had already moved on to Kubernetes 1.14, so I expect these notes to be useful for any kubeadm-based Kubernetes install.
Preparing the base environment
The two machines:
10.211.55.6 k8s-master
10.211.55.7 k8s-node
# How to set the hostname
hostnamectl set-hostname k8s-master   # run on 10.211.55.6
hostnamectl set-hostname k8s-node     # run on 10.211.55.7
hostnamectl --static                  # verify the result
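The join step later in this article warns when the hostnames cannot be resolved, so it may save trouble to also map them on every node; a minimal sketch (substitute your own IPs):

cat >> /etc/hosts <<EOF
10.211.55.6 k8s-master
10.211.55.7 k8s-node
EOF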
Unless noted otherwise, every step must be run on all nodes (k8s-master and k8s-node).
Disable the firewall :: If you would rather leave the firewall on, the ports Kubernetes needs open are listed here: https://kubernetes.io/docs/setup/independent/install-kubeadm/#check-required-ports
systemctl stop firewalld.service
systemctl disable firewalld.service
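If you do keep firewalld running, a rough sketch of opening the ports from that page (list taken from the 1.13/1.14-era docs; double-check it for your version, and note that flannel's VXLAN backend additionally uses UDP 8472):

# On the master
firewall-cmd --permanent --add-port=6443/tcp          # Kubernetes API server
firewall-cmd --permanent --add-port=2379-2380/tcp     # etcd server client API
firewall-cmd --permanent --add-port=10250-10252/tcp   # kubelet, kube-scheduler, kube-controller-manager
# On the worker
firewall-cmd --permanent --add-port=10250/tcp         # kubelet API
firewall-cmd --permanent --add-port=30000-32767/tcp   # NodePort services
# On both, if using flannel's VXLAN backend
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload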
yum upgrade   # bring the system packages up to date
Disable swap :: Since Kubernetes 1.8, the kubelet refuses to start with swap enabled.
# Remove this line from /etc/fstab:
#   /dev/mapper/centos-swap swap swap defaults 0 0
swapoff -a
cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab
cat /etc/fstab
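A quick sanity check that swap is really gone (plain CentOS commands, nothing kubeadm-specific):

swapon -s   # should print nothing
free -m     # the Swap line should read 0 across the board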
Adjust iptables-related kernel parameters :: Some users on RHEL/CentOS 7 have reported traffic being routed incorrectly because iptables was bypassed. Create the file /etc/sysctl.d/k8s.conf with the following content:
cat <<EOF > /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
# Apply the settings
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
Load the IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# This next command is a bit long
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
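If you later run kube-proxy in IPVS mode, the userspace tools make the rules inspectable; installing them is optional (my own addition, kubeadm does not require it):

yum install -y ipset ipvsadm
ipvsadm -Ln   # lists the IPVS virtual servers once kube-proxy is using IPVS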
Install Docker :: Watch the Docker version; at the time of writing, 18.06 is the newest release validated against Kubernetes.
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum makecache fast
yum install -y --setopt=obsoletes=0 docker-ce-18.06.1.ce-3.el7
systemctl start docker
systemctl enable docker
# Check the Docker version
docker -v
Docker version 18.06.1-ce, build e68fc7a
Deploying Kubernetes with kubeadm
Install kubeadm and kubelet. Note :: when you run yum install, take note of the Kubernetes version number; you will need it later for kubeadm init.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
# Install. Note :: check the version number here; the version passed to kubeadm init must not be lower than the version installed
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
# Start kubelet
systemctl enable kubelet.service && systemctl start kubelet.service
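If you want the packages pinned to a specific release rather than whatever is newest on the mirror (handy here, since init must be told the same version), yum supports version-qualified names; a sketch for 1.13.1:

yum list kubeadm --showduplicates --disableexcludes=kubernetes   # see which versions are available
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1 --disableexcludes=kubernetes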
After starting kubelet.service, checking its status shows that it has not actually come up. Digging into the logs reveals the cause: "/var/lib/kubelet/config.yaml" does not exist. This can safely be ignored for now; kubeadm init will create the file.
# Check kubelet status
[root@centos2 ~]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: activating (auto-restart) (Result: exit-code) since Sun 2019-03-31 16:18:55 CST; 7s ago
     Docs: https://kubernetes.io/docs/
  Process: 4564 ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS (code=exited, status=255)
 Main PID: 4564 (code=exited, status=255)
Mar 31 16:18:55 k8s-node systemd[1]: Unit kubelet.service entered failed state.
Mar 31 16:18:55 k8s-node systemd[1]: kubelet.service failed.
[root@centos2 ~]#
# Inspect the error
[root@centos2 ~]# journalctl -xefu kubelet
Mar 31 16:19:46 k8s-node systemd[1]: kubelet.service holdoff time over, scheduling restart.
Mar 31 16:19:46 k8s-node systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished shutting down
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished shutting down.
Mar 31 16:19:46 k8s-node systemd[1]: Started kubelet: The Kubernetes Node Agent.
-- Subject: Unit kubelet.service has finished start-up
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit kubelet.service has finished starting up.
--
-- The start-up result is done.
Mar 31 16:19:46 k8s-node kubelet[4611]: F0331 16:19:46.989588 4611 server.go:193] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
Mar 31 16:19:46 k8s-node systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Mar 31 16:19:46 k8s-node systemd[1]: Unit kubelet.service entered failed state.
Mar 31 16:19:46 k8s-node systemd[1]: kubelet.service failed.
Now initialize Kubernetes on k8s-master with kubeadm init :: Note :: the --kubernetes-version here must match the version installed above, or init will fail; the error messages are covered in the problems section at the end of this article.
# Run only on k8s-master; do not run this on the node
kubeadm init \
  --apiserver-advertise-address=10.211.55.6 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.13.1 \
  --pod-network-cidr=10.244.0.0/16
--apiserver-advertise-address :: the IP of k8s-master
--image-repository :: the registry to pull the control-plane images from (the Aliyun mirror here)
--kubernetes-version :: disables version auto-detection. The default value, stable-1, fetches the latest version number from https://storage.googleapis.com/kubernetes-release/release/stable-1.txt; pinning the version skips that network request. To stress it once more: it must match the installed Kubernetes version.
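The same init can also be expressed as a config file, which is easier to keep under version control. A rough sketch against the v1beta1 kubeadm API (field names as I recall them from the 1.13-era docs; verify them against the kubeadm reference for your version before relying on this):

# kubeadm-config.yaml (hypothetical file name)
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.211.55.6
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16

# Then:
kubeadm init --config kubeadm-config.yaml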
The kubeadm init output follows. Notice that the init process creates "/var/lib/kubelet/config.yaml" automatically.
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [centos kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.211.55.6]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [centos localhost] and IPs [10.211.55.6 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [centos localhost] and IPs [10.211.55.6 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.507714 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "centos" as an annotation
[mark-control-plane] Marking the node centos as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node centos as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: sfaff2.iet15233unw5jzql
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

#====== these are the commands to run before using the cluster ------qingfeng
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

#===== this is how a node joins; if the token has expired, see the problems section ------qingfeng
  kubeadm join 10.211.55.6:6443 --token sfaff2.iet15233unw5jzql --discovery-token-ca-cert-hash sha256:f798c5be53416ca3b5c7475ee0a4199eb26f9e31ee7106699729c0660a70f8d7

[root@centos ~]#
Init succeeded, and the output tells us a little more configuration is needed before using the cluster (the commands are given right there). It also prints a temporary token along with the command for joining additional nodes.
# A regular user needs to run the following before using k8s
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# As root you can instead simply run
export KUBECONFIG=/etc/kubernetes/admin.conf
# Pick one of the two. I am working as root here, so I just ran the export
Checking kubelet again now shows it in the running state: startup succeeded.
[root@k8s-master ~]# systemctl status kubelet.service
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Sun 2019-03-31 16:11:57 CST; 26min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 32083 (kubelet)
    Tasks: 16
   Memory: 39.6M
   CGroup: /system.slice/kubelet.service
           └─32083 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --cgroup-driver=systemd --network-plugin=cni --pod-infra-...
Mar 31 16:38:28 k8s-master kubelet[32083]: W0331 16:38:28.028997 32083 cni.go:213] Unable to update cni config: No networks found in /etc/cni/net.d
Mar 31 16:38:28 k8s-master kubelet[32083]: E0331 16:38:28.752039 32083 kubelet.go:2170] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not read...fig uninitialized
(the same CNI warning pair repeats every few seconds until a pod network add-on is installed)
Hint: Some lines were ellipsized, use -l to show in full.
Check component status :: confirm that every component reports Healthy
[root@centos ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
Check the node status
[root@centos ~]# kubectl get node
NAME     STATUS     ROLES    AGE   VERSION
centos   NotReady   master   11m   v1.13.4
Install a pod network (flannel) :: a pod network add-on is required for the cluster to work; without it, pods cannot communicate with each other. Kubernetes supports several options; flannel is used here.
[root@centos ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[root@centos ~]#
Check pod status and make sure every pod is Running.
[root@centos ~]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                             READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
kube-system   coredns-78d4cf999f-6b5wq         1/1     Running   0          5h1m    10.244.0.2    centos   <none>           <none>
kube-system   coredns-78d4cf999f-clhkc         1/1     Running   0          5h1m    10.244.0.3    centos   <none>           <none>
kube-system   etcd-centos                      1/1     Running   0          5h      10.211.55.6   centos   <none>           <none>
kube-system   kube-apiserver-centos            1/1     Running   0          5h      10.211.55.6   centos   <none>           <none>
kube-system   kube-controller-manager-centos   1/1     Running   0          5h      10.211.55.6   centos   <none>           <none>
kube-system   kube-flannel-ds-amd64-lnp55      1/1     Running   0          3m41s   10.211.55.6   centos   <none>           <none>
kube-system   kube-proxy-xsnr8                 1/1     Running   0          5h1m    10.211.55.6   centos   <none>           <none>
kube-system   kube-scheduler-centos            1/1     Running   0          5h      10.211.55.6   centos   <none>           <none>
[root@centos ~]#
Check the node again; its status has changed to Ready.
[root@centos ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
centos   Ready    master   5h2m   v1.13.4
[root@centos ~]#
With that, the Kubernetes installation is complete.
Now that it is installed, let's try it out right away with a simple Deployment.
Create nginx-deployment.yaml with the following content:
[root@k8s-master testnginx]# cat nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: qf-test-nginx
  #namespace: qingfeng-deve
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: qingfenglian/test_nginx
        ports:
        - containerPort: 80
Create it, then check the svc and pod status. At this point the pod is stuck in Pending.
[root@k8s-master ~]# mkdir -p k8s/testnginx
[root@k8s-master ~]# cd k8s/testnginx/
[root@k8s-master testnginx]# vim nginx-deployment.yaml
[root@k8s-master testnginx]# kubectl create -f nginx-deployment.yaml
deployment.extensions/qf-test-nginx created
[root@k8s-master testnginx]# kubectl get svc,pod
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   19m
NAME                                 READY   STATUS    RESTARTS   AGE
pod/qf-test-nginx-56db997f77-gkvcz   0/1     Pending   0          8s
The reason: pods are not allowed to schedule onto k8s-master. To run pods on k8s-master, i.e. a single-node deployment, remove the master taint as follows:
# First check whether k8s-master allows pods; the taint shows NoSchedule
[root@k8s-master testnginx]# kubectl describe node k8s-master | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule
# Now allow k8s-master to run pods
[root@k8s-master testnginx]# kubectl taint nodes k8s-master node-role.kubernetes.io/master-
node/k8s-master untainted
# Check again whether pods are allowed
[root@k8s-master testnginx]# kubectl describe node k8s-master | grep Taint
Taints:             <none>
########-------------- divider --------------#########
# If after testing you want to restore the no-pods-on-master behaviour, run:
kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule
With k8s-master now allowed to run pods, check the pod status again.
[root@k8s-master testnginx]# kubectl get pod
NAME                             READY   STATUS    RESTARTS   AGE
qf-test-nginx-56db997f77-gkvcz   1/1     Running   0          8m50s   ---------------- the pod has started
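The Deployment alone gives you no stable way in from outside the cluster; for that you also need a Service, which this walkthrough never defines. A minimal NodePort sketch for the Deployment above (the Service name and nodePort value are my own choices):

# nginx-service.yaml (hypothetical)
apiVersion: v1
kind: Service
metadata:
  name: qf-test-nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx          # matches the pod label in nginx-deployment.yaml
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080     # any free port in the 30000-32767 range

After kubectl apply -f nginx-service.yaml, curl http://10.211.55.6:30080 should return the nginx page.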
Adding a node
Run kubeadm join
[root@k8s-node ~]# kubeadm join 10.211.55.6:6443 --token uf2c4g.n7ibf1g8gxbkqz2z \
>     --discovery-token-ca-cert-hash sha256:f01892c96cee8d02c373e34bed3a45c8f3f9888fdd19767e706ec09e8fb9c893
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Hostname]: hostname "k8s-node" could not be reached
	[WARNING Hostname]: hostname "k8s-node": lookup k8s-node on 10.211.55.1:53: no such host
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
You can now run kubectl get node to see how many nodes there are. If you want to use kubectl on the node itself, copy /etc/kubernetes/admin.conf from k8s-master onto the node machine and run export KUBECONFIG=/etc/kubernetes/admin.conf; this was already mentioned in the init output. scp works fine for the copy.
[root@k8s-master testnginx]# kubectl get node
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   Ready      master   37m     v1.14.0
k8s-node     NotReady   <none>   2m55s   v1.14.0
Copy admin.conf to the node :: run the following on the node machine
[root@k8s-node ~]# kubectl get node   # ------------- using kubectl on the node
The connection to the server localhost:8080 was refused - did you specify the right host or port?   # ---- this is the error
[root@k8s-node ~]# scp root@10.211.55.6:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf   # ----- copy admin.conf from k8s-master to this node
The authenticity of host '10.211.55.6 (10.211.55.6)' can't be established.
ECDSA key fingerprint is SHA256:ijx7s49ok7H8PMRY0tVKn7Be06G0OjArv/DpCNtHoIw.
ECDSA key fingerprint is MD5:89:68:de:2f:fe:ca:3f:26:e2:28:30:87:2b:21:e9:3d.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.211.55.6' (ECDSA) to the list of known hosts.
root@10.211.55.6's password:
admin.conf                                    100% 5451     5.4MB/s   00:00
[root@k8s-node ~]# export KUBECONFIG=/etc/kubernetes/admin.conf   # -------- load it
[root@k8s-node ~]# kubectl get node   # ------------ try kubectl again
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   43m     v1.14.0
k8s-node     Ready    <none>   8m15s   v1.14.0
Deleting a node
After a node is deleted, if it should later rejoin the cluster it must first run kubeadm reset and then kubeadm join again.
[root@k8s-master testnginx]# kubectl delete node k8s-node   # k8s-node is the node name; this is not the only way to remove it, but I won't list every method here
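A gentler removal first evicts the node's workloads before deleting it; a sketch of the usual sequence (flag names per the kubectl of this era; check kubectl drain --help on your version):

# On the master: evict pods, skipping the DaemonSet-managed ones (flannel, kube-proxy)
kubectl drain k8s-node --ignore-daemonsets --delete-local-data
kubectl delete node k8s-node
# On the node itself: wipe the kubeadm state so it can join cleanly later
kubeadm reset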
If the token has expired when you go to add a node, regenerate it like this (straight to the commands):
[root@k8s-master testnginx]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
uf2c4g.n7ibf1g8gxbkqz2z   23h   2019-04-03T15:28:40+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
[root@k8s-master testnginx]# kubeadm token create
w0r09e.e5olwz1rlhwvgo9p
[root@k8s-master testnginx]# kubeadm token list
TOKEN                     TTL   EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
uf2c4g.n7ibf1g8gxbkqz2z   23h   2019-04-03T15:28:40+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
w0r09e.e5olwz1rlhwvgo9p   23h   2019-04-03T16:19:56+08:00   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
[root@k8s-master testnginx]#
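kubeadm join also needs the --discovery-token-ca-cert-hash. Rather than computing it by hand, kubeadm of this era can print a complete, ready-to-paste join command (verify the flag exists on your version):

kubeadm token create --print-join-command
# If you want the hash itself, it is the sha256 of the cluster CA's public key:
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'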
Since k8s-master is allowed to run pods, we effectively have two nodes now. Let's change the pod count in nginx-deployment.yaml from 1 to 2 and see what happens.
# nginx-deployment.yaml after the change; the only edit is spec.replicas, from 1 to 2
[root@k8s-master testnginx]# cat nginx-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: qf-test-nginx
  #namespace: qingfeng-deve
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: qingfenglian/test_nginx
        ports:
        - containerPort: 80
# Redeploy
[root@k8s-master testnginx]# kubectl apply -f nginx-deployment.yaml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
deployment.extensions/qf-test-nginx configured
# Check pod status: there are two pods now, but one has not come up yet; give it a moment
[root@k8s-master testnginx]# kubectl get pod -o wide
NAME                             READY   STATUS              RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
qf-test-nginx-56db997f77-gkvcz   1/1     Running             0          35m   10.244.0.6   k8s-master   <none>           <none>
qf-test-nginx-56db997f77-tx4wk   0/1     ContainerCreating   0          12s   <none>       k8s-node     <none>           <none>
# Looking again, both pods have started; note the NODE column -- they landed on different nodes
[root@k8s-master testnginx]# kubectl get pod -o wide
NAME                             READY   STATUS    RESTARTS   AGE    IP           NODE         NOMINATED NODE   READINESS GATES
qf-test-nginx-56db997f77-gkvcz   1/1     Running   0          37m    10.244.0.6   k8s-master   <none>           <none>
qf-test-nginx-56db997f77-tx4wk   1/1     Running   0          107s   10.244.3.2   k8s-node     <none>           <none>
[root@k8s-master testnginx]#
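Editing the YAML is one way; for a quick experiment the same scaling can be done imperatively (the Deployment name is the one from the manifest above):

kubectl scale deployment qf-test-nginx --replicas=2
kubectl get pod -o wide   # watch the second pod come up on the other node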
And that is the end of these notes.
Problems encountered
Problem 1: the Kubernetes yum repo
Description: I initially tried to work around the repo problem with a command-line proxy, but in practice kubeadm init then failed, complaining it could not find the k8s-master host.
Solution: use the Aliyun repo.
Problem 2: k8s fails to start because of the Docker version
Solution: at the time of writing, the newest Docker release validated against k8s was 18.06, but Docker had already reached 18.09. I started out wanting to test-drive the latest version, with predictable results...
Other errors
[root@k8s-master ~]# docker info
Containers: 17
 Running: 16
 Paused: 0
 Stopped: 1
Images: 8
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: systemd   ######------ this is the line to watch; before the change it reads cgroupfs

# Change the Docker cgroup driver; after saving the change, restart Docker (systemctl restart docker)
# Edit or create the file:
vim /etc/docker/daemon.json
# and add the following content -- don't add this line itself, it's only a comment (^_^) ------qingfeng
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Official documentation:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
For domain-based access to services, see :: kubernetes + istio traffic management
Problems I run into while using k8s will be collected here :: common Kubernetes problems