Installing a k8s Test Environment with kubeadm

Date: 2022-06-19 21:36:59

The goal is to build a usable k8s test environment with kubeadm, ending up with one master node (non-HA) and two worker nodes.

Environment and versions

  • CentOS 7.3
  • kubeadm 1.11.1
  • kubelet 1.11.1
  • kubectl 1.11.1
  • docker-ce-18.06

Note: kubeadm manages its own local etcd by default; here we use a separately installed local etcd instead of the one kubeadm would bring up.

name     ip           role
master1  10.0.12.13   master
node1    10.0.12.10   node
node2    10.0.12.8    node
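
Optionally, to make these hostnames resolve on every machine, you can append them to /etc/hosts (purely a convenience step; the hostnames and IPs are the ones from the table above):

sudo bash -c 'cat <<EOF >> /etc/hosts
10.0.12.13 master1
10.0.12.10 node1
10.0.12.8  node2
EOF'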

Base environment

All three hosts need docker, kubeadm, and kubelet, all installed via yum. On each host you also need to turn off swap, disable the firewall, and disable SELinux.

Disable SELinux

sudo setenforce 0
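setenforce 0 only lasts until the next reboot. To keep SELinux out of the way permanently, the usual approach is to also edit /etc/selinux/config (a sketch; verify the file afterwards):

sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config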

Turn off swap

sudo swapoff -a
vim /etc/fstab  # comment out the swap line
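If you prefer a non-interactive edit over vim, a one-liner like the following comments out any swap entry (a sketch; double-check /etc/fstab afterwards):

sudo sed -i '/ swap / s/^/#/' /etc/fstab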

Install docker

sudo yum install -y docker

Enable docker at boot and start the service

sudo systemctl enable docker
sudo systemctl start docker

Check the docker version; it should be at least 1.12.

sudo docker version
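
If you only want the version numbers, for example in a script, docker's Go-template output works as well (assuming your docker build supports --format here, which packaged versions of this era do):

sudo docker version --format 'client: {{.Client.Version}}  server: {{.Server.Version}}'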

Install the Kubernetes packages

Configure the yum repo and install kubeadm, kubectl, and kubelet.

Add the k8s repo

sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF'

If Google is unreachable, use the Aliyun mirror instead:

sudo bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF'

Install the k8s components

sudo yum install -y kubelet kubeadm kubectl
sudo systemctl enable kubelet 
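The repo always carries the newest release, while this walkthrough targets 1.11.1, so you may want to pin the versions explicitly instead (a sketch; the exact version strings depend on what the repo currently offers):

sudo yum install -y kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1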

Before kubeadm v1.11 you had to start the kubelet service first with sudo systemctl start kubelet, but when installing v1.11 I found this is no longer necessary: the later kubeadm init starts kubelet automatically.

k8s networking relies on forwarding bridged traffic through iptables, so set the following kernel parameters:

sudo bash -c 'cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'

Apply the settings

sudo sysctl --system
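
If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; loading it and making that persistent usually fixes it (assuming your kernel ships br_netfilter as a module):

sudo modprobe br_netfilter
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
sudo sysctl --system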

Disable the firewall

This is purely for convenience in a test environment; in a real deployment you would open the required ports instead of disabling the firewall.

sudo systemctl disable firewalld
sudo systemctl stop firewalld
sudo systemctl status firewalld

Master setup

Since we use an external etcd, the etcd service needs to be installed on the master node; here etcd is also just a single node.

Install etcd

An etcd cluster or a single node, over http or https, all work fine as long as kubeadm is configured accordingly. I will skip the installation details here; etcd's own install guides cover it.

In this experiment etcd is a single node listening on https://10.0.12.13:2379.
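
For reference, a minimal single-node TLS setup could look roughly like the following; this is only a sketch, and it assumes the certificates already exist under /etc/etcd/ssl (the same paths referenced in kubeadm.yaml below). In practice you would wrap the etcd command in a systemd unit.

# run etcd with TLS on the client port
sudo etcd --name etcd0 \
  --data-dir /var/lib/etcd \
  --listen-client-urls https://10.0.12.13:2379 \
  --advertise-client-urls https://10.0.12.13:2379 \
  --cert-file /etc/etcd/ssl/etcd.pem \
  --key-file /etc/etcd/ssl/etcd-key.pem \
  --trusted-ca-file /etc/etcd/ssl/etcd-ca.pem \
  --client-cert-auth

# quick health check using etcdctl's v2-style flags
sudo etcdctl --endpoints https://10.0.12.13:2379 \
  --ca-file /etc/etcd/ssl/etcd-ca.pem \
  --cert-file /etc/etcd/ssl/etcd.pem \
  --key-file /etc/etcd/ssl/etcd-key.pem \
  cluster-health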

kubeadm

Create the kubeadm configuration file kubeadm.yaml (a new file, in any directory):

---
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.11.1
apiServerCertSANs:
- 10.0.12.13
networking:
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
api:
  advertiseAddress: 10.0.12.13
etcd:
  endpoints:
  - https://10.0.12.13:2379
  caFile: /etc/etcd/ssl/etcd-ca.pem
  certFile: /etc/etcd/ssl/etcd.pem
  keyFile: /etc/etcd/ssl/etcd-key.pem

See the kubeadm reference documentation for the full list of configuration options; adjust the values to your own environment as needed.

Install k8s with kubeadm

Now it is kubeadm's turn to do the real work:

sudo kubeadm init --config kubeadm.yaml

# every phase is printed as it runs
....
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.0.12.13:6443 --token 4b3m3i.hrzetk4qb5uh74e4 --discovery-token-ca-cert-hash sha256:357b0fec02af903e66022019cea82ff3a95264479cb5d222ea8e938df2db3d20

The output spells out the next steps: configure kubectl, deploy a pod network add-on, and the command for the worker nodes to join.

Next, configure kubectl as the output suggests:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
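
If you are operating as root, you can also skip the copy and point kubectl directly at the admin kubeconfig:

export KUBECONFIG=/etc/kubernetes/admin.conf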

Check the status

$ kubectl get nodes
NAME      STATUS     ROLES     AGE       VERSION
master1   NotReady   master    5m        v1.11.1

The status is NotReady because no network add-on has been deployed yet, so next install flannel.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml

Check the status again

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   coredns-78fcdf6894-4744c          1/1       Running   0          5m
kube-system   coredns-78fcdf6894-jbvhd          1/1       Running   0          5m
kube-system   kube-apiserver-master1            1/1       Running   0          5m
kube-system   kube-controller-manager-master1   1/1       Running   0          5m
kube-system   kube-flannel-ds-amd64-kp7cr       1/1       Running   0          11s
kube-system   kube-proxy-6778v                  1/1       Running   0          5m
kube-system   kube-scheduler-master1            1/1       Running   0          5m

$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master1   Ready     master    6m        v1.11.1

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}

That basically completes the master node setup.

Node setup

Simply run the command printed at the end of kubeadm init:

sudo  kubeadm join 10.0.12.13:6443 --token 4b3m3i.hrzetk4qb5uh74e4 --discovery-token-ca-cert-hash sha256:357b0fec02af903e66022019cea82ff3a95264479cb5d222ea8e938df2db3d20
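
Note that the bootstrap token printed by kubeadm init expires after 24 hours by default; if you add a node later, generate a fresh join command on the master:

sudo kubeadm token create --print-join-command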

After doing this on both nodes, check from the master again:

$ kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
master1   Ready     master    7m        v1.11.1
node1     Ready     <none>    18s       v1.11.1
node2     Ready     <none>    10s       v1.11.1

Test with a simple pod

kubectl run -i --tty busybox --image=busybox --restart=Never -- sh

# it runs fine; now check where it was scheduled
$ kubectl get pod --show-all -o wide
NAME      READY     STATUS      RESTARTS   AGE       IP           NODE
busybox   0/1       Completed   0          48s       10.244.1.2   node1
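
To also confirm that cluster DNS (coredns) works, a quick lookup from a throwaway pod is enough (dnstest is just an arbitrary pod name; some newer busybox tags ship a flaky nslookup, hence the pinned 1.28 image):

kubectl run -i --tty dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default

# clean up the test pods afterwards
kubectl delete pod busybox dnstest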

Miscellaneous

View the kubeadm configuration

sudo kubeadm config view
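
This version of kubeadm keeps its configuration in the kubeadm-config ConfigMap in the kube-system namespace, so you can also inspect it directly:

kubectl -n kube-system get configmap kubeadm-config -o yaml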