Kubernetes Series 2: Installing a k8s Environment with kubeadm

Date: 2023-12-13 09:18:44

Environment

Three hosts: one master and two nodes.

192.168.31.11  k8s-1  master
192.168.31.12  k8s-2  node
192.168.31.13  k8s-3  node

Every host runs CentOS Linux release 7.6.1810 (Core).

Software versions:

docker-ce-selinux-17.03.3.ce-1.el7.noarch
docker-ce-17.03.3.ce-1.el7.centos.x86_64
kubelet-1.11.1-0.x86_64
kubeadm-1.11.1-0.x86_64
kubernetes-cni-0.6.0-0.x86_64
kubectl-1.11.1-0.x86_64
1. Basic host setup

Set the hostname on each host:

# 192.168.31.11
hostnamectl set-hostname k8s-1
# 192.168.31.12
hostnamectl set-hostname k8s-2
# 192.168.31.13
hostnamectl set-hostname k8s-3

Add the following entries to the hosts file on every host:

192.168.31.11 k8s-1
192.168.31.12 k8s-2
192.168.31.13 k8s-3

Synchronize the system time on every host:

# set the time zone
ln -svf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# install and run ntpdate
yum -y install ntpdate
ntpdate cn.pool.ntp.org

Disable the firewall and SELinux on every host:

systemctl disable firewalld
systemctl stop firewalld
# turn SELinux off for the current boot
setenforce 0
# edit the config file to disable SELinux permanently
vi /etc/sysconfig/selinux
SELINUX=disabled

Disable the swap partition (skip this step if the hosts were created without one):

swapoff -a
# comment out the swap entry so it is not mounted on the next boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
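A quick sanity check (an illustrative snippet, not part of the original steps) confirms both changes took effect; after the commands above, both counts should be 0:

```shell
# /proc/swaps has a header line, so a swap-free host leaves nothing after line 1.
active_swaps=$(tail -n +2 /proc/swaps | wc -l)
echo "active swap devices: ${active_swaps}"
# count uncommented swap entries left in fstab (grep -c prints 0 when none remain)
grep -c '^[^#].*[[:space:]]swap[[:space:]]' /etc/fstab 2>/dev/null || true
```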
2. Pre-install configuration

Perform the following steps on every host.

Configure the docker-ce repository on every host:

# install wget
yum -y install wget
# download the repo file
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/docker-ce.repo
# switch the repo URLs to the Tsinghua mirror
sed -i 's@download.docker.com@mirrors.tuna.tsinghua.edu.cn/docker-ce@g' /etc/yum.repos.d/docker-ce.repo
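To see what that sed expression does without touching the real repo file, here is a dry run on a sample baseurl line (the sample line is illustrative):

```shell
# Dry run of the mirror substitution on a sample repo line.
sample='baseurl=https://download.docker.com/linux/centos/7/x86_64/stable'
echo "$sample" | sed 's@download.docker.com@mirrors.tuna.tsinghua.edu.cn/docker-ce@g'
# prints: baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/x86_64/stable
```

The @ delimiters avoid having to escape the slashes inside the URLs.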

Configure the Kubernetes repository on every host:

# vi /etc/yum.repos.d/kubernetes.repo and add the following
[kubernetes]
name=Kubernetes Repo
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgcheck=0
enabled=1

Refresh the yum repositories:

yum clean all
yum repolist

Install the docker-ce package on every host (some of the master services run in Docker containers):

# upgrade docker-ce-selinux first; the stock version is too old
yum -y install https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-selinux-17.03.3.ce-1.el7.noarch.rpm
# install docker-ce
yum -y install docker-ce-17.03.3.ce

Configure and start the Docker service:

# configure a registry mirror
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
# start the service
systemctl daemon-reload
systemctl enable docker
systemctl start docker
# let iptables see bridged traffic
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
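Note that values written to /proc only last until the next reboot. To make the bridge settings persistent, they can also be placed in a sysctl drop-in file; this is a sketch, the file name k8s.conf is an arbitrary choice, and the br_netfilter module must be loaded for these keys to exist:

```
# /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
```

After creating the file, running sysctl --system applies it immediately.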
3. Install and start services on the master

Install the required rpm packages:

# install kubelet 1.10 first:
# installing kubelet 1.11.1 directly pulls in a kubernetes-cni version that is too new,
# while kubelet 1.10 depends on kubernetes-cni-0.6.0;
# no better workaround has been found so far
yum -y install 'kubelet-1.10.*'
# then install the required 1.11.1 versions
yum -y install kubeadm-1.11.1 kubelet-1.11.1 kubectl-1.11.1

Configure and enable the kubelet service:

vi /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs

# enable it at boot
systemctl enable kubelet

Pull the images in advance (kubeadm pulls them during initialization, but they are hosted on Google's registry, which is unreachable):

docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-scheduler:v1.11.1
docker tag xiyangxixia/k8s-scheduler:v1.11.1 k8s.gcr.io/kube-scheduler-amd64:v1.11.1
docker pull xiyangxixia/k8s-controller-manager:v1.11.1
docker tag xiyangxixia/k8s-controller-manager:v1.11.1 k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
docker pull xiyangxixia/k8s-apiserver-amd64:v1.11.1
docker tag xiyangxixia/k8s-apiserver-amd64:v1.11.1 k8s.gcr.io/kube-apiserver-amd64:v1.11.1
docker pull xiyangxixia/k8s-etcd:3.2.18
docker tag xiyangxixia/k8s-etcd:3.2.18 k8s.gcr.io/etcd-amd64:3.2.18
docker pull xiyangxixia/k8s-coredns:1.1.3
docker tag xiyangxixia/k8s-coredns:1.1.3 k8s.gcr.io/coredns:1.1.3
docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-s390x
docker tag xiyangxixia/k8s-flannel:v0.10.0-s390x quay.io/coreos/flannel:v0.10.0-s390x
docker pull xiyangxixia/k8s-flannel:v0.10.0-ppc64le
docker tag xiyangxixia/k8s-flannel:v0.10.0-ppc64le quay.io/coreos/flannel:v0.10.0-ppc64le
docker pull xiyangxixia/k8s-flannel:v0.10.0-arm
docker tag xiyangxixia/k8s-flannel:v0.10.0-arm quay.io/coreos/flannel:v0.10.0-arm
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
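The pull-and-tag pairs above are mechanical, so they can be driven by a small loop. This is an illustrative sketch: the mapping function only covers three of the images (extend the case arms for the rest), and the docker calls are skipped or tolerated when docker is unavailable:

```shell
# Map a mirror image name to the local name kubeadm/flannel expect.
retag_target() {
    case "$1" in
        xiyangxixia/k8s-proxy-amd64:*) echo "k8s.gcr.io/kube-proxy-amd64:${1#*:}" ;;
        xiyangxixia/k8s-pause:*)       echo "k8s.gcr.io/pause:${1#*:}" ;;
        xiyangxixia/k8s-flannel:*)     echo "quay.io/coreos/flannel:${1#*:}" ;;
        *)                             return 1 ;;
    esac
}

for img in xiyangxixia/k8s-proxy-amd64:v1.11.1 \
           xiyangxixia/k8s-pause:3.1 \
           xiyangxixia/k8s-flannel:v0.10.0-amd64; do
    target=$(retag_target "$img") || continue
    if command -v docker >/dev/null 2>&1; then
        docker pull "$img" || echo "pull failed: $img"
        docker tag "$img" "$target" || echo "tag skipped: $img"
    fi
done
```

The ${1#*:} expansion strips everything up to the first colon, leaving just the tag.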

Initialize the Kubernetes master:

kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

# --kubernetes-version=v1.11.1   the version must be given explicitly because the Google service is unreachable
# --pod-network-cidr=10.244.0.0/16   the network pods get their IPs from; the default is fine
# --service-cidr=10.96.0.0/12   the service network; it must not overlap with the host network
# --ignore-preflight-errors=Swap/all   ignore the given preflight error (or all of them)

After initialization succeeds, run the commands from the printed instructions:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Once all services (running in Docker) are up, query the cluster state:

# component status
kubectl get cs
# node status (NotReady until flannel is deployed)
kubectl get nodes
# namespaces
kubectl get ns

Deploy the flannel network plugin:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# check the downloaded images
docker images | grep flannel

Check the node status again; it should now show Ready:

kubectl get nodes
4. Install and start services on the nodes

The steps are identical on both nodes.

Install the required rpm packages; make sure the firewall and SELinux are disabled (permanently), and that docker-ce is installed and running:

# install kubelet 1.10 first:
# installing kubelet 1.11.1 directly pulls in a kubernetes-cni version that is too new,
# while kubelet 1.10 depends on kubernetes-cni-0.6.0;
# no better workaround has been found so far
yum -y install 'kubelet-1.10.*'
# kubectl is not strictly needed on the nodes
yum -y install kubeadm-1.11.1 kubelet-1.11.1 kubectl-1.11.1

Configure and enable the kubelet service:

vi /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"
KUBE_PROXY_MODE=ipvs

# enable it at boot
systemctl enable kubelet

On the master, create a token:

# list all existing tokens
kubeadm token list
# create a token and note it down
[root@k8s-1 .kube]# kubeadm token create
8d5cbr.n84orohakj3o5ppd
# if the --discovery-token-ca-cert-hash value is unknown, it can be derived on the master with the following command chain
[root@k8s-1 .kube]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
> openssl dgst -sha256 -hex | sed 's/^.* //'
febac84e25f527f8ee8770a35165164ea8f930929ae0d648405240b3850f5c53
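The same openssl chain can be sanity-checked against a throwaway self-signed certificate; on the master the input would be /etc/kubernetes/pki/ca.crt, while the demo certificate generated here is purely illustrative:

```shell
# Generate a throwaway CA-style cert, then derive the sha256 hash of its
# public key the same way the discovery-token-ca-cert-hash is computed.
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo-ca" \
    -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" 2>/dev/null

ca_hash=$(openssl x509 -pubkey -in "$tmpdir/ca.crt" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:${ca_hash}"   # always 64 hex characters
rm -rf "$tmpdir"
```

Note the 2>/dev/null on the rsa step: it silences the "writing RSA key" notice without discarding the DER output the digest needs.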

Pull the images in advance (the join step needs them but cannot download them):

docker pull xiyangxixia/k8s-pause:3.1
docker tag xiyangxixia/k8s-pause:3.1 k8s.gcr.io/pause:3.1
docker pull xiyangxixia/k8s-proxy-amd64:v1.11.1
docker tag xiyangxixia/k8s-proxy-amd64:v1.11.1 k8s.gcr.io/kube-proxy-amd64:v1.11.1
docker pull xiyangxixia/k8s-flannel:v0.10.0-amd64
docker tag xiyangxixia/k8s-flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64

Join the node to the cluster with the created token:

kubeadm join 192.168.31.11:6443 --token 8d5cbr.n84orohakj3o5ppd --discovery-token-ca-cert-hash sha256:febac84e25f527f8ee8770a35165164ea8f930929ae0d648405240b3850f5c53 --ignore-preflight-errors=Swap

# 192.168.31.11:6443   the master's API server address; the firewall and SELinux must be off on the master
# --token   the value printed by kubeadm token create
# --discovery-token-ca-cert-hash sha256:   the value produced by the openssl command chain above
5. Verify that the cluster initialized successfully

Check that the images were pulled on the nodes:

[root@k8s-1 .kube]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
quay.io/coreos/flannel v0.11.0-amd64 ff281650a721 months ago 52.5 MB
k8s.gcr.io/kube-proxy-amd64 v1.11.1 d5c25579d0ff months ago 97.8 MB
xiyangxixia/k8s-proxy-amd64 v1.11.1 d5c25579d0ff months ago 97.8 MB
k8s.gcr.io/kube-apiserver-amd64 v1.11.1 816332bd9d11 months ago MB
xiyangxixia/k8s-apiserver-amd64 v1.11.1 816332bd9d11 months ago MB
k8s.gcr.io/kube-controller-manager-amd64 v1.11.1 52096ee87d0e months ago MB
xiyangxixia/k8s-controller-manager v1.11.1 52096ee87d0e months ago MB
k8s.gcr.io/kube-scheduler-amd64 v1.11.1 272b3a60cd68 months ago 56.8 MB
xiyangxixia/k8s-scheduler v1.11.1 272b3a60cd68 months ago 56.8 MB
xiyangxixia/k8s-coredns 1.1. b3b94275d97c months ago 45.6 MB
k8s.gcr.io/coredns 1.1. b3b94275d97c months ago 45.6 MB
k8s.gcr.io/etcd-amd64 3.2. b8df3b177be2 months ago MB
xiyangxixia/k8s-etcd 3.2. b8df3b177be2 months ago MB
quay.io/coreos/flannel v0.10.0-s390x 463654e4ed2d months ago MB
xiyangxixia/k8s-flannel v0.10.0-s390x 463654e4ed2d months ago MB
quay.io/coreos/flannel v0.10.0-ppc64le e2f67d69dd84 months ago 53.5 MB
xiyangxixia/k8s-flannel v0.10.0-ppc64le e2f67d69dd84 months ago 53.5 MB
xiyangxixia/k8s-flannel v0.10.0-arm c663d02f7966 months ago 39.9 MB
quay.io/coreos/flannel v0.10.0-arm c663d02f7966 months ago 39.9 MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 months ago 44.6 MB
xiyangxixia/k8s-flannel v0.10.0-amd64 f0fad859c909 months ago 44.6 MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 months ago kB
xiyangxixia/k8s-pause 3.1 da86e6ba6ca1 months ago kB

Check from the master that all nodes are Ready:

[root@k8s-1 .kube]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
k8s-1     Ready     master    14m       v1.11.1
k8s-2     Ready     <none>    4m        v1.11.1
k8s-3     Ready     <none>    4m        v1.11.1

Check the node-related pods in the kube-system namespace:

[root@k8s-1 .kube]# kubectl get pods -n kube-system -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP              NODE
coredns-78fcdf6894-44qf5        1/1     Running   0          14m   10.244.0.2      k8s-1
coredns-78fcdf6894-bxb2m        1/1     Running   0          14m   10.244.0.3      k8s-1
etcd-k8s-1                      1/1     Running   0          13m   192.168.31.11   k8s-1
kube-apiserver-k8s-1            1/1     Running   0          13m   192.168.31.11   k8s-1
kube-controller-manager-k8s-1   1/1     Running   0          14m   192.168.31.11   k8s-1
kube-flannel-ds-amd64-cr8j8     1/1     Running   0          6m    192.168.31.11   k8s-1
kube-flannel-ds-amd64-kxk5w     1/1     Running   0          4m    192.168.31.12   k8s-2
kube-flannel-ds-amd64-pk4zl     1/1     Running   0          4m    192.168.31.13   k8s-3
kube-proxy-mxsrg                1/1     Running   0          4m    192.168.31.12   k8s-2
kube-proxy-tp95q                1/1     Running   0          4m    192.168.31.13   k8s-3
kube-proxy-twpvt                1/1     Running   0          14m   192.168.31.11   k8s-1
kube-scheduler-k8s-1            1/1     Running   0          14m   192.168.31.11   k8s-1