Installing a Kubernetes cluster with kubeadm init

时间:2022-09-28 05:19:05

Environment:
master, etcd  172.16.1.5
node1         172.16.1.6
node2         172.16.1.7
Prerequisites (a minimal sketch of these steps follows the list):
1. Hostname-based communication between nodes, via /etc/hosts
2. Time synchronization
3. firewalld and iptables.service disabled
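A minimal sketch of the prerequisite steps (chrony is assumed for time synchronization; adjust to your own environment):

# run on every node
systemctl stop firewalld && systemctl disable firewalld    # disable the firewall
systemctl start chronyd && systemctl enable chronyd        # keep the clocks in sync (assumes chrony is installed)
chronyc sources                                            # verify the time sources are reachable
hostnamectl set-hostname master.xiaolizi.com               # set the hostname (master shown; use node1/node2 on the workers)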
Installation and configuration steps:
1. etcd cluster, master node only
2. flannel, all cluster nodes
3. k8s master node:
apiserver, scheduler, controller-manager
4. configure the k8s worker nodes:
set up docker, kube-proxy and kubelet first

kubeadm workflow:
1. master and nodes: install kubelet, docker, kubeadm
2. master: kubeadm init to initialize the master node
3. nodes: kubeadm join to join the cluster
Initialization design reference:
https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md

[root@node1 ~]#cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.1.5 master.xiaolizi.com master
172.16.1.6 node1.xiaolizi.com node1
172.16.1.7 node2.xiaolizi.com node2

Kubernetes package mirror: https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
GPG key: https://mirrors.aliyun.com/kubernetes/apt/doc/yum-key.gpg
Docker package mirror: wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum repolist
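setenforce 0 only switches SELinux off for the current boot; a small sketch to make the change survive a reboot (assuming the stock /etc/selinux/config layout):

sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config   # persist across reboots
getenforce                                                               # should now report Permissive (or Disabled)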

Install docker, kubeadm, kubectl and kubelet

yum install docker-ce kubeadm kubectl kubelet -y
systemctl enable kubelet

Many of the images k8s needs cannot be downloaded from inside China, so the configuration below makes them reachable: before starting docker, define Environment variables in the [Service] section of the unit file so that the k8s images are pulled through a proxy. Once the images have been downloaded, the proxy lines can be commented out and a domestic mirror accelerator used for non-k8s images; re-enable the proxy later whenever it is needed again.

# set the proxy address according to the proxy available on your own machine
vim /usr/lib/systemd/system/docker.service
[Service]
Environment="HTTPS_PROXY=http://192.168.2.208:10080" # the images are hosted abroad; the address and port here belong to the proxy service (some people instead push pre-pulled images to a local registry)
Environment="HTTP_PROXY=http://192.168.2.208:10080"
Environment="NO_PROXY=127.0.0.0/8,192.168.2.0/25" # after saving and exiting, run:
systemctl daemon-reload
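For the domestic mirror accelerator mentioned above, a minimal sketch using the registry-mirrors option in /etc/docker/daemon.json (the mirror URL is only an example; substitute the accelerator address you actually use):

cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
EOF
systemctl restart docker    # restart (or start) docker afterwards for the change to take effect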
# make sure the following two kernel parameters are 1 (the default is 1)
cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
cat /proc/sys/net/bridge/bridge-nf-call-iptables
# if the result is not 1, edit the sysctl configuration and reload it
vim /usr/lib/sysctl.d/00-system.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
sysctl --system
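If the two files under /proc/sys/net/bridge do not exist at all, the br_netfilter kernel module is probably not loaded; a quick sketch to load it now and at every boot (the k8s.conf filename is just an example):

modprobe br_netfilter                               # load the bridge netfilter module immediately
echo br_netfilter > /etc/modules-load.d/k8s.conf    # load it automatically at boot
lsmod | grep br_netfilter                           # confirm the module is loaded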
# start docker-ce
systemctl start docker
# enable it at boot
systemctl enable docker.service
# before starting kubelet, check which files the package installed
[root@master ~]#rpm -ql kubelet
/etc/kubernetes/manifests # manifests directory
/etc/sysconfig/kubelet # configuration file
/usr/bin/kubelet # main binary
/usr/lib/systemd/system/kubelet.service # unit file; earlier versions refuse to start while swap is enabled, and the parameter to change that is set in the config file below
[root@master ~]#vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
# start kubelet
systemctl start kubelet # kubelet will not start successfully at this point, because the master node has not been initialized yet
systemctl stop kubelet
systemctl enable kubelet
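Instead of keeping swap on and passing --fail-swap-on=false, an alternative sketch is to disable swap entirely, which is what kubelet expects by default:

swapoff -a                             # turn swap off for the running system
sed -i 's/.*swap.*/#&/' /etc/fstab     # comment out the swap entry so it stays off after reboot
free -m                                # the Swap line should now show 0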

On the master node, initialize the cluster with kubeadm init. The command takes many options:
--apiserver-bind-port # port the apiserver listens on, default 6443
--apiserver-advertise-address # address the apiserver listens on, default 0.0.0.0
--cert-dir # directory to load certificates from, default /etc/kubernetes/pki
--config # path to kubeadm's own configuration file
--ignore-preflight-errors # errors to ignore during the preflight checks, named explicitly, Example: 'IsPrivilegedUser,Swap'
--kubernetes-version # which k8s version to deploy
--pod-network-cidr # the network the pods belong to
--service-cidr # the network the services belong to

kubeadm init \
--kubernetes-version=v1.15.1 \
--ignore-preflight-errors=Swap \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12

[root@master ~]# docker image ls
REPOSITORY                           TAG             IMAGE ID
k8s.gcr.io/kube-apiserver            v1.15.1         68c3eb07bfc3
k8s.gcr.io/kube-proxy                v1.15.1         89a062da739d
k8s.gcr.io/kube-scheduler            v1.15.1         b0b3c4c404da
k8s.gcr.io/kube-controller-manager   v1.15.1         d75082f1d121
k8s.gcr.io/coredns                   1.3.1           eb516548c180
k8s.gcr.io/etcd                      3.3.10          2c4adeb21b4f
k8s.gcr.io/pause                     3.1             da86e6ba6ca1
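As the preflight output below points out, these images can also be pulled ahead of time with kubeadm itself (pulling through the proxy configured earlier is assumed):

kubeadm config images list --kubernetes-version=v1.15.1    # show which images this version needs
kubeadm config images pull --kubernetes-version=v1.15.1    # pull them before running kubeadm init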

Output of initializing the master node

[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.. Latest validated version: 18.09
[WARNING Hostname]: hostname "master" could not be reached
[WARNING Hostname]: hostname "master": lookup master on 223.5.5.5:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [10.0.0.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [10.0.0.5 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.0.5]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.503552 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: xfmp2o.rg9vt1jojg8rcb01
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.5:6443 --token xfmp2o.rg9vt1jojg8rcb01 \
    --discovery-token-ca-cert-hash sha256:8ce2a857cb3383cb3bf509335de43c78e8d569e091caadd74865e2179d625bbc

Run on the master

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get --help # show help
kubectl get cs # check component status (componentstatus)
kubectl get nodes # list the nodes

Run on each node

kubeadm join 10.0.0.5:6443 --token xfmp2o.rg9vt1jojg8rcb01 \
--discovery-token-ca-cert-hash sha256:8ce2a857cb3383cb3bf509335de43c78e8d569e091caadd74865e2179d625bbc \
--ignore-preflight-errors=Swap

[root@node1 ~]# docker image ls # once the following images have been pulled, the join is complete
REPOSITORY                TAG             IMAGE ID
k8s.gcr.io/kube-proxy     v1.15.1         89a062da739d
quay.io/coreos/flannel    v0.11.0-amd64   ff281650a721
k8s.gcr.io/pause          3.1             da86e6ba6ca1
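If the bootstrap token printed by kubeadm init has expired (tokens are valid for 24 hours by default), a fresh join command can be generated on the master:

kubeadm token create --print-join-command    # run on the master; prints a new 'kubeadm join ...' line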

Install the flannel network plugin
Project page:
https://github.com/coreos/flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

[root@master ~]# docker image ls
# once the image below has been pulled, the download is complete
quay.io/coreos/flannel    v0.11.0-amd64   ff281650a721

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   66m   v1.15.1

[root@master ~]# kubectl get pods -n kube-system # the pods in the kube-system namespace
NAME                             READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-cg2rw         1/1     Running   0          66m
coredns-5c98db65d4-qqd2v         1/1     Running   0          66m
etcd-master                      1/1     Running   0          65m
kube-apiserver-master            1/1     Running   0          65m
kube-controller-manager-master   1/1     Running   0          66m
kube-flannel-ds-amd64-wszr5      1/1     Running   0          2m37s
kube-proxy-xw9gm                 1/1     Running   0          66m
kube-scheduler-master            1/1     Running   0          65m

[root@master ~]# kubectl get ns # list the namespaces
NAME STATUS AGE
default Active 72m
kube-node-lease Active 73m
kube-public Active 73m
kube-system Active 73m
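Once the worker nodes have joined and their flannel pods are running, they should show up on the master as well; a quick check (exact ages and versions will differ):

kubectl get nodes -o wide    # node1 and node2 should be listed and eventually reach Ready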