6. K8s cluster upgrade, etcd backup and restore, resource objects and their YAML files, and common maintenance commands

Date: 2022-09-22 17:56:16

1. K8s cluster upgrade

  • A cluster upgrade carries some risk; test and validate it thoroughly before performing it.
  • The upgrade requires stopping services, so upgrade the nodes one at a time in a rolling fashion.

1.1 Prepare the new-version binaries

Check the current version

root@k8-master1:~# /usr/local/bin/kube-apiserver --version
Kubernetes v1.21.0

1.1.1 Download the binary packages for the required version from GitHub, e.g. 1.21.5

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5# ll
总用量 462532
drwxr-xr-x 2 root root 4096 9月 27 15:28 ./
drwx------ 10 root root 4096 9月 27 15:26 ../
-rw-r--r-- 1 root root 29154332 9月 17 16:54 kubernetes-client-linux-amd64.tar.gz
-rw-r--r-- 1 root root 118151109 9月 17 16:54 kubernetes-node-linux-amd64.tar.gz
-rw-r--r-- 1 root root 325784034 9月 17 16:54 kubernetes-server-linux-amd64.tar.gz
-rw-r--r-- 1 root root 525314 9月 17 16:52 kubernetes.tar.gz

1.1.2 Extract and inspect the binaries

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5# for i in `ls`;do tar xf $i;done 

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5# cd kubernetes/server/bin

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ll
总用量 1075600
drwxr-xr-x 2 root root 4096 9月 16 05:22 ./
drwxr-xr-x 3 root root 4096 9月 16 05:27 ../
-rwxr-xr-x 1 root root 50790400 9月 16 05:22 apiextensions-apiserver*
-rwxr-xr-x 1 root root 44851200 9月 16 05:22 kubeadm*
-rwxr-xr-x 1 root root 48738304 9月 16 05:22 kube-aggregator*
-rwxr-xr-x 1 root root 122322944 9月 16 05:22 kube-apiserver*
-rw-r--r-- 1 root root 8 9月 16 05:21 kube-apiserver.docker_tag
-rw------- 1 root root 127114240 9月 16 05:21 kube-apiserver.tar
-rwxr-xr-x 1 root root 116359168 9月 16 05:22 kube-controller-manager*
-rw-r--r-- 1 root root 8 9月 16 05:21 kube-controller-manager.docker_tag
-rw------- 1 root root 121150976 9月 16 05:21 kube-controller-manager.tar
-rwxr-xr-x 1 root root 46645248 9月 16 05:22 kubectl*
-rwxr-xr-x 1 root root 55305384 9月 16 05:22 kubectl-convert*
-rwxr-xr-x 1 root root 118353264 9月 16 05:22 kubelet*
-rwxr-xr-x 1 root root 43360256 9月 16 05:22 kube-proxy*
-rw-r--r-- 1 root root 8 9月 16 05:21 kube-proxy.docker_tag
-rw------- 1 root root 105362432 9月 16 05:21 kube-proxy.tar
-rwxr-xr-x 1 root root 47321088 9月 16 05:22 kube-scheduler*
-rw-r--r-- 1 root root 8 9月 16 05:21 kube-scheduler.docker_tag
-rw------- 1 root root 52112384 9月 16 05:21 kube-scheduler.tar
-rwxr-xr-x 1 root root 1593344 9月 16 05:22 mounter*

1.1.3 Verify the version of the new binaries

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ./kube-apiserver --version
Kubernetes v1.21.5

1.2 Upgrade the master nodes

1.2.1 Reconfigure kube-lb to take the master being upgraded offline

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# for i in  11 12 13 17 18 19;do ssh 192.168.2.${i} -C "sed -i 's/server 192.168.2.11/#server 192.168.2.11/g' /etc/kube-lb/conf/kube-lb.conf; systemctl restart kube-lb.service";done
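For reference, /etc/kube-lb/conf/kube-lb.conf on each node contains an nginx-style stream upstream with one server line per master apiserver; commenting out a server line removes that master from the local load balancer. A rough sketch of the relevant block (the exact layout, ports and options depend on how the cluster was deployed, so treat this as an assumption):

stream {
    upstream backend {
        #server 192.168.2.11:6443    max_fails=2 fail_timeout=3s;   # master1, taken offline for the upgrade
        server 192.168.2.12:6443    max_fails=2 fail_timeout=3s;
        server 192.168.2.13:6443    max_fails=2 fail_timeout=3s;
    }
    server {
        listen 127.0.0.1:6443;
        proxy_pass backend;
    }
}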

1.2.2 Stop services on the master node being upgraded

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ssh k8s-master1 -C 'systemctl stop kube-apiserver.service kube-lb.service kube-controller-manager.service kubelet.service kube-scheduler.service kube-proxy.service'

Check the cluster status from another node

(On the deploy node, the apiserver configured in /root/.kube/config is master1's IP and port, https://192.168.2.11:6443, so kubectl reports a failure to connect to the apiserver at this point.)

root@k8-node1:~# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.2.11 NotReady,SchedulingDisabled master 8d v1.21.0
192.168.2.12 Ready,SchedulingDisabled master 8d v1.21.0
192.168.2.13 Ready,SchedulingDisabled master 112m v1.21.0
192.168.2.17 Ready node 8d v1.21.0
192.168.2.18 Ready node 8d v1.21.0
192.168.2.19 Ready node 96m v1.21.0

1.2.3 Replace the binaries

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# scp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubectl k8s-master1:/usr/local/bin/

1.2.4 Start the services

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ssh k8s-master1 -C 'systemctl start kube-apiserver.service kube-lb.service kube-controller-manager.service kubelet.service kube-scheduler.service kube-proxy.service'

1.2.5 Check the upgraded master node: it is Ready again and now reports the new version v1.21.5

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.2.11 Ready,SchedulingDisabled master 8d v1.21.5
192.168.2.12 Ready,SchedulingDisabled master 8d v1.21.0
192.168.2.13 Ready,SchedulingDisabled master 171m v1.21.0
192.168.2.17 Ready node 8d v1.21.0
192.168.2.18 Ready node 8d v1.21.0
192.168.2.19 Ready node 155m v1.21.0

1.2.6 Bring the upgraded master1 back online

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# for i in  11 12 13 17 18 19;do ssh 192.168.2.${i} -C "sed -i -e 's/#server/server/g'  /etc/kube-lb/conf/kube-lb.conf; systemctl restart kube-lb.service";done

1.2.7 Upgrade the other two master nodes

Take the other two master nodes offline

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# for i in  11 12 13 17 18 19;do ssh 192.168.2.${i} -C "sed -i -e 's/server 192.168.2.12/#server 192.168.2.12/g' -e 's/server 192.168.2.13/#server 192.168.2.13/g'  /etc/kube-lb/conf/kube-lb.conf; systemctl restart kube-lb.service";done

Stop the services on the other two master nodes

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ssh k8s-master2 -C 'systemctl stop kube-apiserver.service kube-lb.service kube-controller-manager.service kubelet.service kube-scheduler.service kube-proxy.service'

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ssh k8s-master3 -C 'systemctl stop kube-apiserver.service kube-lb.service kube-controller-manager.service kubelet.service kube-scheduler.service kube-proxy.service'

Replace the binaries on the other two master nodes

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# scp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubectl k8s-master2:/usr/local/bin/

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# scp kube-apiserver kube-controller-manager kube-scheduler kube-proxy kubelet kubectl k8s-master3:/usr/local/bin/

Start the services on the other two master nodes

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ssh k8s-master2 -C 'systemctl start kube-apiserver.service kube-lb.service kube-controller-manager.service kubelet.service kube-scheduler.service kube-proxy.service'

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ssh k8s-master3 -C 'systemctl start kube-apiserver.service kube-lb.service kube-controller-manager.service kubelet.service kube-scheduler.service kube-proxy.service'

Bring the two master nodes back online

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# for i in  11 12 13 17 18 19;do ssh 192.168.2.${i} -C "sed -i -e 's/#server/server/g'  /etc/kube-lb/conf/kube-lb.conf; systemctl restart kube-lb.service";done

1.2.8 All master nodes are now upgraded

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.2.11 Ready,SchedulingDisabled master 8d v1.21.5
192.168.2.12 Ready,SchedulingDisabled master 8d v1.21.5
192.168.2.13 Ready,SchedulingDisabled master 177m v1.21.5
192.168.2.17 Ready node 8d v1.21.0
192.168.2.18 Ready node 8d v1.21.0
192.168.2.19 Ready node 161m v1.21.0

1.3 Upgrade the worker (node) nodes

Note:

Because the node's services have to be stopped, in a production environment you should first evict (drain) the pods on the node being upgraded to other nodes, as sketched below; otherwise any service that runs as a single pod will suffer an outage.
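A hedged sketch of draining a node before the upgrade and returning it to scheduling afterwards (the flags shown are the common ones; whether pods can actually be evicted depends on your workloads and any PodDisruptionBudgets):

# evict the pods from the node that will be upgraded
kubectl drain 192.168.2.17 --ignore-daemonsets --delete-emptydir-data
# ... upgrade the node ...
# allow scheduling on the node again
kubectl uncordon 192.168.2.17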

1.3.1 Stop services on node1

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ssh k8s-node1 -C 'systemctl stop kubelet.service kube-proxy.service'

1.3.2 Replace the binaries

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# scp kubectl kube-proxy kubelet k8s-node1:/usr/local/bin/

1.3.3 Start services on node1

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# ssh k8s-node1 -C 'systemctl start kubelet.service kube-proxy.service'

1.3.4 Upgrade the remaining worker nodes with the same steps

ssh k8s-node2 -C 'systemctl stop kubelet.service kube-proxy.service'
scp kubectl kube-proxy kubelet k8s-node2:/usr/local/bin/
ssh k8s-node2 -C 'systemctl start kubelet.service kube-proxy.service'

ssh k8s-node3 -C 'systemctl stop kubelet.service kube-proxy.service'
scp kubectl kube-proxy kubelet k8s-node3:/usr/local/bin/
ssh k8s-node3 -C 'systemctl start kubelet.service kube-proxy.service'

1.3.5 Check node status

root@k8-deploy:~/k8s-update/kubernetes-v1.21.5/kubernetes/server/bin# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.2.11 Ready,SchedulingDisabled master 9d v1.21.5
192.168.2.12 Ready,SchedulingDisabled master 9d v1.21.5
192.168.2.13 Ready,SchedulingDisabled master 3h38m v1.21.5
192.168.2.17 Ready node 8d v1.21.5
192.168.2.18 Ready node 8d v1.21.5
192.168.2.19 Ready node 3h22m v1.21.5

2. Common etcdctl commands, etcd backup and restore

2.1 Common etcdctl commands

Use help to see the other arguments and options of each command

root@k8-etcd1:~# etcdctl help
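To avoid repeating the endpoint and TLS flags on every call, etcdctl also reads ETCDCTL_-prefixed environment variables; a small sketch using the certificate paths from this cluster:

export ETCDCTL_API=3
export ETCDCTL_ENDPOINTS=https://192.168.2.14:2379,https://192.168.2.15:2379,https://192.168.2.16:2379
export ETCDCTL_CACERT=/etc/kubernetes/ssl/ca.pem
export ETCDCTL_CERT=/etc/kubernetes/ssl/etcd.pem
export ETCDCTL_KEY=/etc/kubernetes/ssl/etcd-key.pem
etcdctl endpoint health   # the long flag list is no longer needed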

2.1.1 List cluster members

root@k8-etcd1:~# etcdctl member list -w table
+------------------+---------+-------------------+---------------------------+---------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+-------------------+---------------------------+---------------------------+------------+
| 610581aec980ff7 | started | etcd-192.168.2.14 | https://192.168.2.14:2380 | https://192.168.2.14:2379 | false |
| d250eba9d73e634f | started | etcd-192.168.2.15 | https://192.168.2.15:2380 | https://192.168.2.15:2379 | false |
| de4be0d409c6cd2d | started | etcd-192.168.2.16 | https://192.168.2.16:2380 | https://192.168.2.16:2379 | false |
+------------------+---------+-------------------+---------------------------+---------------------------+------------+

2.1.2 Check cluster health

root@k8-etcd1:~# for i in `seq 14 16`;do ETCDCTL_API=3 /usr/local/bin/etcdctl --endpoints=https://192.168.2.${i}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health;done
https://192.168.2.14:2379 is healthy: successfully committed proposal: took = 24.70098ms
https://192.168.2.15:2379 is healthy: successfully committed proposal: took = 29.842108ms
https://192.168.2.16:2379 is healthy: successfully committed proposal: took = 25.316645ms
root@k8-etcd1:~#

2.1.3 Check the status of each cluster member

root@k8-etcd1:~# for i in `seq 14 16`;do ETCDCTL_API=3 /usr/local/bin/etcdctl -w table --endpoints=https://192.168.2.${i}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint status;done
+---------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.2.14:2379 | 610581aec980ff7 | 3.4.13 | 3.7 MB | false | false | 3 | 2001758 | 2001758 | |
+---------------------------+-----------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.2.15:2379 | d250eba9d73e634f | 3.4.13 | 3.8 MB | false | false | 3 | 2001758 | 2001758 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.2.16:2379 | de4be0d409c6cd2d | 3.4.13 | 3.8 MB | true | false | 3 | 2001758 | 2001758 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

2.2 Adding, deleting, updating and querying data in etcd

2.2.1 List the keys of all data

root@k8-etcd1:~# etcdctl get / --prefix --keys-only

/calico/ipam/v2/assignment/ipv4/block/10.100.109.192-26

/calico/ipam/v2/assignment/ipv4/block/10.100.112.0-26
....

2.2.2 View the three manually created test pods

root@k8-etcd1:~# etcdctl get / --prefix --keys-only |grep test
/calico/resources/v3/projectcalico.org/workloadendpoints/default/k8--node1-k8s-net--test2-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/default/k8--node2-k8s-net--test1-eth0
/calico/resources/v3/projectcalico.org/workloadendpoints/default/k8--node2-k8s-net--test3-eth0
/registry/pods/default/net-test1
/registry/pods/default/net-test2
/registry/pods/default/net-test3

2.2.3 Read the data for a specific key

root@k8-etcd1:~# etcdctl get /registry/pods/default/net-test1
...
root@k8-etcd1:~# etcdctl get /calico/resources/v3/projectcalico.org/workloadendpoints/default/k8--node2-k8s-net--test1-eth0
...

2.2.4 Count the total number of pods in the cluster by filtering keys for the pod keyword

root@k8-etcd1:~# etcdctl get / --prefix --keys-only |grep pod |wc -l
29

2.2.5 Add, read, update and delete data

# Add data: put
root@k8-etcd1:~/etcd_bak# etcdctl put /test-key1 aaa
OK
root@k8-etcd1:~/etcd_bak# etcdctl put /test-key2 bbb
OK

# Read data: get
root@k8-etcd1:~/etcd_bak# etcdctl get /test-key1
/test-key1
aaa
root@k8-etcd1:~/etcd_bak# etcdctl get /test-key2
/test-key2
bbb

# Update data: put again
root@k8-etcd1:~/etcd_bak# etcdctl put /test-key1 ccc
OK
root@k8-etcd1:~/etcd_bak# etcdctl get /test-key1
/test-key1
ccc

# Delete data: del
root@k8-etcd1:~/etcd_bak# etcdctl del /test-key1
1
root@k8-etcd1:~/etcd_bak# etcdctl del /test-key2
1

2.3 Testing the etcd watch mechanism

2.3.1 Watch a key in one terminal

root@k8-etcd1:~# etcdctl watch /test1

2.3.2 Put data from another terminal

root@k8-etcd1:~/etcd_bak# etcdctl put /test1 aaa
OK
root@k8-etcd1:~/etcd_bak# etcdctl put /test1 bbb
OK
root@k8-etcd1:~/etcd_bak# etcdctl put /test1 ccc
OK

2.3.3 The watching terminal displays the put data

root@k8-etcd1:~# etcdctl watch /test1
PUT
/test1
aaa
PUT
/test1
bbb
PUT
/test1
ccc

2.4 etcd backup and restore

Because the etcd cluster replicates data to every member and all members hold identical data, it is enough to back up a single member.

2.4.1 Backup

Backup command

etcdctl snapshot save <filename> [flags]

root@k8-etcd1:~/etcd_bak# etcdctl snapshot save etcd_dbbak.db.`date +%Y%m%d_%H%M%S`
{"level":"info","ts":1632887285.483879,"caller":"snapshot/v3_snapshot.go:119","msg":"created temporary db file","path":"etcd_dbbak.db.20210929_114805.part"}
{"level":"info","ts":"2021-09-29T11:48:05.484+0800","caller":"clientv3/maintenance.go:200","msg":"opened snapshot stream; downloading"}
{"level":"info","ts":1632887285.4850876,"caller":"snapshot/v3_snapshot.go:127","msg":"fetching snapshot","endpoint":"127.0.0.1:2379"}
{"level":"info","ts":"2021-09-29T11:48:05.572+0800","caller":"clientv3/maintenance.go:208","msg":"completed snapshot read; closing"}
{"level":"info","ts":1632887285.6077516,"caller":"snapshot/v3_snapshot.go:142","msg":"fetched snapshot","endpoint":"127.0.0.1:2379","size":"3.7 MB","took":0.122942605}
{"level":"info","ts":1632887285.6081467,"caller":"snapshot/v3_snapshot.go:152","msg":"saved","path":"etcd_dbbak.db.20210929_114805"}
Snapshot saved at etcd_dbbak.db.20210929_114805

root@k8-etcd1:~/etcd_bak# ll
总用量 3672
drwxr-xr-x 2 root root 4096 9月 29 11:48 ./
drwx------ 8 root root 4096 9月 29 10:32 ../
-rw------- 1 root root 3747872 9月 29 11:48 etcd_dbbak.db.20210929_114805
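In practice the snapshot command is usually wrapped in a small script and scheduled with cron so that dated backups are kept automatically; a minimal sketch (the backup directory and retention count are assumptions):

#!/bin/bash
# /usr/local/bin/etcd_backup.sh - take a dated etcd snapshot and prune old copies
BACKUP_DIR=/root/etcd_bak
mkdir -p "${BACKUP_DIR}"
etcdctl snapshot save "${BACKUP_DIR}/etcd_dbbak.db.$(date +%Y%m%d_%H%M%S)"
# keep only the 7 most recent snapshots
ls -1t "${BACKUP_DIR}"/etcd_dbbak.db.* 2>/dev/null | tail -n +8 | xargs -r rm -f

An example cron entry would be 0 2 * * * /usr/local/bin/etcd_backup.sh to take one snapshot per night.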

2.4.2 Restore

USAGE:

etcdctl snapshot restore <filename> [options] [flags]

OPTIONS:

--data-dir="" Path to the data directory

root@k8-etcd1:~/etcd_bak# etcdctl snapshot restore etcd_dbbak.db.20210929_114805 --data-dir=/tmp/etcd
{"level":"info","ts":1632887880.7857609,"caller":"snapshot/v3_snapshot.go:296","msg":"restoring snapshot","path":"etcd_dbbak.db.20210929_114805","wal-dir":"/tmp/etcd/member/wal","data-dir":"/tmp/etcd","snap-dir":"/tmp/etcd/member/snap"}
{"level":"info","ts":1632887880.880982,"caller":"mvcc/kvstore.go:380","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1587726}
{"level":"info","ts":1632887880.9017625,"caller":"membership/cluster.go:392","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"0","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]}
{"level":"info","ts":1632887880.9249938,"caller":"snapshot/v3_snapshot.go:309","msg":"restored snapshot","path":"etcd_dbbak.db.20210929_114805","wal-dir":"/tmp/etcd/member/wal","data-dir":"/tmp/etcd","snap-dir":"/tmp/etcd/member/snap"}

root@k8-etcd1:~/etcd_bak# ls /tmp/etcd/member/snap
0000000000000001-0000000000000001.snap db

2.4.3 Update the etcd configuration file

Edit the etcd unit file /etc/systemd/system/etcd.service and change the data directory:

 --data-dir=/tmp/etcd

Restart the etcd service

root@k8-etcd1:~/etcd_bak# systemctl restart etcd.service
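The restore above produces a single-member data directory with default membership settings. For a multi-member cluster, each member is normally restored from the same snapshot with its own member flags before etcd is started again; a hedged sketch for the first member (the name, peer URLs, cluster token and data directory must match the values in your etcd.service, so treat them as assumptions):

etcdctl snapshot restore etcd_dbbak.db.20210929_114805 \
  --name etcd-192.168.2.14 \
  --initial-cluster "etcd-192.168.2.14=https://192.168.2.14:2380,etcd-192.168.2.15=https://192.168.2.15:2380,etcd-192.168.2.16=https://192.168.2.16:2380" \
  --initial-cluster-token etcd-cluster-0 \
  --initial-advertise-peer-urls https://192.168.2.14:2380 \
  --data-dir /var/lib/etcd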

3. Resource objects

3.1 Deployment pod controller

3.1.1 Create a Deployment

root@k8-deploy:~/k8s-yaml/controllers/deployments# vim nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

In this example:

  • A Deployment named nginx-deployment (indicated by the .metadata.name field) is created.
  • The Deployment creates three replicated pods (indicated by the replicas field).
  • The selector field defines how the Deployment finds the pods it should manage. Here it selects the label defined in the pod template (app: nginx). In API version apps/v1, a Deployment's label selector is immutable after creation.

Notes:

  • The spec.selector.matchLabels field is a map of {key,value} pairs. Each {key,value} entry in matchLabels is equivalent to an element of matchExpressions whose key field is "key", whose operator is "In", and whose values array contains only "value". All of the conditions given in matchLabels and matchExpressions must be satisfied for a match (see the CLI sketch below).
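The same selector semantics can be tried from the command line; a small sketch showing that the equality form and the set-based (In) form select the same pods:

kubectl get pods -l app=nginx           # equality-based, like matchLabels
kubectl get pods -l 'app in (nginx)'    # set-based, like matchExpressions with operator In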

The template field contains the following sub-fields:

  • The pods are labeled app: nginx via the labels field.
  • The pod template spec (the .template.spec field) tells the pods to run one nginx container using the nginx image, version 1.14.2, from Docker Hub.
  • One container is created and named nginx via the name field.

Create the Deployment by running the following command

root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created

Note: you can set the --record flag to write the executed command into the resource annotations. This is useful for later inspection, for example to see which command produced each Deployment revision.

Check whether the Deployment has been created
root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/3 3 0 9s

When inspecting the Deployments in the cluster, the displayed fields are:

  1. NAME lists the names of the Deployments in the cluster.
  2. READY shows how many replicas of the application are available, in the form ready/desired.
  3. UP-TO-DATE shows the number of replicas that have been updated to reach the desired state.
  4. AVAILABLE shows how many replicas of the application are available to users.
  5. AGE shows how long the application has been running.
  6. Note that the desired number of replicas is 3, as set by the .spec.replicas field.

Run kubectl get deployments again a few seconds later

root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 45s
To see the ReplicaSet (rs) created by the Deployment
root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-66b6c48dd5 3 3 3 11m

The ReplicaSet output contains the following fields:

NAME lists the names of the ReplicaSets in the namespace;

  1. DESIRED shows the desired number of replicas of the application, the value defined when the Deployment was created. This is the desired state;
  2. CURRENT shows how many replicas are currently running;
  3. READY shows how many replicas of the application can serve users;
  4. AGE shows how long the application has been running.
  5. Note that the ReplicaSet name is always formatted as [Deployment name]-[random string], where the random string is generated using the pod-template-hash as a seed.
View the labels automatically generated for each pod
root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-66b6c48dd5-hdxlg 1/1 Running 0 44m app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-hnmgw 1/1 Running 0 44m app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-p28z2 1/1 Running 0 44m app=nginx,pod-template-hash=66b6c48dd5

Notes:

  • You must specify an appropriate selector and pod template labels in the Deployment (in this example app: nginx). Do not let the labels or selector overlap with other controllers (including other Deployments and StatefulSets). Kubernetes does not prevent you from doing so, but if multiple controllers have overlapping selectors they may conflict and behave unpredictably.
  • The Deployment controller adds the pod-template-hash label to every ReplicaSet that the Deployment creates or adopts.

    This label ensures that the child ReplicaSets of a Deployment do not overlap. The label is generated by hashing the ReplicaSet's PodTemplate; the resulting hash is added to the ReplicaSet selector, the pod template labels, and any existing pods the ReplicaSet may own.

3.1.2 Update a Deployment

Notes:

  • A Deployment rollout is triggered only when the Deployment's pod template (.spec.template) is changed, for example when the template's labels or container image are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
Update the image of the nginx pods, e.g. from nginx:1.14.2 to nginx:1.16.1
root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record
deployment.apps/nginx-deployment image updated
Watch the rollout progress
root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl rollout status deployment/nginx-deployment
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 2 out of 3 new replicas have been updated...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "nginx-deployment" rollout to finish: 1 old replicas are pending termination...
deployment "nginx-deployment" successfully rolled out
After the rollout succeeds, check the Deployment
root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 72m
View the ReplicaSets controlled by the Deployment

You can see that the Deployment updated the pods by creating a new ReplicaSet, scaling it up to 3 replicas, and scaling the old ReplicaSet down to 0.

root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-559d658b74 3 3 3 5m
nginx-deployment-66b6c48dd5 0 0 0 74m
View the updated pods
root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl get pod --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-deployment-559d658b74-4npbn 1/1 Running 0 6m29s app=nginx,pod-template-hash=559d658b74
nginx-deployment-559d658b74-4wqbp 1/1 Running 0 6m17s app=nginx,pod-template-hash=559d658b74
nginx-deployment-559d658b74-9wrpj 1/1 Running 0 6m41s app=nginx,pod-template-hash=559d658b74

Notes:

  • During an update the Deployment ensures that only a limited number of pods are taken down. By default it keeps at least 75% of the desired number of pods running (max unavailable 25%).
  • The Deployment also ensures that only a limited number of pods are created above the desired count. By default it allows at most 25% more pods than desired (max surge 25%); see the patch sketch after this list.
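These defaults come from the Deployment's rolling update strategy and can be tuned per Deployment; a hedged sketch using kubectl patch (the values are only illustrative):

kubectl patch deployment nginx-deployment \
  -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":0}}}}'

With maxUnavailable=0 and maxSurge=1 the rollout replaces pods one at a time without ever dropping below the desired replica count.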
Get more information about the Deployment
root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl describe deployments
Name: nginx-deployment
Namespace: default
CreationTimestamp: Wed, 29 Sep 2021 16:01:55 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 2
kubernetes.io/change-cause: kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record=true
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.16.1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-559d658b74 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 9m11s deployment-controller Scaled up replica set nginx-deployment-559d658b74 to 1
Normal ScalingReplicaSet 8m59s deployment-controller Scaled down replica set nginx-deployment-66b6c48dd5 to 2
Normal ScalingReplicaSet 8m59s deployment-controller Scaled up replica set nginx-deployment-559d658b74 to 2
Normal ScalingReplicaSet 8m47s deployment-controller Scaled down replica set nginx-deployment-66b6c48dd5 to 1
Normal ScalingReplicaSet 8m47s deployment-controller Scaled up replica set nginx-deployment-559d658b74 to 3
Normal ScalingReplicaSet 8m25s deployment-controller Scaled down replica set nginx-deployment-66b6c48dd5 to 0

3.1.3 Roll back a Deployment

Notes

  • Whenever a Deployment rollout is triggered, a new revision of the Deployment is created. This means a new revision is created only when the Deployment's pod template (.spec.template) changes, for example when the template's labels or container image change. Other updates, such as scaling the Deployment, do not create a new revision, so manual or automatic scaling can happen at the same time. In other words, when you roll back to an earlier revision, only the pod template part of the Deployment is rolled back.
View the rollout history
root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl rollout history deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record=true

View more detailed information about a specific revision

root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
deployment.apps/nginx-deployment with revision #2
Pod Template:
Labels: app=nginx
pod-template-hash=559d658b74
Annotations: kubernetes.io/change-cause: kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1 --record=true
Containers:
nginx:
Image: nginx:1.16.1
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Roll back to the previous revision
kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=1
deployment.apps/nginx-deployment rolled back

View the Deployment after the rollback

root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl get deployment nginx-deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 91m

View the details of the Deployment

root@k8-deploy:~/k8s-yaml/controllers/deployments# kubectl describe deployment nginx-deployment
Name: nginx-deployment
Namespace: default
CreationTimestamp: Wed, 29 Sep 2021 16:01:55 +0800
Labels: app=nginx
Annotations: deployment.kubernetes.io/revision: 3
Selector: app=nginx
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=nginx
Containers:
nginx:
Image: nginx:1.14.2
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: nginx-deployment-66b6c48dd5 (3/3 replicas created)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ScalingReplicaSet 22m deployment-controller Scaled up replica set nginx-deployment-559d658b74 to 1
Normal ScalingReplicaSet 22m deployment-controller Scaled down replica set nginx-deployment-66b6c48dd5 to 2
Normal ScalingReplicaSet 22m deployment-controller Scaled up replica set nginx-deployment-559d658b74 to 2
Normal ScalingReplicaSet 21m deployment-controller Scaled down replica set nginx-deployment-66b6c48dd5 to 1
Normal ScalingReplicaSet 21m deployment-controller Scaled up replica set nginx-deployment-559d658b74 to 3
Normal ScalingReplicaSet 21m deployment-controller Scaled down replica set nginx-deployment-66b6c48dd5 to 0
Normal ScalingReplicaSet 17s deployment-controller Scaled up replica set nginx-deployment-66b6c48dd5 to 1
Normal ScalingReplicaSet 15s deployment-controller Scaled down replica set nginx-deployment-559d658b74 to 2
Normal ScalingReplicaSet 15s deployment-controller Scaled up replica set nginx-deployment-66b6c48dd5 to 2
Normal ScalingReplicaSet 13s (x2 over 91m) deployment-controller Scaled up replica set nginx-deployment-66b6c48dd5 to 3
Normal ScalingReplicaSet 13s deployment-controller Scaled down replica set nginx-deployment-559d658b74 to 1
Normal ScalingReplicaSet 11s deployment-controller Scaled down replica set nginx-deployment-559d658b74 to 0

3.1.4 Scale a Deployment

root@k8-deploy:~# kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
deployment.apps/nginx-deployment scaled

root@k8-deploy:~# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 10/10 10 10 97m

root@k8-deploy:~# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-559d658b74 0 0 0 27m
nginx-deployment-66b6c48dd5 10 10 10 97m

root@k8-deploy:~# kubectl get pod -o wide --show-labels
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
nginx-deployment-66b6c48dd5-7tl82 1/1 Running 0 6m33s 10.100.112.5 192.168.2.19 <none> <none> app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-b8wvd 1/1 Running 0 69s 10.100.172.196 192.168.2.17 <none> <none> app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-cqmzm 1/1 Running 0 6m31s 10.100.112.6 192.168.2.19 <none> <none> app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-h9bhf 1/1 Running 0 69s 10.100.224.71 192.168.2.18 <none> <none> app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-jzhkt 1/1 Running 0 69s 10.100.224.72 192.168.2.18 <none> <none> app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-kdvss 1/1 Running 0 6m29s 10.100.112.7 192.168.2.19 <none> <none> app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-kxsvq 1/1 Running 0 69s 10.100.172.197 192.168.2.17 <none> <none> app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-tlckf 1/1 Running 0 69s 10.100.112.8 192.168.2.19 <none> <none> app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-wfvmh 1/1 Running 0 69s 10.100.224.73 192.168.2.18 <none> <none> app=nginx,pod-template-hash=66b6c48dd5
nginx-deployment-66b6c48dd5-zmsfm 1/1 Running 0 69s 10.100.112.9 192.168.2.19 <none> <none> app=nginx,pod-template-hash=66b6c48dd5

3.1.5 Pause and resume a Deployment

You can pause a Deployment before triggering one or more updates, and then resume it.

This lets you apply multiple fixes between pausing and resuming without triggering unnecessary rollouts.

View the existing Deployment and ReplicaSet
root@k8-deploy:~# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 4s
root@k8-deploy:~# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-66b6c48dd5 3 3 3 7s
Pause the rollout
root@k8-deploy:~# kubectl rollout pause deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment paused
Apply one of several changes to the Deployment, e.g. update the pod image version
root@k8-deploy:~# kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.17.10
deployment.apps/nginx-deployment image updated

Changes made to the Deployment while it is paused do not immediately trigger a rollout

root@k8-deploy:~# kubectl rollout history deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
Continue modifying the Deployment, e.g. set resource limits for the pods
root@k8-deploy:~# kubectl set resources deployment.v1.apps/nginx-deployment -c=nginx --limits=cpu=200m,memory=512Mi
deployment.apps/nginx-deployment resource requirements updated

# At this point the Deployment still does not trigger a rollout
root@k8-deploy:~# kubectl rollout history deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
Resume the Deployment
root@k8-deploy:~# kubectl rollout resume deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment resumed
Watch the new ReplicaSet being created with all of the applied updates
root@k8-deploy:~# kubectl get rs -w
NAME DESIRED CURRENT READY AGE
nginx-deployment-66b6c48dd5 2 2 2 6m26s
nginx-deployment-fcfbff66c 2 2 1 6s
nginx-deployment-fcfbff66c 2 2 2 16s
nginx-deployment-66b6c48dd5 1 2 2 6m36s
nginx-deployment-fcfbff66c 3 2 2 16s
nginx-deployment-66b6c48dd5 1 2 2 6m36s
nginx-deployment-fcfbff66c 3 2 2 16s
nginx-deployment-fcfbff66c 3 3 2 16s
nginx-deployment-66b6c48dd5 1 1 1 6m36s
nginx-deployment-fcfbff66c 3 3 3 28s
nginx-deployment-66b6c48dd5 0 1 1 6m48s
nginx-deployment-66b6c48dd5 0 1 1 6m48s
nginx-deployment-66b6c48dd5 0 0 0 6m48s

root@k8-deploy:~# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-deployment-66b6c48dd5 0 0 0 7m55s
nginx-deployment-fcfbff66c 3 3 3 95s

root@k8-deploy:~# kubectl rollout history deployment.v1.apps/nginx-deployment
deployment.apps/nginx-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>

3.2 service

3.2.1 ClusterIP Service

A ClusterIP service is the default way of exposing a service in a Kubernetes cluster. It can only be used for communication inside the cluster and can be accessed by the pods.

Write the YAML file

root@k8-deploy:~/k8s-yaml/service# vim clusterip-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: mysvc-nginx-80
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  type: ClusterIP
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Create the svc and deployment
root@k8-deploy:~/k8s-yaml/service# kubectl apply -f clusterip-svc.yml
Check that they were created successfully
root@k8-deploy:~/k8s-yaml/service# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
mysvc-nginx-80 ClusterIP 10.0.124.196 <none> 80/TCP 6s

root@k8-deploy:~/k8s-yaml/service# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-67d445c4fc-2zscj 1/1 Running 0 57s
nginx-deployment-67d445c4fc-qk8gt 1/1 Running 0 51s
nginx-deployment-67d445c4fc-wqr48 1/1 Running 0 54s

root@k8-deploy:~/k8s-yaml/service# kubectl get endpoints
NAME ENDPOINTS AGE
mysvc-nginx-80 10.100.112.26:80,10.100.172.204:80,10.100.224.83:80 23s
Exec into one of the pods and test access via the service IP
root@k8-deploy:~/k8s-yaml/service# kubectl exec nginx-deployment-67d445c4fc-2zscj -it -- bash

root@nginx-deployment-67d445c4fc-2zscj:/# curl -I 10.0.124.196
HTTP/1.1 200 OK
Server: nginx/1.14.2
Date: Sat, 09 Oct 2021 09:53:03 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 04 Dec 2018 14:44:49 GMT
Connection: keep-alive
ETag: "5c0692e1-264"
Accept-Ranges: bytes
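Inside the cluster the service can also be reached by its DNS name instead of the ClusterIP; a hedged example assuming cluster DNS is running with the default cluster.local domain:

# run from inside any pod in the cluster
curl -I http://mysvc-nginx-80.default.svc.cluster.local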

3.2.2 NodePort Service

To access a cluster-internal service from outside the cluster, you can use a NodePort service. A specified port is opened on every node in the cluster that runs kube-proxy; all traffic sent to that port is then forwarded to the real backends of the service.

Write the NodePort Service YAML file

root@k8-deploy:~/k8s-yaml/service# vim nodeport-svc.yml
apiVersion: v1
kind: Service
metadata:
  name: nodeport-svc-nginx-30080
spec:
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
    protocol: TCP
  type: NodePort
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Create the service
root@k8-deploy:~/k8s-yaml/service# kubectl apply -f nodeport-svc.yml
Check that it was created successfully
root@k8-deploy:~/k8s-yaml/service# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nodeport-svc-nginx-30080 NodePort 10.0.97.248 <none> 80:30080/TCP 7s

root@k8-deploy:~/k8s-yaml/service# kubectl get endpoints
NAME ENDPOINTS AGE
nodeport-svc-nginx-30080 10.100.112.27:80,10.100.112.28:80,10.100.112.29:80 20s
Test by accessing node IP + port
root@k8-deploy:~/k8s-yaml/service# curl -I 'http://192.168.2.17:30080'
HTTP/1.1 200 OK
...

root@k8-deploy:~/k8s-yaml/service# curl -I 'http://192.168.2.18:30080'
HTTP/1.1 200 OK
...

root@k8-deploy:~/k8s-yaml/service# curl -I 'http://192.168.2.19:30080'
HTTP/1.1 200 OK
...
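To confirm which nodePort was allocated (nodePort values must fall inside the apiserver's service node port range, 30000-32767 by default), a small sketch:

kubectl get svc nodeport-svc-nginx-30080 -o jsonpath='{.spec.ports[0].nodePort}'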

3.3 volume

3.3.1 emptyDir

When a pod is assigned to a node, an emptyDir volume is created for it, and the volume exists for as long as the pod runs on that node. As the name says, the volume starts out empty. The containers in the pod can all read and write the same files in the emptyDir volume, even though the volume may be mounted at the same or different paths in each container. When the pod is removed from the node for any reason, the data in the emptyDir volume is deleted permanently.

  • Note: a container crash does not remove the pod from its node, so the data in an emptyDir volume is safe across container crashes.

Some uses of emptyDir (see the sketch after this list):

  1. Scratch space, for example for a disk-based merge sort.
  2. Checkpointing long computations so that they can resume from the pre-crash state.
  3. Holding the files that a content-manager container fetches while a web-server container serves them.
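emptyDir can also be backed by memory (tmpfs) and given a size limit; a minimal hedged sketch applied inline (the pod name and values are only illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cache-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir:
      medium: Memory
      sizeLimit: 500Mi
EOF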

3.3.2 emptyDir volume example

Deployment YAML file with an emptyDir volume

root@k8-deploy:~/k8s-yaml/volume# vim emptydir.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /cache
          name: cache-volume
      volumes:
      - name: cache-volume
        emptyDir: {}
Create the deployment
root@k8-deploy:~/k8s-yaml/volume# kubectl apply -f emptydir.yml
deployment.apps/nginx-deployment created
View the created deployment
root@k8-deploy:~/k8s-yaml/volume# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-d9cc57b54-dmxcn 1/1 Running 0 87s 10.100.112.30 192.168.2.19 <none> <none>
Exec into the pod and create a file in the mounted directory to test
root@k8-deploy:~/k8s-yaml/volume# kubectl exec nginx-deployment-d9cc57b54-dmxcn -it -- bash
root@nginx-deployment-d9cc57b54-dmxcn:/# cd /cache/
root@nginx-deployment-d9cc57b54-dmxcn:/cache# ls
root@nginx-deployment-d9cc57b54-dmxcn:/cache# echo 'test 123' > a.txt
On the node running the pod, find the created file, then modify it to test
root@k8-node3:/var/lib/docker# cd /var/lib/kubelet/pods/
root@k8-node3:/var/lib/kubelet/pods# ll
总用量 16
drwxr-x--- 4 root root 4096 10月 9 19:04 ./
drwxr-xr-x 7 root root 4096 9月 27 14:39 ../
drwxr-x--- 5 root root 4096 10月 9 19:04 05eb8d5d-0927-4bf2-aa96-ae30c2dc7025/
drwxr-x--- 5 root root 4096 9月 27 14:39 31c4e072-77a9-4336-8339-a4026bde119b/
root@k8-node3:/var/lib/kubelet/pods# find . -name a.txt
./05eb8d5d-0927-4bf2-aa96-ae30c2dc7025/volumes/kubernetes.io~empty-dir/cache-volume/a.txt
root@k8-node3:/var/lib/kubelet/pods# cat ./05eb8d5d-0927-4bf2-aa96-ae30c2dc7025/volumes/kubernetes.io~empty-dir/cache-volume/a.txt
test 123
root@k8-node3:/var/lib/kubelet/pods# echo 'test 456' >> ./05eb8d5d-0927-4bf2-aa96-ae30c2dc7025/volumes/kubernetes.io~empty-dir/cache-volume/a.txt
The file inside the pod reflects the change
root@nginx-deployment-d9cc57b54-dmxcn:/# cat /cache/a.txt
test 123
test 456
When the deployment is deleted, the pod is deleted and the volume is deleted along with it.
root@k8-deploy:~/k8s-yaml/volume# kubectl delete -f emptydir.yml
deployment.apps "nginx-deployment" deleted
root@k8-node3:/var/lib/kubelet/pods# find . -name a.txt

root@k8-node3:/var/lib/kubelet/pods#  ll
总用量 12
drwxr-x--- 3 root root 4096 10月 9 19:11 ./
drwxr-xr-x 7 root root 4096 9月 27 14:39 ../
drwxr-x--- 5 root root 4096 9月 27 14:39 31c4e072-77a9-4336-8339-a4026bde119b/

3.3.3 hostPath volume

A hostPath volume mounts a file or directory from the host node's filesystem into your pod. When the pod is deleted, the files on the host node are not deleted.
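hostPath also accepts an optional type field that controls how the host path is checked or created; for example DirectoryOrCreate creates the directory on the node if it does not already exist, avoiding the need to create /tmp/html by hand. A hedged inline sketch (the pod name is illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: nginx-html-volume
  volumes:
  - name: nginx-html-volume
    hostPath:
      path: /tmp/html
      type: DirectoryOrCreate
EOF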

3.3.4 hostPath volume example

YAML file

root@k8-deploy:~/k8s-yaml/volume# vim hostpath.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: nginx-html-volume
      volumes:
      - name: nginx-html-volume
        hostPath:
          path: /tmp/html
Create the deployment
root@k8-deploy:~/k8s-yaml/volume# kubectl apply -f hostpath.yml
deployment.apps/nginx-deployment created
Check which node the pod is running on
root@k8-deploy:~/k8s-yaml/volume# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-6b5bc8c669-xt5kz 1/1 Running 0 2m7s 10.100.112.32 192.168.2.19 <none> <none>
Create a file in the local directory on the node where the pod runs
root@k8-node3:~# echo 'nginx-html-volume test' > /tmp/html/test.html
Check whether the file exists in the mounted directory inside the pod
root@k8-deploy:~/k8s-yaml/volume# kubectl exec nginx-deployment-6b5bc8c669-xt5kz -- cat /usr/share/nginx/html/test.html
nginx-html-volume test
Exec into the pod and modify the file content to test
root@k8-deploy:~/k8s-yaml/volume# kubectl exec nginx-deployment-6b5bc8c669-xt5kz -it -- bash

root@nginx-deployment-6b5bc8c669-xt5kz:/# echo "aaaa" >> /usr/share/nginx/html/test.html
root@nginx-deployment-6b5bc8c669-xt5kz:/# cat !$
cat /usr/share/nginx/html/test.html
nginx-html-volume test
aaaa
Check again whether the local file on the pod's node has changed
root@k8-node3:~# cat /tmp/html/test.html
nginx-html-volume test
aaaa
After the pod is deleted, the local file on the node is not deleted

root@k8-deploy:~/k8s-yaml/volume# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1 1 1 17m

root@k8-deploy:~/k8s-yaml/volume# kubectl delete deployment nginx-deployment
deployment.apps "nginx-deployment" deleted
root@k8-node3:~# cat /tmp/html/test.html
nginx-html-volume test
aaaa

3.3.5 nfs volume

An nfs volume mounts an NFS (Network File System) share into your pod. Unlike emptyDir, which is erased when the pod is removed, the contents of an nfs volume are preserved when the pod is deleted; the volume is merely unmounted. This means an nfs volume can be pre-populated with data, and that data can be shared between pods.
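Before creating pods that use nfs volumes it is worth checking that each node can actually mount the export, since the kubelet performs the mount on the node itself; a hedged sketch, assuming the NFS client tools come from the nfs-common package on Ubuntu nodes:

apt install -y nfs-common                        # NFS client tools on every node
showmount -e 192.168.2.10                        # list the exports of the NFS server
mount -t nfs 192.168.2.10:/data/nfs_data /mnt    # test mount
umount /mnt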

3.3.6 nfs volume example

Install and configure the nfs-server service

apt install nfs-kernel-server

vim /etc/exports
/data/nfs_data *(rw,sync,no_root_squash)

mkdir -p /data/nfs_data
systemctl restart nfs-kernel-server
systemctl enable nfs-kernel-server

root@k8-node3:~# showmount -e 192.168.2.10
Export list for 192.168.2.10:
/data/nfs_data *
nfs volume deployment YAML file

root@k8-deploy:~/k8s-yaml/volume# vim nfs.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /usr/share/nginx/html/website
          name: my-nfs-volume
        - mountPath: /usr/share/nginx/html/website/js
          name: my-nfs-js
      volumes:
      - name: my-nfs-volume
        nfs:
          server: 192.168.2.10
          path: /data/nfs_data/website
      - name: my-nfs-js
        nfs:
          server: 192.168.2.10
          path: /data/nfs_data/website/js
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  ports:
  - name: http
    port: 81
    targetPort: 80
    nodePort: 30016
    protocol: TCP
  type: NodePort
  selector:
    app: nginx
Create the deployment and service
root@k8-deploy:~/k8s-yaml/volume# kubectl apply -f nfs.yml
deployment.apps/nginx-deployment created
service/nginx-svc created
Exec into the pod and check whether the NFS mounts succeeded
root@k8-deploy:~/k8s-yaml/volume# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-5669c4797f-9bwvp 1/1 Running 0 8s

root@k8-deploy:~/k8s-yaml/volume# kubectl exec nginx-deployment-5669c4797f-9bwvp -it -- bash
root@nginx-deployment-5669c4797f-9bwvp:/# df -h
Filesystem Size Used Avail Use% Mounted on
overlay 98G 11G 83G 12% /
tmpfs 64M 0 64M 0% /dev
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/vda2 98G 11G 83G 12% /etc/hosts
shm 64M 0 64M 0% /dev/shm
192.168.2.10:/data/nfs_data/website 98G 17G 77G 19% /usr/share/nginx/html/website
tmpfs 3.9G 12K 3.9G 1% /run/secrets/kubernetes.io/serviceaccount
192.168.2.10:/data/nfs_data/website/js 98G 17G 77G 19% /usr/share/nginx/html/website/js
tmpfs 3.9G 0 3.9G 0% /proc/acpi
tmpfs 3.9G 0 3.9G 0% /proc/scsi
tmpfs 3.9G 0 3.9G 0% /sys/firmware
Create test files in the shared NFS directories
root@k8-deploy:/data/nfs_data/website# echo 'nfs volume test' > test.html

root@k8-deploy:/data/nfs_data/website/js# echo 'nfs volume js' > test.js
Access the created test files through the service to verify
root@k8-deploy:/data/nfs_data/website/js# curl 192.168.2.17:30016/website/test.html
nfs volume test

root@k8-deploy:/data/nfs_data/website/js# curl 192.168.2.17:30016/website/js/test.js
nfs volume js

4. Common maintenance commands

Command   Description
create    Create a resource from a file or from stdin
expose    Expose a resource as a new Kubernetes Service
run       Run a specified image in the cluster
set       Set a specific feature on objects
get       Display one or more resources
explain   Show documentation for a resource
edit      Edit a resource on the server
delete    Delete resources by file name, stdin, resource name, or label selector
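A few hedged usage sketches for the commands in the table (resource names are taken from the examples earlier in this post and are only illustrative):

kubectl create -f nginx-deployment.yaml                       # create from a file
kubectl expose deployment nginx-deployment --port=80 --type=NodePort
kubectl run net-test --image=nginx:1.14.2                     # run an image as a pod
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1
kubectl get pods -o wide
kubectl explain deployment.spec.strategy
kubectl edit svc mysvc-nginx-80
kubectl delete -f nginx-deployment.yaml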
