Operating Environment
Network Topology
Environment details:
OS: CentOS Linux release 7.4.1708
Docker: 1.12.6 (client and server identical: API version 1.24, package docker-1.12.6-68.gitec8512b.el7.centos.x86_64, Go go1.8.3, Git commit ec8512b/1.12.6, built Mon Dec 11 16:08:42 2017, linux/amd64)
Kubernetes: v1.5.2
GlusterFS: 3.12.3
Procedure
The Kubernetes environment has already been deployed; for the deployment steps, refer to
"Installing and Deploying a Kubernetes Cluster on CentOS 7".
Deploying GlusterFS
There are five servers here: 10.10.200.226 and 10.10.200.227 serve as the GlusterFS storage servers, while 10.10.200.224, 10.10.200.229, and 10.10.200.230 consume GlusterFS and therefore also need GlusterFS installed.
1. Install GlusterFS on each of the five servers
[root@k8s-master ~]# yum -y install centos-release-gluster
[root@k8s-master ~]# yum -y install glusterfs glusterfs-fuse glusterfs-server
2. Start GlusterFS
[root@k8s-master ~]# systemctl enable glusterd
[root@k8s-master ~]# systemctl start glusterd
3. Create the GlusterFS cluster
Edit the /etc/hosts file:
[root@k8s-master ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.10.200.226 glusterfs-node1
10.10.200.227 glusterfs-node2
10.10.200.224 k8s-master
10.10.200.229 k8s-node1
10.10.200.230 k8s-node2
On one server, probe the other servers to add them to the cluster:
[root@glusterfs-node1 ~]# gluster peer probe k8s-master
[root@glusterfs-node1 ~]# gluster peer probe k8s-node1
[root@glusterfs-node1 ~]# gluster peer probe k8s-node2
[root@glusterfs-node1 ~]# gluster peer probe glusterfs-node2
Check the gluster peer status:
[root@k8s-master yaml]# gluster peer status
Number of Peers: 4

Hostname: glusterfs-node2
Uuid: 3c5c5994-0c05-4be0-9ec3-a431f193a1f0
State: Peer in Cluster (Connected)

Hostname: glusterfs-node1
Uuid: 31ac3dff-94a3-4166-991b-bf4727113539
State: Peer in Cluster (Connected)

Hostname: 10.10.200.229
Uuid: 7c2f03fe-e2cb-412e-a91f-2baf873610ef
State: Peer in Cluster (Connected)
Other names:
10.10.200.229

Hostname: 10.10.200.230
Uuid: 314a7676-d1eb-4c58-9617-8c510c93961d
State: Peer in Cluster (Connected)
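A quick way to confirm the pool formed correctly is to count how many peers report "Peer in Cluster (Connected)". A minimal sketch, using the captured status lines above as sample input (on a live node you would pipe `gluster peer status` in directly):

```shell
# Sample of the relevant lines from the `gluster peer status` output above;
# on a live node, replace the here-string with the real command output.
peer_status='State: Peer in Cluster (Connected)
State: Peer in Cluster (Connected)
State: Peer in Cluster (Connected)
State: Peer in Cluster (Connected)'

# Count peers reporting a healthy connected state.
connected=$(printf '%s\n' "$peer_status" | grep -c 'Peer in Cluster (Connected)')
echo "connected peers: $connected"

# With five servers in the pool, every node should see four connected peers.
[ "$connected" -eq 4 ] && echo "cluster OK" || echo "cluster degraded"
```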
4. Create a volume
Only 10.10.200.226 and 10.10.200.227 are used to create the volume here.
[root@glusterfs-node1 brick1]# gluster volume create mysql-volume replica 2 glusterfs-node2:/data/brick1/mysql-volume/ glusterfs-node1:/data/brick1/mysql-volume/
Start mysql-volume:
[root@glusterfs-node1 brick1]# gluster volume start mysql-volume
volume start: mysql-volume: success
Check the volume status:
[root@glusterfs-node1 brick1]# gluster volume status
Status of volume: mysql-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs-node2:/data/brick1/mysql-vo
lume                                        49152     0          Y       2976
Brick glusterfs-node1:/data/brick1/mysql-vo
lume                                        49152     0          Y       7835
Self-heal Daemon on localhost               N/A       N/A        Y       7856
Self-heal Daemon on k8s-master              N/A       N/A        Y       2025
Self-heal Daemon on glusterfs-node2         N/A       N/A        Y       2997

Task Status of Volume mysql-volume
------------------------------------------------------------------------------
There are no active volume tasks

This completes the configuration of the two GlusterFS storage servers.
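Before wiring the volume into Kubernetes, it can be worth mounting it from one of the client nodes as a smoke test. The sketch below only assembles the commands so it can run anywhere; the server name and mountpoint are assumptions, and the commented lines are what you would actually run on a client with glusterfs-fuse installed:

```shell
# Assumed values: any storage node and any scratch mountpoint will do.
server="glusterfs-node1"
volume="mysql-volume"
mountpoint="/mnt/gluster-test"

# Build the mount command a client would run (requires the glusterfs-fuse package).
mount_cmd="mount -t glusterfs ${server}:/${volume} ${mountpoint}"
echo "$mount_cmd"

# On a real client node:
#   mkdir -p /mnt/gluster-test
#   mount -t glusterfs glusterfs-node1:/mysql-volume /mnt/gluster-test
#   touch /mnt/gluster-test/smoke && ls /mnt/gluster-test
#   umount /mnt/gluster-test
```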
Configuring Kubernetes
The Kubernetes example files for GlusterFS (glusterfs-endpoints.json, glusterfs-pod.json, and glusterfs-service.json) can be found at
https://github.com/kubernetes/kubernetes/tree/master/examples/volumes/glusterfs.
Here we only need glusterfs-endpoints.json and glusterfs-service.json.
Modify glusterfs-endpoints.json as follows; the main change is the IP addresses:
[root@k8s-master yaml]# vi glusterfs-endpoints.json
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "subsets": [
    {
      "addresses": [{ "ip": "10.10.200.226" }],
      "ports": [{ "port": 1 }]
    },
    {
      "addresses": [{ "ip": "10.10.200.227" }],
      "ports": [{ "port": 1 }]
    }
  ]
}
glusterfs-service.json is as follows and needs no changes:
[root@k8s-master yaml]# vi glusterfs-service.json
{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "glusterfs-cluster"
  },
  "spec": {
    "ports": [
      { "port": 1 }
    ]
  }
}
Apply the two files:
[root@k8s-master yaml]# kubectl apply -f glusterfs-endpoints.json
endpoints "glusterfs-cluster" created
[root@k8s-master yaml]# kubectl apply -f glusterfs-service.json
service "glusterfs-cluster" created
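One thing worth sanity-checking is that the IPs in the Endpoints object exactly match the storage servers: a typo here surfaces only later, when a pod tries to mount the volume. A small sketch that extracts the IPs from the JSON (the file is recreated under /tmp so the check runs standalone; the content matches the file above):

```shell
# Recreate the endpoints file (same content as above) so the check is standalone.
cat > /tmp/glusterfs-endpoints.json <<'EOF'
{
  "kind": "Endpoints",
  "apiVersion": "v1",
  "metadata": { "name": "glusterfs-cluster" },
  "subsets": [
    { "addresses": [{ "ip": "10.10.200.226" }], "ports": [{ "port": 1 }] },
    { "addresses": [{ "ip": "10.10.200.227" }], "ports": [{ "port": 1 }] }
  ]
}
EOF

# Pull out every "ip" value; these must be the GlusterFS storage servers.
ips=$(grep -o '"ip": "[0-9.]*"' /tmp/glusterfs-endpoints.json | grep -o '[0-9][0-9.]*')
echo "$ips"
```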
[root@k8s-master yaml]# kubectl get ep
NAME                ENDPOINTS                         AGE
glusterfs-cluster   10.10.200.226:1,10.10.200.227:1   31s
kubernetes          10.10.200.224:6443                6d
[root@k8s-master yaml]# kubectl get service
NAME                CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
glusterfs-cluster   172.17.62.135   <none>        1/TCP     33s
kubernetes          10.254.0.1      <none>        443/TCP   6d
Create the PersistentVolume and PersistentVolumeClaim YAML files:
[root@k8s-master yaml]# vi glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gluster-dev-volume1
  labels:
    name: mysql1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "mysql-volume"
    readOnly: false
[root@k8s-master yaml]# vi glusterfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-mysql1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      name: "mysql1"
Apply the two files:
[root@k8s-master yaml]# kubectl apply -f glusterfs-pv.yaml
persistentvolume "gluster-dev-volume1" created
[root@k8s-master yaml]# kubectl apply -f glusterfs-pvc.yaml
persistentvolumeclaim "glusterfs-mysql1" created
[root@k8s-master yaml]# kubectl get pv
NAME                  CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                      REASON    AGE
gluster-dev-volume1   10Gi       RWX           Retain          Bound     default/glusterfs-mysql1             8s
[root@k8s-master yaml]# kubectl get pvc
NAME               STATUS    VOLUME                CAPACITY   ACCESSMODES   AGE
glusterfs-mysql1   Bound     gluster-dev-volume1   10Gi       RWX           6s
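For reference, the glusterfs-pod.json example from the same repository skips the PV/PVC layer entirely and mounts the GlusterFS volume directly in the pod spec. A sketch of that alternative (the pod and container names here are illustrative, not from the source):

```yaml
# Sketch: mounting the GlusterFS volume directly, without a PV/PVC.
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-test          # illustrative name
spec:
  containers:
    - name: app
      image: docker.io/nginx    # any image will do for a mount test
      volumeMounts:
        - name: glusterfsvol
          mountPath: /mnt/glusterfs
  volumes:
    - name: glusterfsvol
      glusterfs:
        endpoints: glusterfs-cluster   # the Endpoints object created above
        path: mysql-volume
        readOnly: false
```

The PV/PVC route used in this article is generally preferable, since it decouples the pod spec from the storage details.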
That completes configuring GlusterFS for Kubernetes. Next, we test it by running MySQL. Write the MySQL Deployment YAML as follows:
[root@k8s-master yaml]# vi mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  type: NodePort
  ports:
    - name: mysqlport
      port: 3306
      nodePort: 32006
  selector:
    name: mysql
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mysql
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mysql
    spec:
      containers:
        - name: mysqlcontainer
          image: docker.io/mysql
          imagePullPolicy: IfNotPresent
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: root123456
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: gluster-mysql-data
              mountPath: "/var/lib/mysql"
      volumes:
        - name: gluster-mysql-data
          persistentVolumeClaim:
            claimName: glusterfs-mysql1
Apply the file:
[root@k8s-master yaml]# kubectl apply -f mysql-deployment.yaml
service "mysql" created
deployment "mysql" configured
Check the status:
[root@k8s-master yaml]# kubectl get service
NAME                CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
glusterfs-cluster   172.17.62.135    <none>        1/TCP            13m
kubernetes          10.254.0.1       <none>        443/TCP          6d
mysql               172.17.231.221   <nodes>       3306:32006/TCP   1m
[root@k8s-master yaml]# kubectl get pods
NAME                     READY     STATUS    RESTARTS   AGE
mysql-3736381796-5j11t   1/1       Running   0          1m
nginx-controller-1hxkz   1/1       Running   3          3d
nginx-controller-3xcl8   1/1       Running   3          3d
[root@k8s-master yaml]# kubectl get pods -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP            NODE
mysql-3736381796-5j11t   1/1       Running   0          1m        172.17.99.3   10.10.200.230
nginx-controller-1hxkz   1/1       Running   3          3d        172.17.99.2   10.10.200.230
nginx-controller-3xcl8   1/1       Running   3          3d        172.17.17.2   10.10.200.229
Log in to MySQL:
[root@k8s-node2 ~]# mysql -h 10.10.200.230 -P 32006 -uroot -proot123456
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.7.20 MySQL Community Server (GPL)

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]>
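Logging in shows the server is up, but the point of the GlusterFS volume is that data survives a pod restart. A sketch of a persistence check: write a marker row, let the Deployment recreate the pod, and confirm the row is still there. The database and table names are illustrative assumptions; the commented lines are what you would run against the cluster:

```shell
# Write a small SQL script that creates a marker row we can look for later.
cat > /tmp/persist-test.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS persist_test;
CREATE TABLE IF NOT EXISTS persist_test.marker (id INT PRIMARY KEY, note VARCHAR(64));
INSERT INTO persist_test.marker VALUES (1, 'written before pod restart');
EOF
wc -l < /tmp/persist-test.sql

# Against the cluster:
#   mysql -h 10.10.200.230 -P 32006 -uroot -proot123456 < /tmp/persist-test.sql
#   kubectl delete pod -l name=mysql     # the Deployment schedules a new pod
#   mysql -h 10.10.200.230 -P 32006 -uroot -proot123456 \
#       -e 'SELECT note FROM persist_test.marker'
# If the row comes back after the new pod starts, /var/lib/mysql really is
# backed by the replicated GlusterFS volume.
```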