GlusterFS Distributed File System Deployment and Basic Usage (CentOS 7.6)

Date: 2021-10-20 15:33:32

Author: Yin Zhengjie

Copyright notice: this is an original work; reproduction without permission is prohibited and will be pursued legally.

  Gluster File System is an open-source piece of software, originally developed mainly by the company ZRESEARCH with a dozen or so developers, and development has been very active recently. The documentation is fairly complete, so it is not hard to get started. Gluster is a distributed scale-out file system that lets you quickly provision additional storage as your storage consumption requirements grow, and it includes automatic failover as a primary feature. Official quick-start documentation: https://docs.gluster.org/en/latest/Install-Guide/Overview/

I. Installing Gluster

1>. What is Gluster?

  Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.

2>. Benefits of Gluster

  • Scales to several petabytes
  • Handles thousands of clients
  • POSIX compatible
  • Uses commodity hardware
  • Can use any on-disk filesystem that supports extended attributes
  • Accessible using industry-standard protocols such as NFS and SMB
  • Provides replication, quotas, geo-replication, snapshots and bitrot detection
  • Allows optimization for different workloads
  • Open source

3>. Check the current release; as shown in the figure below, the latest version at the time of writing was Gluster 5

(Figure: screenshot of the Gluster release list showing Gluster 5 as the latest release)

4>. Configure a yum repository for glusterfs and install the service


[root@node101 ~]# cat /etc/yum.repos.d/glusterfs.repo
[myglusterfs]
name=glusterfs
baseurl=https://buildlogs.centos.org/centos/7/storage/x86_64/gluster-5/
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-mysql
[root@node101 ~]#  
[root@node101 yum.repos.d]# yum -y install glusterfs-server
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.jdcloud.com
* extras: mirrors.163.com
* updates: mirrors.shu.edu.cn
myglusterfs | 2.9 kB 00:00:00
myglusterfs/primary_db | 76 kB 00:00:03
Resolving Dependencies
--> Running transaction check
---> Package glusterfs-server.x86_64 0:5.3-1.el7 will be installed
--> Processing Dependency: glusterfs-libs = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: glusterfs-fuse = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: glusterfs-client-xlators = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: glusterfs-cli = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: glusterfs-api = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: glusterfs = 5.3-1.el7 for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: rpcbind for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_PRIVATE_3.7.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_PRIVATE_3.4.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_3.7.4)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_3.7.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_3.6.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_3.5.1)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_3.4.2)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0(GFAPI_3.4.0)(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: liburcu-cds.so.6()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: liburcu-bp.so.6()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libglusterfs.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfxdr.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfrpc.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfchangelog.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Processing Dependency: libgfapi.so.0()(64bit) for package: glusterfs-server-5.3-1.el7.x86_64
--> Running transaction check
---> Package glusterfs.x86_64 0:5.3-1.el7 will be installed
---> Package glusterfs-api.x86_64 0:5.3-1.el7 will be installed
---> Package glusterfs-cli.x86_64 0:5.3-1.el7 will be installed
---> Package glusterfs-client-xlators.x86_64 0:5.3-1.el7 will be installed
---> Package glusterfs-fuse.x86_64 0:5.3-1.el7 will be installed
--> Processing Dependency: psmisc for package: glusterfs-fuse-5.3-1.el7.x86_64
--> Processing Dependency: attr for package: glusterfs-fuse-5.3-1.el7.x86_64
---> Package glusterfs-libs.x86_64 0:5.3-1.el7 will be installed
---> Package rpcbind.x86_64 0:0.2.0-47.el7 will be installed
--> Processing Dependency: libtirpc >= 0.2.4-0.7 for package: rpcbind-0.2.0-47.el7.x86_64
--> Processing Dependency: libtirpc.so.1()(64bit) for package: rpcbind-0.2.0-47.el7.x86_64
---> Package userspace-rcu.x86_64 0:0.10.0-3.el7 will be installed
--> Running transaction check
---> Package attr.x86_64 0:2.4.46-13.el7 will be installed
---> Package libtirpc.x86_64 0:0.2.4-0.15.el7 will be installed
---> Package psmisc.x86_64 0:22.20-15.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================================================================================================
Package Arch Version Repository Size
===================================================================================================================================================================
Installing:
glusterfs-server x86_64 5.3-1.el7 myglusterfs 1.4 M
Installing for dependencies:
attr x86_64 2.4.46-13.el7 base 66 k
glusterfs x86_64 5.3-1.el7 myglusterfs 668 k
glusterfs-api x86_64 5.3-1.el7 myglusterfs 106 k
glusterfs-cli x86_64 5.3-1.el7 myglusterfs 202 k
glusterfs-client-xlators x86_64 5.3-1.el7 myglusterfs 989 k
glusterfs-fuse x86_64 5.3-1.el7 myglusterfs 147 k
glusterfs-libs x86_64 5.3-1.el7 myglusterfs 415 k
libtirpc x86_64 0.2.4-0.15.el7 base 89 k
psmisc x86_64 22.20-15.el7 base 141 k
rpcbind x86_64 0.2.0-47.el7 base 60 k
userspace-rcu x86_64 0.10.0-3.el7 myglusterfs 92 k

Transaction Summary
===================================================================================================================================================================
Install 1 Package (+11 Dependent packages)

Total download size: 4.3 M
Installed size: 16 M
Downloading packages:
(1/12): attr-2.4.46-13.el7.x86_64.rpm | 66 kB 00:00:00
(2/12): glusterfs-api-5.3-1.el7.x86_64.rpm | 106 kB 00:00:05
(3/12): glusterfs-5.3-1.el7.x86_64.rpm | 668 kB 00:00:06
(4/12): glusterfs-cli-5.3-1.el7.x86_64.rpm | 202 kB 00:00:02
(5/12): glusterfs-fuse-5.3-1.el7.x86_64.rpm | 147 kB 00:00:01
(6/12): glusterfs-client-xlators-5.3-1.el7.x86_64.rpm | 989 kB 00:00:05
(7/12): libtirpc-0.2.4-0.15.el7.x86_64.rpm | 89 kB 00:00:00
(8/12): psmisc-22.20-15.el7.x86_64.rpm | 141 kB 00:00:00
(9/12): rpcbind-0.2.0-47.el7.x86_64.rpm | 60 kB 00:00:00
(10/12): glusterfs-libs-5.3-1.el7.x86_64.rpm | 415 kB 00:00:03
(11/12): userspace-rcu-0.10.0-3.el7.x86_64.rpm | 92 kB 00:00:01
(12/12): glusterfs-server-5.3-1.el7.x86_64.rpm | 1.4 MB 00:00:06
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 238 kB/s | 4.3 MB 00:00:18
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : glusterfs-libs-5.3-1.el7.x86_64 1/12
Installing : glusterfs-5.3-1.el7.x86_64 2/12
Installing : glusterfs-client-xlators-5.3-1.el7.x86_64 3/12
Installing : glusterfs-api-5.3-1.el7.x86_64 4/12
Installing : glusterfs-cli-5.3-1.el7.x86_64 5/12
Installing : libtirpc-0.2.4-0.15.el7.x86_64 6/12
Installing : rpcbind-0.2.0-47.el7.x86_64 7/12
Installing : psmisc-22.20-15.el7.x86_64 8/12
Installing : attr-2.4.46-13.el7.x86_64 9/12
Installing : glusterfs-fuse-5.3-1.el7.x86_64 10/12
Installing : userspace-rcu-0.10.0-3.el7.x86_64 11/12
Installing : glusterfs-server-5.3-1.el7.x86_64 12/12
Verifying : glusterfs-libs-5.3-1.el7.x86_64 1/12
Verifying : glusterfs-cli-5.3-1.el7.x86_64 2/12
Verifying : glusterfs-fuse-5.3-1.el7.x86_64 3/12
Verifying : rpcbind-0.2.0-47.el7.x86_64 4/12
Verifying : glusterfs-api-5.3-1.el7.x86_64 5/12
Verifying : glusterfs-5.3-1.el7.x86_64 6/12
Verifying : userspace-rcu-0.10.0-3.el7.x86_64 7/12
Verifying : glusterfs-server-5.3-1.el7.x86_64 8/12
Verifying : attr-2.4.46-13.el7.x86_64 9/12
Verifying : psmisc-22.20-15.el7.x86_64 10/12
Verifying : glusterfs-client-xlators-5.3-1.el7.x86_64 11/12
Verifying : libtirpc-0.2.4-0.15.el7.x86_64 12/12

Installed:
glusterfs-server.x86_64 0:5.3-1.el7

Dependency Installed:
attr.x86_64 0:2.4.46-13.el7 glusterfs.x86_64 0:5.3-1.el7 glusterfs-api.x86_64 0:5.3-1.el7 glusterfs-cli.x86_64 0:5.3-1.el7
glusterfs-client-xlators.x86_64 0:5.3-1.el7 glusterfs-fuse.x86_64 0:5.3-1.el7 glusterfs-libs.x86_64 0:5.3-1.el7 libtirpc.x86_64 0:0.2.4-0.15.el7
psmisc.x86_64 0:22.20-15.el7 rpcbind.x86_64 0:0.2.0-47.el7 userspace-rcu.x86_64 0:0.10.0-3.el7

Complete!
[root@node101 yum.repos.d]#

[root@node101 yum.repos.d]# yum -y install glusterfs-server

  The official quick-start guide uses two machines, so two virtual machines are enough here as well. glusterfs-server must be installed on both of them.
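  Before moving on it is worth making sure the two nodes can resolve each other's hostnames and reach the GlusterFS ports (24007-24008/tcp for glusterd management, plus one port per brick starting at 49152/tcp). The snippet below is only a minimal sketch of that preparation; the IP addresses are placeholders and the firewalld commands assume firewalld is the firewall in use, neither of which comes from the original environment.

# /etc/hosts entries so the two nodes can resolve each other (placeholder IPs)
echo "172.30.1.101 node101.yinzhengjie.org.cn" >> /etc/hosts
echo "172.30.1.102 node102.yinzhengjie.org.cn" >> /etc/hosts

# If firewalld is running, open the GlusterFS management and brick ports
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload

# For a throw-away lab it is also common to simply disable the firewall instead
systemctl stop firewalld && systemctl disable firewalld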

5>. Start the glusterd service

[root@node101 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
Active: inactive (dead)
[root@node101 ~]#
[root@node101 ~]# systemctl start glusterd
[root@node101 ~]#
[root@node101 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
Active: active (running) since Mon -- :: CST; 2s ago
Process: ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: (glusterd)
CGroup: /system.slice/glusterd.service
└─ /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Feb :: node101.yinzhengjie.org.cn systemd[]: Starting GlusterFS, a clustered file-system server...
Feb :: node101.yinzhengjie.org.cn systemd[]: Started GlusterFS, a clustered file-system server.
[root@node101 ~]#
[root@node101 ~]# systemctl enable glusterd
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
[root@node101 ~]#

[root@node101 ~]# systemctl start glusterd

[root@node102 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
Active: inactive (dead)
[root@node102 ~]#
[root@node102 ~]#
[root@node102 ~]# systemctl start glusterd
[root@node102 ~]#
[root@node102 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
Loaded: loaded (/usr/lib/systemd/system/glusterd.service; disabled; vendor preset: disabled)
Active: active (running) since Mon 2019-02-18 16:22:09 CST; 2s ago
Process: 14000 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
Main PID: 14001 (glusterd)
CGroup: /system.slice/glusterd.service
└─14001 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Feb 18 16:22:09 node102.yinzhengjie.org.cn systemd[]: Starting GlusterFS, a clustered file-system server...
Feb 18 16:22:09 node102.yinzhengjie.org.cn systemd[]: Started GlusterFS, a clustered file-system server.
[root@node102 ~]#
[root@node102 ~]#
[root@node102 ~]# systemctl enable glusterd
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
[root@node102 ~]#
[root@node102 ~]#

[root@node102 ~]# systemctl start glusterd
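  If you want to double-check that glusterd is actually reachable before building the trusted pool, a quick verification such as the following can help (this step is optional and was not part of the original procedure; 24007/tcp is the default glusterd management port):

# Confirm glusterd is listening locally (24007/tcp by default)
ss -tnlp | grep glusterd

# Pure-bash TCP probe of the other node's management port
timeout 1 bash -c '</dev/tcp/node102.yinzhengjie.org.cn/24007' && echo "node102 glusterd reachable"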

II. Configuring and Basic Usage of GlusterFS (https://docs.gluster.org/en/latest/Install-Guide/Configure/)

1>. Configure the trusted storage pool

[root@node101 ~]# gluster peer probe node102.yinzhengjie.org.cn        # On node101.yinzhengjie.org.cn, add node102.yinzhengjie.org.cn to the trusted pool.
peer probe: success.
[root@node101 ~]#
[root@node101 ~]# gluster peer status                        # Check the trusted pool status on node101.yinzhengjie.org.cn
Number of Peers: 1

Hostname: node102.yinzhengjie.org.cn
Uuid: ec348557-e9c3-46c8-8ce9-bac6c1b4c298
State: Peer in Cluster (Connected)
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# ssh node102.yinzhengjie.org.cn                  # SSH to the other node, node102.yinzhengjie.org.cn
Last login: Mon Feb :: from 172.30.1.2
[root@node102 ~]#
[root@node102 ~]#
[root@node102 ~]# gluster peer status                        # Check the trusted pool that node102.yinzhengjie.org.cn already knows about
Number of Peers: 1

Hostname: node101.yinzhengjie.org.cn
Uuid: 9ed5663a-72ec-44c2-92f6-118c6f6cabed
State: Peer in Cluster (Connected)
[root@node102 ~]#
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]# 
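  Two related peer-management commands are worth knowing at this point; the sketch below was not run in this article, but both commands are part of the gluster CLI:

# List every member of the trusted storage pool, including the local node
gluster pool list

# Remove a node from the pool again (only possible once none of its bricks belong to a volume)
gluster peer detach node102.yinzhengjie.org.cn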

2>. Create a distributed volume

The following types of volumes can be created in your storage environment:

Distributed - Distributed volumes distribute files across the bricks in the volume. Use distributed volumes where you need to scale storage and redundancy is either not required or is provided by other hardware/software layers.

Replicated - Replicated volumes replicate files across the bricks in the volume. Use replicated volumes in environments where high availability and high reliability are critical.

Distributed Replicated - Distributed replicated volumes distribute files across replicated bricks in the volume. Use them in environments where you need to scale storage and high reliability is critical. Distributed replicated volumes also offer improved read performance in most environments.

Dispersed - Dispersed volumes are based on erasure codes and provide space-efficient protection against disk or server failures. They store an encoded fragment of the original file on each brick in such a way that only a subset of the fragments is needed to recover the original file. The administrator configures, when the volume is created, how many bricks can be lost without losing access to the data.

Distributed Dispersed - Distributed dispersed volumes distribute files across dispersed subvolumes. They have the same advantages as distributed replicated volumes, but use disperse to store the data on the bricks.

Striped [Deprecated] - Striped volumes stripe data across the bricks in the volume. For best results, use striped volumes only in high-concurrency environments that access very large files.

Distributed Striped [Deprecated] - Distributed striped volumes stripe data across two or more nodes in the cluster. Use them where you need to scale storage and where access to very large files in high-concurrency environments is critical.

Distributed Striped Replicated [Deprecated] - Distributed striped replicated volumes distribute striped data across replicated bricks in the cluster. For best results, use them in highly concurrent environments with parallel access to very large files where performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

Striped Replicated [Deprecated] - Striped replicated volumes stripe data across replicated bricks in the cluster. For best results, use them in highly concurrent environments with parallel access to very large files where performance is critical. In this release, configuration of this volume type is supported only for Map Reduce workloads.

An overview of the GlusterFS volume types
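  For reference, all of the volume types above are created with the same `gluster volume create` command, only with different count keywords. The following is a hedged sketch of what a distributed replicated volume and a dispersed volume might look like; the server names and brick paths are purely illustrative and are not the ones used in this lab:

# Distributed replicated: 4 bricks with replica 2 form 2 replica pairs that files are distributed across
gluster volume create dr-vol replica 2 transport tcp \
    server1:/data/brick1 server2:/data/brick1 \
    server1:/data/brick2 server2:/data/brick2

# Dispersed (erasure coded): 6 bricks, any 2 of which may fail without losing data
gluster volume create ec-vol disperse 6 redundancy 2 \
    server{1..6}:/data/brick1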

[root@node101 ~]#
[root@node101 ~]# mkdir -p /home/yinzhengjie/glusterfs/file1            # Create a directory on each of the two servers to hold a glusterfs brick
[root@node101 ~]#
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb :: from node101.yinzhengjie.org.cn
[root@node102 ~]#
[root@node102 ~]#
[root@node102 ~]# mkdir -p /home/yinzhengjie/glusterfs/file1
[root@node102 ~]#
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]#
[root@node101 ~]# gluster volume create test-volume node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1/ node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1/        # Create the distributed volume
volume create: test-volume: success: please start the volume to access data
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# gluster volume info        # Show information about the volumes that have been created

Volume Name: test-volume
Type: Distribute
Volume ID: d73f1306--4fea-8fe2-a37771b471d5
Status: Created
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node101 ~]#
[root@node101 ~]#  

3>. Create a replicated volume (similar to RAID 1)

[root@node101 ~]# mkdir -p /home/yinzhengjie/glusterfs/file2
[root@node101 ~]#
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb :: from node101.yinzhengjie.org.cn
[root@node102 ~]#
[root@node102 ~]# mkdir -p /home/yinzhengjie/glusterfs/file2
[root@node102 ~]#
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]#
[root@node101 ~]# gluster volume create replicated-volume replica 2 transport tcp node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2 node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?          # Note: upstream recommends three replicas (or an arbiter) because two replicas carry a split-brain risk. Since this lab only has two VMs, I continued anyway.
(y/n) y
volume create: replicated-volume: success: please start the volume to access data
[root@node101 ~]#
[root@node101 ~]# gluster volume info      # With the steps above we have created both a distributed volume and a replicated volume; this command shows their information. A single volume can be queried by name, and if no name is given it lists every volume that has been created.

Volume Name: replicated-volume
Type: Replicate
Volume ID: abbcc657--40bc-b64f-a48af4c46e70
Status: Created
Snapshot Count:
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: test-volume
Type: Distribute
Volume ID: d73f1306--4fea-8fe2-a37771b471d5
Status: Created
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node101 ~]# 
[root@node101 ~]# gluster volume info

Volume Name: replicated-volume
Type: Replicate
Volume ID: abbcc657--40bc-b64f-a48af4c46e70
Status: Created
Snapshot Count:
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: test-volume
Type: Distribute
Volume ID: d73f1306--4fea-8fe2-a37771b471d5
Status: Created
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# gluster volume info replicated-volume

Volume Name: replicated-volume
Type: Replicate
Volume ID: abbcc657--40bc-b64f-a48af4c46e70
Status: Created
Snapshot Count:
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# gluster volume info test-volume

Volume Name: test-volume
Type: Distribute
Volume ID: d73f1306--4fea-8fe2-a37771b471d5
Status: Created
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node101 ~]#

[root@node101 ~]# gluster volume info replicated-volume          # View the information of a single volume

4>. Create a striped volume (similar to RAID 0)

[root@node101 ~]# mkdir -p /home/yinzhengjie/glusterfs/file3
[root@node101 ~]#
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb :: from node101.yinzhengjie.org.cn
[root@node102 ~]#
[root@node102 ~]# mkdir -p /home/yinzhengjie/glusterfs/file3
[root@node102 ~]#
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]#
[root@node101 ~]# gluster volume create raid0-volume stripe 2 transport tcp node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3 node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3
volume create: raid0-volume: success: please start the volume to access data
[root@node101 ~]#
[root@node101 ~]# gluster volume info

Volume Name: raid0-volume
Type: Stripe
Volume ID: c40ad86c-adc4-42a7-9dd4-d9086755403b
Status: Created
Snapshot Count:
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

Volume Name: replicated-volume
Type: Replicate
Volume ID: abbcc657--40bc-b64f-a48af4c46e70
Status: Created
Snapshot Count:
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: test-volume
Type: Distribute
Volume ID: d73f1306--4fea-8fe2-a37771b471d5
Status: Created
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# gluster volume info raid0-volume

Volume Name: raid0-volume
Type: Stripe
Volume ID: c40ad86c-adc4-42a7-9dd4-d9086755403b
Status: Created
Snapshot Count:
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node101 ~]#

[root@node101 ~]# gluster volume info raid0-volume

  The three volume types above are the ones created most often, and they can be combined with each other when creating volumes. For production environments we recommend distributed replicated volumes.

5>. Start the volumes

  Keep in mind that a volume cannot be used right after it is created; it must be started first. The steps are as follows:

[root@node101 ~]# gluster volume status                  # Check the volume status: none of the three volumes created earlier has been started
Volume raid0-volume is not started
Volume replicated-volume is not started
Volume test-volume is not started
[root@node101 ~]#
[root@node101 ~]# gluster volume start raid0-volume           # Since they are not started, start each of the three volumes
volume start: raid0-volume: success
[root@node101 ~]#
[root@node101 ~]# gluster volume start replicated-volume
volume start: replicated-volume: success
[root@node101 ~]#
[root@node101 ~]# gluster volume start test-volume
volume start: test-volume: success
[root@node101 ~]#
[root@node101 ~]# gluster volume status                # Check the status again: the volumes have started successfully
Status of volume: raid0-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node101.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file3 Y
Brick node102.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file3 Y

Task Status of Volume raid0-volume
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: replicated-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node101.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file2 Y
Brick node102.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file2 Y
Self-heal Daemon on localhost N/A N/A Y
Self-heal Daemon on node102.yinzhengjie.org
.cn N/A N/A Y

Task Status of Volume replicated-volume
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: test-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node101.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file1 Y
Brick node102.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file1 Y

Task Status of Volume test-volume
------------------------------------------------------------------------------
There are no active volume tasks

[root@node101 ~]#
[root@node101 ~]# gluster volume info

Volume Name: raid0-volume
Type: Stripe
Volume ID: c40ad86c-adc4-42a7-9dd4-d9086755403b
Status: Started
Snapshot Count:
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on

Volume Name: replicated-volume
Type: Replicate
Volume ID: abbcc657--40bc-b64f-a48af4c46e70
Status: Started
Snapshot Count:
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file2
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

Volume Name: test-volume
Type: Distribute
Volume ID: d73f1306--4fea-8fe2-a37771b471d5
Status: Started
Snapshot Count:
Number of Bricks:
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# gluster volume status replicated-volume
Status of volume: replicated-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node101.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file2 Y
Brick node102.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file2 Y
Self-heal Daemon on localhost N/A N/A Y
Self-heal Daemon on node102.yinzhengjie.org
.cn N/A N/A Y

Task Status of Volume replicated-volume
------------------------------------------------------------------------------
There are no active volume tasks

[root@node101 ~]#

[root@node101 ~]# gluster volume info        # Compare this output before and after starting the volumes
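  The reverse operations also exist: a volume has to be stopped before it can be deleted, and deleting a volume only removes its definition, not the files already sitting in the brick directories. A small sketch (not executed here, since the volumes are still needed below):

# Stop a volume; clients can no longer access it (the CLI asks for confirmation)
gluster volume stop test-volume

# Delete the volume definition; leftover files in the brick directories must be cleaned up manually
gluster volume delete test-volume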

6>. Mount the volumes we just started

[root@node101 ~]# mkdir /mnt/gluster1  /mnt/gluster2  /mnt/gluster3
[root@node101 ~]#
[root@node101 ~]# mount.glusterfs node101.yinzhengjie.org.cn:/test-volume /mnt/gluster1
[root@node101 ~]#
[root@node101 ~]# mount.glusterfs node101.yinzhengjie.org.cn:/replicated-volume /mnt/gluster2
[root@node101 ~]#
[root@node101 ~]# mount.glusterfs node101.yinzhengjie.org.cn:/raid0-volume /mnt/gluster3
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/VolGroup-lv_root 50G .8G 43G % /
devtmpfs .9G .9G % /dev
tmpfs .9G .9G % /dev/shm
tmpfs .9G 8.9M .9G % /run
tmpfs .9G .9G % /sys/fs/cgroup
/dev/sda1 477M 114M 335M % /boot
/dev/mapper/VolGroup-lv_home 12G 41M 11G % /home
Home 234G 182G 52G % /media/psf/Home
迅雷影音 79M 60M 20M % /media/psf/迅雷影音
tmpfs 379M 379M % /run/user/
node101.yinzhengjie.org.cn:/test-volume 23G 311M 22G % /mnt/gluster1          # These three entries are the volumes we just mounted; this one is the distributed volume
node101.yinzhengjie.org.cn:/replicated-volume 12G 156M 11G % /mnt/gluster2          # the replicated volume
node101.yinzhengjie.org.cn:/raid0-volume 23G 311M 22G % /mnt/gluster3          # the striped volume
[root@node101 ~]#
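  These mounts are done by hand and will not survive a reboot. For a persistent mount the usual approach is an /etc/fstab entry of type glusterfs with the _netdev option so it is mounted only after the network is up; a sketch for the distributed volume above:

# /etc/fstab entry (one line)
node101.yinzhengjie.org.cn:/test-volume  /mnt/gluster1  glusterfs  defaults,_netdev  0 0

# Verify the entry without rebooting
umount /mnt/gluster1
mount /mnt/gluster1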

7>. Write test data to the distributed volume

[root@node101 ~]# yum -y install tree
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
* base: mirror.jdcloud.com
* extras: mirrors.163.com
* updates: mirrors.shu.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package tree.x86_64 :1.6.-.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================================================================================================
Package Arch Version Repository Size
===================================================================================================================================================================
Installing:
tree x86_64 1.6.-.el7 base k

Transaction Summary
===================================================================================================================================================================
Install Package

Total download size: k
Installed size: k
Downloading packages:
tree-1.6.-.el7.x86_64.rpm | kB ::
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : tree-1.6.-.el7.x86_64 /
Verifying : tree-1.6.-.el7.x86_64 /

Installed:
tree.x86_64 :1.6.-.el7

Complete!
[root@node101 ~]#

[root@node101 ~]# yum -y install tree            # Install tree to make it easier to view directory structures

[root@node101 ~]#
[root@node101 ~]# echo "https://www.cnblogs.com/yinzhengjie/" > /mnt/gluster1/blog.txt        # Write test data to the distributed volume
[root@node101 ~]#
[root@node101 ~]# tree /home/yinzhengjie/glusterfs/*                            # Check this node. Strange, where did the data go? It is not stored locally. Don't panic; let's look on node102.yinzhengjie.org.cn.
/home/yinzhengjie/glusterfs/file1
/home/yinzhengjie/glusterfs/file2
/home/yinzhengjie/glusterfs/file3

0 directories, 0 files
[root@node101 ~]#
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb 18 17:03:32 2019 from node101.yinzhengjie.org.cn
[root@node102 ~]#
[root@node102 ~]# tree /home/yinzhengjie/glusterfs/*                            # See? The test data written on node101.yinzhengjie.org.cn is actually stored on node102.yinzhengjie.org.cn!
/home/yinzhengjie/glusterfs/file1
└── blog.txt
/home/yinzhengjie/glusterfs/file2
/home/yinzhengjie/glusterfs/file3

0 directories, 1 file
[root@node102 ~]#
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file1/blog.txt                     # Look at the data we wrote: it is intact and directly readable, and even the file name is unchanged.
https://www.cnblogs.com/yinzhengjie/
[root@node102 ~]#
[root@node102 ~]#
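  What you are seeing is Gluster's elastic hashing (DHT): each file name is hashed and the whole file is placed on exactly one brick, so a single test file can easily give the impression that "everything went to the other node". Writing a handful of files makes the distribution visible; the sketch below uses made-up file names and can be run on node101:

# Write several files through the mounted distributed volume
for i in $(seq 1 10); do
    echo "test ${i}" > /mnt/gluster1/dht-test-${i}.txt
done

# Compare which brick each file landed on
tree /home/yinzhengjie/glusterfs/file1
ssh node102.yinzhengjie.org.cn tree /home/yinzhengjie/glusterfs/file1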

8>. Write test data to the replicated volume

[root@node101 ~]#
[root@node101 ~]# echo "尹正杰到此一游!" > /mnt/gluster2/msg.log              # Write test data to the replicated volume
[root@node101 ~]#
[root@node101 ~]# tree /home/yinzhengjie/glusterfs/                     # This time the written data is clearly stored locally
/home/yinzhengjie/glusterfs/
├── file1
├── file2
│   └── msg.log
└── file3

3 directories, 1 file
[root@node101 ~]#
[root@node101 ~]# cat /home/yinzhengjie/glusterfs/file2/msg.log               # The file content is also intact
尹正杰到此一游!
[root@node101 ~]#
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb :: from node101.yinzhengjie.org.cn
[root@node102 ~]#
[root@node102 ~]# tree /home/yinzhengjie/glusterfs/                      # Note that the file on the replicated volume exists not only on node101.yinzhengjie.org.cn but also on node102.yinzhengjie.org.cn!
/home/yinzhengjie/glusterfs/
├── file1
│ └── blog.txt
├── file2
│ └── msg.log
└── file3

3 directories, 2 files
[root@node102 ~]#
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file2/msg.log               # The complete data can also be read on node102.yinzhengjie.org.cn!
尹正杰到此一游!
[root@node102 ~]#
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]#
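  Because every file on the replicated volume has a copy on both bricks, the self-heal daemon seen earlier in `gluster volume status` re-synchronises the copies after a brick outage. A quick way to check whether anything still needs healing is sketched below; on a healthy volume each brick is listed with zero entries:

# Files that still need to be synchronised between the replicas
gluster volume heal replicated-volume info

# Per-brick counters of pending heals
gluster volume heal replicated-volume statistics heal-count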

9>. Write test data to the striped volume

[root@node101 ~]#
[root@node101 ~]# echo "Jason Yin 2019" > /mnt/gluster3/access.log           # Write test data to the striped volume
[root@node101 ~]#
[root@node101 ~]# tree /home/yinzhengjie/glusterfs/                       # The file name clearly exists on node101.yinzhengjie.org.cn
/home/yinzhengjie/glusterfs/
├── file1
├── file2
│   └── msg.log
└── file3
    └── access.log

3 directories, 2 files
[root@node101 ~]#
[root@node101 ~]# cat /home/yinzhengjie/glusterfs/file3/access.log               # The test data we wrote was saved in the stripe brick on node101.yinzhengjie.org.cn
Jason Yin 2019
[root@node101 ~]#
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb :: from node101.yinzhengjie.org.cn
[root@node102 ~]#
[root@node102 ~]# tree /home/yinzhengjie/glusterfs/                      # Careful readers will notice that the same file name also exists on node102.yinzhengjie.org.cn.
/home/yinzhengjie/glusterfs/
├── file1
│   └── blog.txt
├── file2
│   └── msg.log
└── file3
    └── access.log

3 directories, 3 files
[root@node102 ~]#
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file3/access.log             # On node102.yinzhengjie.org.cn the file with the same name is empty! Keep this in mind; it is exactly why we say striped volumes behave much like RAID 0.
[root@node102 ~]#
[root@node102 ~]#
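  This is expected for a striped volume: the file is split into fixed-size blocks (128 KB by default) and the blocks are spread across the bricks, so a 15-byte file only has real data on one brick while the other brick holds an empty placeholder with the same name. Comparing the sizes on each brick makes this visible; also remember that striped volumes are marked deprecated, so they are shown here for completeness only:

# Size as seen by clients vs. on each brick
stat -c '%n %s bytes' /mnt/gluster3/access.log
stat -c '%n %s bytes' /home/yinzhengjie/glusterfs/file3/access.log
ssh node102.yinzhengjie.org.cn \
    stat -c '%n %s bytes' /home/yinzhengjie/glusterfs/file3/access.log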

III. Simulating Production Use of a Distributed Replicated Volume

1>. Create storage directories on each node

[root@node101 ~]# mkdir /home/yinzhengjie/glusterfs/file6 /home/yinzhengjie/glusterfs/file7
[root@node101 ~]#
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb :: from node101.yinzhengjie.org.cn
[root@node102 ~]#
[root@node102 ~]# mkdir /home/yinzhengjie/glusterfs/file6 /home/yinzhengjie/glusterfs/file7
[root@node102 ~]#
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]#

2>. Create and start the distributed replicated volume

[root@node101 ~]# gluster volume create my-distributed-replication-volume replica 2 transport tcp node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file6 node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file6 node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file7 node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file7 force
volume create: my-distributed-replication-volume: success: please start the volume to access data
[root@node101 ~]#
[root@node101 ~]# gluster volume start my-distributed-replication-volume
volume start: my-distributed-replication-volume: success
[root@node101 ~]#
[root@node101 ~]# gluster volume status my-distributed-replication-volume
Status of volume: my-distributed-replication-volume
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick node101.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file6 Y
Brick node102.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file6 Y
Brick node101.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file7 Y
Brick node102.yinzhengjie.org.cn:/home/yinz
hengjie/glusterfs/file7 Y
Self-heal Daemon on localhost N/A N/A Y
Self-heal Daemon on node102.yinzhengjie.org
.cn N/A N/A Y

Task Status of Volume my-distributed-replication-volume
------------------------------------------------------------------------------
There are no active volume tasks

[root@node101 ~]#
[root@node101 ~]# gluster volume info my-distributed-replication-volume

Volume Name: my-distributed-replication-volume
Type: Distributed-Replicate
Volume ID: 1c142bb6-0bdc-45ba-8de0-c6faadc871a1
Status: Started
Snapshot Count:
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file6
Brick2: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file6
Brick3: node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file7
Brick4: node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file7
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
[root@node101 ~]#
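  The main reason this layout is recommended for production is that it can be grown online: bricks are added in multiples of the replica count and existing data is then spread onto them with a rebalance. The following is only a sketch of such an expansion; the file8 brick directories are hypothetical and are not created anywhere in this article:

# Add one more replica pair (bricks must be added in multiples of the replica count, here 2)
gluster volume add-brick my-distributed-replication-volume replica 2 \
    node101.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file8 \
    node102.yinzhengjie.org.cn:/home/yinzhengjie/glusterfs/file8 force

# Spread existing files onto the new bricks and watch the progress
gluster volume rebalance my-distributed-replication-volume start
gluster volume rebalance my-distributed-replication-volume status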

3>. Mount the distributed replicated volume

[root@node101 ~]# mkdir /mnt/gluster10
[root@node101 ~]#
[root@node101 ~]# mount.glusterfs node101.yinzhengjie.org.cn:/my-distributed-replication-volume /mnt/gluster10
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# df -h | grep gluster
node101.yinzhengjie.org.cn:/test-volume 23G 312M 22G % /mnt/gluster1
node101.yinzhengjie.org.cn:/replicated-volume 12G 156M 11G % /mnt/gluster2
node101.yinzhengjie.org.cn:/raid0-volume 23G 312M 22G % /mnt/gluster3
node101.yinzhengjie.org.cn:/my-distributed-replication-volume 12G 156M 11G % /mnt/gluster10          # This is the distributed replicated volume we just created
[root@node101 ~]#
[root@node101 ~]#

4>. Write test data to the distributed replicated volume

[root@node101 ~]# echo "大王叫我来巡山" > /mnt/gluster10/test1.log
[root@node101 ~]# echo "大王叫我来巡山" > /mnt/gluster10/test2.log
[root@node101 ~]# echo "大王叫我来巡山" > /mnt/gluster10/test3.log
[root@node101 ~]# echo "大王叫我来巡山" > /mnt/gluster10/test4.log
[root@node101 ~]# echo "大王叫我来巡山" > /mnt/gluster10/test5.log
[root@node101 ~]#
[root@node101 ~]# tree /home/yinzhengjie/glusterfs/
/home/yinzhengjie/glusterfs/
├── file1
├── file2
│   └── msg.log
├── file3
│   └── access.log
├── file4
├── file5
├── file6
│   ├── test1.log
│   ├── test2.log
│   └── test4.log
└── file7
    ├── test3.log
    └── test5.log

7 directories, 7 files
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# cat /home/yinzhengjie/glusterfs/file6/test1.log
大王叫我来巡山
[root@node101 ~]# cat /home/yinzhengjie/glusterfs/file6/test2.log
大王叫我来巡山
[root@node101 ~]# cat /home/yinzhengjie/glusterfs/file6/test4.log
大王叫我来巡山
[root@node101 ~]#
[root@node101 ~]# cat /home/yinzhengjie/glusterfs/file7/test3.log
大王叫我来巡山
[root@node101 ~]#
[root@node101 ~]# cat /home/yinzhengjie/glusterfs/file7/test5.log
大王叫我来巡山
[root@node101 ~]#
[root@node101 ~]#
[root@node101 ~]# ssh node102.yinzhengjie.org.cn
Last login: Mon Feb :: from node101.yinzhengjie.org.cn
[root@node102 ~]#
[root@node102 ~]#
[root@node102 ~]# tree /home/yinzhengjie/glusterfs/
/home/yinzhengjie/glusterfs/
├── file1
│   └── blog.txt
├── file2
│   └── msg.log
├── file3
│   └── access.log
├── file4
├── file5
├── file6
│   ├── test1.log
│   ├── test2.log
│   └── test4.log
└── file7
    ├── test3.log
    └── test5.log

7 directories, 8 files
[root@node102 ~]#
[root@node102 ~]#
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file6/test1.log
大王叫我来巡山
[root@node102 ~]#
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file6/test2.log
大王叫我来巡山
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file6/test4.log
大王叫我来巡山
[root@node102 ~]#
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file7/test3.log
大王叫我来巡山
[root@node102 ~]#
[root@node102 ~]# cat /home/yinzhengjie/glusterfs/file7/test5.log
大王叫我来巡山
[root@node102 ~]#
[root@node102 ~]# exit
logout
Connection to node102.yinzhengjie.org.cn closed.
[root@node101 ~]#
[root@node101 ~]#
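  One final note: the server name passed to mount.glusterfs is only used to fetch the volume layout; after that the client talks to all bricks directly. If node101 happened to be down at mount time the mount would fail, so on real clients a backup volfile server is usually supplied. A sketch, assuming the backup-volfile-servers mount option of the FUSE client (check `man mount.glusterfs` on your version to confirm the exact option name):

# Mount with a fallback volfile server in case node101 is unreachable at mount time
mount -t glusterfs \
    -o backup-volfile-servers=node102.yinzhengjie.org.cn \
    node101.yinzhengjie.org.cn:/my-distributed-replication-volume /mnt/gluster10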