Installing GlusterFS and mounting separate shares

Date: 2022-05-27 12:45:21

I'm running Red Hat 6.4.

Installing GlusterFS is just a yum install. First pull in the Gluster repo file:

    # wget -P /etc/yum.repos.d http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo 

  Server-side install:

         #yum install -y glusterfs glusterfs-server glusterfs-fuse

    Client-side install:

        #yum install -y glusterfs glusterfs-fuse

Done. Simple, right?
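
To confirm the packages actually landed before going any further, a quick check like this should do (a sketch; versions will differ on your box):

    # rpm -qa | grep glusterfs
    # glusterfs --version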


    First, a bit about the original setup. These are image servers; everything stored on them is pictures. Two servers export the same directory: s1 and s2 carry identical GlusterFS configurations and both share /var/test/. Multiple clients mount the share exported by s1 and s2, e.g. a client mounts it at /test locally. Whenever the client writes into /test, the data is written to both servers, so the two hold identical content and back each other up in case a disk dies. On top of that, the data is also backed up automatically to a separate backup machine every day.

    Now a new project has come along that also needs shared storage. It keeps using s1 and s2 as the GlusterFS servers, but with a new directory; let's call it /newtest.
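
Put differently, the end state looks like this:

    client writes /test     ->  replicated to s1:/var/test    and s2:/var/test
    client writes /newtest  ->  replicated to s1:/var/newtest and s2:/var/newtest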


Straight to the configuration files.

    Server:

    #vim /etc/glusterfs/glusterd.vol

    

volume brick
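  # posix brick backed by /var/test; wrapped by posix-locks below and exported on port 24000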

  type storage/posix

  option directory /var/test/

end-volume


volume locker

  type features/posix-locks

  subvolumes brick

end-volume


volume server

  type protocol/server

  option transport-type tcp/server

  option listen-port 24000

  subvolumes locker

  option auth.addr.brick.allow *

  option auth.addr.locker.allow *

end-volume


volume brick1
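  # second posix brick backed by /var/newtest; same chain, exported on port 24001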

  type storage/posix

  option directory /var/newtest/

end-volume


volume locker1

  type features/posix-locks

  subvolumes brick1

end-volume


volume server1

  type protocol/server

  option transport-type tcp/server

  option listen-port 24001

  subvolumes locker1

  option auth.addr.brick1.allow *

  option auth.addr.locker1.allow *

end-volume


Start the service:

#/etc/init.d/glusterd restart


Note: the /var/test and /var/newtest directories have to exist on s1 and s2 before starting. After starting, check that both exported ports are actually listening. s1 and s2 are configured exactly the same.
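
For example, on both s1 and s2 (a quick sketch; adjust if your paths differ):

    # mkdir -p /var/test /var/newtest
    # /etc/init.d/glusterd restart
    # netstat -ntlp | grep -E '24000|24001'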


Client:


# vim  /etc/glusterfs/photo.vol

    

volume client1
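  # first of two protocol/client volumes, one per server; both point at the 'locker' export on port 24000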

  type      protocol/client

  option    transport-type  tcp/client

  option    remote-host  x.x.x.x  # s1's IP

  option    transport.socket.remote-port 24000

  option    remote-subvolume locker

end-volume


volume client2

  type      protocol/client

  option    transport-type tcp/client

  option    remote-host x.x.x.x # s2's IP

  option    transport.socket.remote-port 24000

  option    remote-subvolume locker

end-volume


volume bricks
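  # cluster/replicate mirrors client1 and client2, so every write goes to both servers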

  type cluster/replicate

  subvolumes client1 client2

end-volume


### Add IO-Cache feature

volume iocache

  type performance/io-cache

  option page-size 8MB

  option page-count 2

  subvolumes bricks

end-volume


### Add writeback feature

volume writeback

  type performance/write-behind

  option aggregate-size 8MB

  option window-size 8MB

  option flush-behind off

  subvolumes iocache

end-volume


Mount: glusterfs -f /etc/glusterfs/photo.vol -l /tmp/photo.log /test
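
The mount point has to exist on the client first; roughly (a sketch):

    # mkdir -p /test
    # glusterfs -f /etc/glusterfs/photo.vol -l /tmp/photo.log /test
    # df -h /test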


Create a file or directory in /test and the same data shows up in /var/test on both s1 and s2.
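
A quick way to verify the replication (the prompts below only indicate which machine each command runs on; the filename is arbitrary):

    client# touch /test/replication-check
    s1# ls /var/test/
    s2# ls /var/test/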


Now for the new directory.


New Client:


# vim  /etc/glusterfs/photo1.vol


volume client1

  type      protocol/client

  option    transport-type  tcp/client

  option    remote-host  x.x.x.x # s1's IP

  option    transport.socket.remote-port 24001

  option    remote-subvolume locker1

end-volume


volume client2

  type      protocol/client

  option    transport-type tcp/client

  option    remote-host x.x.x.x # s2's IP

  option    transport.socket.remote-port 24001

  option    remote-subvolume locker1

end-volume


volume bricks

  type cluster/replicate

  subvolumes client1 client2

end-volume


### Add IO-Cache feature

volume iocache

  type performance/io-cache

  option page-size 8MB

  option page-count 2

  subvolumes bricks

end-volume


### Add writeback feature

volume writeback

  type performance/write-behind

  option aggregate-size 8MB

  option window-size 8MB

  option flush-behind off

  subvolumes iocache

end-volume




Mount: glusterfs -f /etc/glusterfs/photo1.vol -l /tmp/photo1.log /newtest


Create a file or directory in /newtest and the same data shows up in /var/newtest on both s1 and s2.
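
With both volfiles mounted, the client ends up with two independent GlusterFS mounts side by side; a quick check (output omitted):

    # df -h | grep -E '/test|/newtest'
    # mount | grep glusterfs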

