Preparation:
Part 1: Create the virtual machine
Install from the installation image over the network, finish the installation in the installer, and then seal (template) the VM.
1. Configure the network: /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
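A quick way to apply and verify the DHCP setting (an optional check, not part of the original steps):
/etc/init.d/network restart    ## restart networking so the new ifcfg-eth0 takes effect
ip addr show eth0              ## confirm eth0 obtained an address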
2. Change the hostname: /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=host.example.com
3. Set up the yum repository: /etc/yum.repos.d/rhel-source.repo
[rhel6.5]
name=rhel6.5
baseurl=http://172.25.254.46/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
4. Configure SELinux (set it to disabled): /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
# targeted - Targeted processes are protected,
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
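The disabled setting only takes effect after a reboot; to drop SELinux enforcement in the current session as well (an optional check):
setenforce 0    ## switch to permissive mode immediately (fully disabled requires a reboot)
getenforce      ## verify the current mode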
5. Install lftp, ssh, and vim with yum.
6. Clean up caches: rm -fr /tmp/* ; rm -fr /cache/*
7. Disable the firewall: chkconfig iptables off (do not start iptables at boot)
/etc/init.d/iptables stop (stop iptables now)
After sealing is complete, create snapshot images backed by the template:
qemu-img create -f qcow2 -b rhel6.5.qcow2 vm1
qemu-img create -f qcow2 -b rhel6.5.qcow2 vm2
qemu-img create -f qcow2 -b rhel6.5.qcow2 vm3
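To confirm that each snapshot really points at the sealed template (an optional check; qemu-img resolves a relative backing file against the snapshot's own directory, so run it where the images live):
qemu-img info vm1    ## should list rhel6.5.qcow2 as the backing file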
Part 2: Build the cluster (apply these changes on vm1, vm2, and vm3).
1. Set the IP address: vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
NAME="eth0"
ONBOOT="yes"
BOOTPROTO="none"
IPADDR0=172.25.254.111   (use 172.25.254.222 on vm2 and 172.25.254.44 on vm3)
PREFIX0=24
2. Change the hostname: vim /etc/sysconfig/network
3. Add name resolution entries: vim /etc/hosts
172.25.254.111 vm1.wang.com
172.25.254.222 vm2.wang.com
172.25.254.44 vm3.wang.com
172.25.254.46 foundation46.ilt.example.com
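A quick sanity check that the nodes resolve each other (run from any node; optional):
ping -c1 vm1.wang.com
ping -c1 vm2.wang.com
ping -c1 vm3.wang.com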
4. Set up the yum repositories:
[Server]
name=Red Hat Enterprise Linux Server
baseurl=http://172.25.254.46/rhel6.5/Server
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[HighAvailability]
name=Red Hat Enterprise Linux HighAvailability
baseurl=http://172.25.254.46/rhel6.5/HighAvailability
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[LoadBalancer]
name=Red Hat Enterprise Linux LoadBalancer
baseurl=http://172.25.254.46/rhel6.5/LoadBalancer
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[ResilientStorage]
name=Red Hat Enterprise Linux ResilientStorage
baseurl=http://172.25.254.46/rhel6.5/ResilientStorage
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[ScalableFileSystem]
name=Red Hat Enterprise Linux ScalableFileSystem
baseurl=http://172.25.254.46/rhel6.5/ScalableFileSystem
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
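After saving the repo file, verify that all five repositories are usable (an optional check):
yum clean all    ## drop cached metadata from the old single repository
yum repolist     ## should list Server, HighAvailability, LoadBalancer, ResilientStorage and ScalableFileSystem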
5. On the management node vm3, install luci and start the luci service:
yum install luci -y
/etc/init.d/luci start
chkconfig luci on
6. On the cluster nodes vm1 and vm2, install ricci, start it, and set the ricci password:
yum install ricci -y
/etc/init.d/ricci start
chkconfig ricci on
passwd ricci
7. Access the luci web interface at https://vm3.wang.com:8084 from a browser and create the cluster there; a quick verification from the nodes is sketched below.
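Once the cluster (here assumed to be named wang, matching the lock table used later) has been created in luci and both nodes have joined, it can be checked from either node; a minimal verification:
[root@vm1 ~]# clustat             ## vm1.wang.com and vm2.wang.com should both show as Online
[root@vm1 ~]# cman_tool status    ## shows the cluster name, quorum state and node count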
Part 3: Configure the fence device
1. On the physical host that runs the virtual machines:
[root@foundation46 ~]#yum install fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virt.x86_64 -y
[root@foundation46 ~]#fence_virtd -c
Module search path [/usr/lib64/fence-virt]:
Listener module [multicast]:
Multicast IP Address [225.0.0.12]:
Multicast IP Port [1229]:
Interface [virbr0]: br0
Key File [/etc/cluster/fence_xvm.key]:
Backend module [libvirt]:
Configuration complete.
Replace /etc/fence_virt.conf with the above [y/N]? y
[root@foundation46 ~]# netstat -antulpe | grep 1229
udp   0   0 0.0.0.0:1229   0.0.0.0:*   0   49828   15017/fence_virtd
[root@foundation46 ~]# mkdir /etc/cluster
[root@foundation46 ~]# cd /etc/cluster/
[root@foundation46 cluster]# dd if=/dev/random of=/etc/cluster/fence_xvm.key bs=128 count=1
[root@foundation46 cluster]# file /etc/cluster/fence_xvm.key
/etc/cluster/fence_xvm.key: data
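Note: /dev/random can block until the host has gathered enough entropy; if the dd command above hangs, /dev/urandom is a common substitute for generating the key:
[root@foundation46 cluster]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1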
[root@foundation46 cluster]# scp fence_xvm.key root@172.25.254.111:/etc/cluster
[root@foundation46 cluster]# scp fence_xvm.key root@172.25.254.222:/etc/cluster
[root@foundation46 ~]# systemctl start fence_virtd
[root@foundation46 ~]# systemctl enable fence_virtd
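With fence_virtd running on the host and the key copied to both nodes, fencing can be tested from a cluster node before relying on it. This is a sketch; it assumes fence_xvm (provided by the fence-virt package) is installed on the nodes and that the libvirt domain names really are vm1/vm2/vm3:
[root@vm1 ~]# fence_xvm -o list             ## should list the domains served by fence_virtd
[root@vm1 ~]# fence_xvm -o reboot -H vm2    ## reboots the vm2 domain if fencing works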
Part 4: Add a cluster service (e.g., httpd) to the cluster built on top of fencing
1. Install httpd on vm1 and vm2. Do not start httpd manually on either node; the cluster manager starts and stops it.
[root@vm1/2~]# yum install httpd -y
[root@vm1/2~]# echo `hostname` > /var/www/html/index.html
2. Configure the failover domain, the resources, and the www service group in the luci web interface; a sketch of the resulting cluster.conf is shown below.
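For reference, this is roughly what the service section of /etc/cluster/cluster.conf may look like after that step. It is only a sketch: the failover-domain name webfd, the priorities, and the VIP 172.25.254.88 are illustrative assumptions, not taken verbatim from the original setup:
[root@vm1 ~]# cat /etc/cluster/cluster.conf
...
<rm>
    <failoverdomains>
        <failoverdomain name="webfd" nofailback="1" ordered="1" restricted="1">
            <failoverdomainnode name="vm1.wang.com" priority="1"/>
            <failoverdomainnode name="vm2.wang.com" priority="2"/>
        </failoverdomain>
    </failoverdomains>
    <resources>
        <ip address="172.25.254.88" monitor_link="on"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service domain="webfd" name="www" recovery="relocate">
        <ip ref="172.25.254.88"/>
        <script ref="httpd"/>
    </service>
</rm>
...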
3. Test whether fencing and the cluster work
From a client browser, access 172.25.254.88 (the service VIP). Because of the node priorities, the request first reaches vm1's page, and httpd on vm1 is started automatically by the cluster. If the httpd service or the host on vm1 fails, the service automatically relocates to vm2 and httpd is started there. Once vm1 is repaired, the service does not fail back unless vm2 itself fails.
(1) Fence vm2 from vm1: fence_node vm2.wang.com
(2) Crash the running node deliberately: echo c > /proc/sysrq-trigger
(3) Relocate the www service to vm2: clusvcadm -r www -m vm2.wang.com
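During these tests it is convenient to watch the service move in real time from the surviving node (an optional monitoring command):
[root@vm2 ~]# clustat -i 1    ## refresh the cluster and service status every second while the other node is fenced or crashed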
Part 5: Share a block device in the cluster via iSCSI
1. On the node that exports the disk (the iSCSI target), vm3:
[root@vm3 ~]# yum install scsi-target-utils.x86_64 -y
[root@vm3 ~]# fdisk -l /dev/vdb
Disk /dev/vdb: 4294 MB, 4294967296 bytes
[root@vm3 ~]# fdisk /dev/vdb
Command (m for help): n
Command action p
Partition number (1-4): 1
First cylinder (1-8322, default 1): (press Enter to accept the default)
Last cylinder, +cylinders or +size{K,M,G} (1-8322, default 8322): (press Enter to accept the default)
Command (m for help): wq
The partition table has been altered!
[root@vm3 ~]# vim /etc/tgt/targets.conf
@@@@@@
<target iqn.2017-02.com.wang:vm3.disk1>
    backing-store /dev/vdb1
    initiator-address 172.25.254.111
    initiator-address 172.25.254.222
</target>
## The iSCSI target parameters are configured in this file; on RHEL 7 they are configured directly with commands instead
@@@@@@
[root@vm3 ~]# /etc/init.d/tgtd start
[root@vm3 ~]# tgt-admin -s
## show the iSCSI target information; part of the output is omitted
Backing store path: /dev/vdb1
Account information:
ACL information:
172.25.254.111
172.25.254.222
2. On vm1, attach the device exported by vm3 over iSCSI:
[root@vm1 ~]# clusvcadm -d www ## disable the www service
Local machine disabling service:www...Success
[root@vm1 ~]# yum install iscsi* -y
[root@vm1 ~]# iscsiadm -m discovery -t st -p 172.25.254.44
[root@vm1 ~]# iscsiadm -m node -l
[root@vm1 ~]# fdisk -l
/dev/sda
Disk /dev/sda: 4294 MB, 4294918656 bytes
[root@vm1 ~]# fdisk -cu /dev/sda
Command (m for help): n
Command action
p
Partition number (1-4): 1
First sector (2048-8388512, default 2048): (press Enter to accept the default)
Last sector, +sectors or +size{K,M,G} (2048-8388512, default 8388512): (press Enter to accept the default)
Command (m for help): t
Selected partition 1
Hex code (type L to list codes): 8e
Command (m for help): wq
The partition table has been altered!
[root@vm1 ~]# partprobe
[root@vm1 ~]# cat /proc/partitions | grep sda1
major minor #blocks name
8 1 4193232 sda1
[root@vm1 ~]# /etc/init.d/clvmd start ## start the clustered LVM daemon
[root@vm1 ~]# vim /etc/lvm/lvm.conf
## the LVM locking setting lives in this file; see the sketch below
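In lvm.conf the relevant knob is locking_type; for clustered LVM it must be 3. A minimal way to check or set it (lvmconf is the usual helper once the clvmd packages are installed):
[root@vm1 ~]# grep locking_type /etc/lvm/lvm.conf    ## should read locking_type = 3 for clustered locking
[root@vm1 ~]# lvmconf --enable-cluster               ## sets locking_type = 3 if it is not set already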
[root@vm1 ~]# pvcreate /dev/sda1
## Create the physical volume; it only needs to be created on one cluster node, the other nodes discover it automatically
[root@vm1 ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/sda1 lvm2 a-- 4.00g 4.00g
[root@vm1 ~]# vgcreate clustervg /dev/sda1
## Create the clustered volume group; it only needs to be created on one cluster node, the other nodes discover it automatically
[root@vm1 ~]# vgs
VG #PV #LV #SN Attr VSize VFree
VolGroup 1 2 0 wz--n- 7.51g 0
clustervg 1 0 0 wz--nc 4.00g 4.00g
## running vgs on vm2 shows the same output, i.e. the volume group is synchronized
[root@vm1 ~]# lvcreate -L 2G -n clusterlv clustervg
## Create the logical volume; it only needs to be created on one cluster node, the other nodes discover it automatically
3. On vm2, attach the device exported by vm3 over iSCSI:
[root@vm2 ~]# clusvcadm -d www
Local machine disabling service:www...Success
[root@vm2 ~]# yum install iscsi-initiator-utils.x86_64 -y
[root@vm2 ~]# iscsiadm -m discovery -t st -p 172.25.254.44
172.25.254.44:3260,1 iqn.2017-02.com.wang:vm3.disk1
[root@vm2 ~]# iscsiadm -m node -l
[root@vm2 ~]# partprobe
[root@vm2 ~]# cat /proc/partitions | grep sda1
## No LVM commands are needed on this node; the volumes created on vm1 show up automatically
[root@vm2 ~]# vgs
[root@vm2 ~]# lvs
Part 6: Format the device with an ext4 file system
1. On vm1:
[root@vm1 ~]# mkfs.ext4 /dev/mapper/clustervg-clusterlv
## format the logical volume as ext4
[root@vm1 ~]#mount /dev/mapper/clustervg-clusterlv /var/www/html/
2. Set up the web page content (see the sketch below).
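A minimal sketch of step 2, assuming the index page again just shows the hostname. Because ext4 is not a cluster file system, it should be mounted on only one node at a time; after writing the page, unmount it and let the cluster manage the mount:
[root@vm1 ~]# echo "cluster storage on `hostname`" > /var/www/html/index.html
[root@vm1 ~]# umount /var/www/html
## add /dev/mapper/clustervg-clusterlv as a Filesystem resource of the www service in luci,
## so rgmanager mounts it on whichever node currently runs the service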
Part 7: Format the device with a GFS2 file system
Reformat the shared storage on /dev/sda1 (the clustervg/clusterlv logical volume) as GFS2. If the existing file system already holds data, back it up first and restore it after reformatting. The Red Hat GFS2 file system is included in the Resilient Storage Add-On. It is a native file system that interfaces directly with the Linux kernel file system interface (the VFS layer). When used as a cluster file system, GFS2 employs distributed metadata and multiple journals. Red Hat supports GFS2 file systems only when deployed on the High Availability Add-On, and does not support GFS2 cluster file systems spanning more than 16 nodes.
[root@vm1 ~]# clusvcadm -d www
[root@vm1 ~]# mkfs.gfs2 -p lock_dlm -t wang:mygfs2 -j 3 /dev/mapper/clustervg-clusterlv
## -p specifies the locking protocol; lock_dlm is the locking protocol GFS2 uses to communicate between nodes.
## -j specifies the number of journals created in the file system. This example assumes a three-node cluster, so one journal is created per node.
## -t specifies the lock table name, in the format cluster_name:fs_name.
This will destroy any data on /dev/clustervg/clusterlv.
It appears to contain: symbolic link to `../dm-2'
Are you sure you want to proceed? [y/n] y
Device: /dev/clustervg/clusterlv
Blocksize: 4096
Device Size 4.00 GB (1048576 blocks)
Filesystem Size: 4.00 GB (1048575 blocks)
Journals: 3
Resource Groups: 16
Locking Protocol: "lock_dlm"
Lock Table: "wang:mygfs2"
UUID: 2a1631e7-25c8-1387-0508-338819de25ca
Changing the lock table name (if it does not match the cluster name):
[root@vm1 mapper]# gfs2_tool sb /dev/clustervg/clusterlv all
mh_magic = 0x01161970
mh_type = 1
mh_format = 100
sb_fs_format = 1801
sb_multihost_format = 1900
sb_bsize = 4096
sb_bsize_shift = 12
no_formal_ino = 2
no_addr = 23
no_formal_ino = 1
no_addr = 22
sb_lockproto = lock_dlm
sb_locktable = mycluster:megfs2
uuid = 2a1631e7-25c8-1387-0508-338819de25ca
[root@vm1 mapper]# gfs2_tool sb /dev/clustervg/clusterlv table wang:mygfs2
[root@vm1 mapper]# gfs2_tool sb /dev/clustervg/clusterlv all
mh_magic = 0x01161970
mh_type = 1
mh_format = 100
sb_fs_format = 1801
sb_multihost_format = 1900
sb_bsize = 4096
sb_bsize_shift = 12
no_formal_ino = 2
no_addr = 23
no_formal_ino = 1
no_addr = 22
sb_lockproto = lock_dlm
sb_locktable = wang:mygfs2
uuid = 2a1631e7-25c8-1387-0508-338819de25ca
[root@vm1 ~]# df -h
[root@vm1 mysql]# blkid
[root@vm1 mysql]# vim /etc/fstab
@@@@@@
UUID=2a1631e7-25c8-1387-0508-338819de25ca /var/www/html gfs2 _netdev 0 0
@@@@@@
[root@vm1 html]# mount -a
[root@vm1 html]# clusvcadm -e www ## enable (start) the www service
Local machine trying to enable service:www...Success
service:www is now running on vm1.wang.com
[root@vm1 html]# lvextend -L +2G /dev/clustervg/clusterlv
[root@vm1 html]# gfs2_grow /dev/clustervg/clusterlv
FS: Mount Point: /var/www/html
FS: Device: /dev/dm-2
FS: Size: 1048575 (0xfffff)
FS: RG size: 65533 (0xfffd)
DEV: Size: 1572864 (0x180000)
The file system grew by 2048MB.
gfs2_grow complete.
[root@vm1 html]# df -h
Synchronize the change on vm2:
[root@vm2 html]# vim /etc/fstab
[root@vm2 html]# mount -a
[root@vm2 html]# df -h   (if the size has not changed yet, run df again until it updates)
This article is from the "元小光" (Yuan Xiaoguang) blog; please keep this attribution: http://yuanxiaoguang.blog.51cto.com/11338250/1897822