Deploying a Ceph environment with ceph-deploy

Date: 2021-09-22 03:16:43

uname -r
3.10.0-123.el7.x86_64
ceph -v
ceph version 0.94.7 (d56bdf93ced6b80b07397d57e3fa68fe68304432)


(1) Prepare five machines and add them to the /etc/hosts file on every node (see the hostname note after the list).
 192.168.0.2       ceph-client
 192.168.0.3       ceph-admin
 192.168.0.4       ceph-monitor
 192.168.0.5       ceph2
 192.168.0.6       ceph1
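
ceph-deploy addresses nodes by hostname, so each node's hostname should match its entry above. A minimal sketch, assuming CentOS 7's hostnamectl, run on each node with its own name (e.g. on 192.168.0.6):
 sudo hostnamectl set-hostname ceph1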


(2) Create a ceph user on every node and grant it passwordless sudo

 adduser -d /home/ceph -m ceph
 passwd ceph
 echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
 chmod 0440 /etc/sudoers.d/ceph
 sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
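
To confirm the ceph user really has passwordless sudo, a quick check such as the following should print "root" without prompting for a password:
 su - ceph
 sudo whoami
 exit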


(3) Configure passwordless SSH access from the admin-node to the other nodes

 ssh-keygen
 ssh-copy-id ceph@ceph2
 ssh-copy-id ceph@ceph-monitor
 ssh-copy-id ceph@ceph-client
 ssh-copy-id ceph@ceph-admin
 ssh-copy-id ceph@ceph1
 Add the following to ~/.ssh/config (create the file if it does not exist):
Host ceph2
    Hostname ceph2
    User ceph

Host ceph1
    Hostname ceph1
    User ceph

Host ceph-monitor
    Hostname ceph-monitor
    User ceph

Host ceph-client
    Hostname ceph-client
    User ceph

Host ceph-admin
    Hostname ceph-admin
    User ceph
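
With the keys copied and this config in place, logging in from the admin-node should no longer ask for a password; a quick check:
ssh ceph1 hostname        # should print "ceph1" with no password prompt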


(4) Disable the firewall and install/configure NTP

sudo systemctl disable firewalld
sudo systemctl stop firewalld

sudo yum install -y ntp ntpdate ntp-doc
sudo ntpdate 0.cn.pool.ntp.org
sudo hwclock -w
sudo systemctl enable ntpd.service
sudo systemctl start ntpd.service 
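
Clock skew between nodes causes Ceph health warnings, so it is worth checking that ntpd is actually synchronizing; ntpq lists the peers being polled:
ntpq -p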

(5) Install ceph-deploy on the admin-node
Step 1: add a yum repo file
sudo vim /etc/yum.repos.d/ceph.repo
Add the following content:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-firefly/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
 Step 2: update the package index and install ceph-deploy
sudo yum update && sudo yum install ceph-deploy
sudo yum install yum-plugin-priorities
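
To confirm the tool is installed (ceph-deploy prints its version with --version):
ceph-deploy --version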
(6) As the root user, create a myceph working directory
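A minimal sketch for this step, run on ceph-admin; the ceph-deploy commands below are executed from inside this directory so that the generated ceph.conf and keyrings stay together:
mkdir ~/myceph
cd ~/myceph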


(7) Create a cluster with ceph-monitor as the monitor node

ceph-deploy new ceph-monitor
echo "osd pool default size = 2" >> ceph.conf
ceph-deploy install ceph-admin ceph-monitor ceph1 ceph2
ceph-deploy mon create-initial

(8) Allocate disk space for the OSD daemons on the storage nodes:
ssh ceph1
sudo mkfs.xfs /dev/vda5 -f   (do not forget to format the partition)
exit
ssh ceph2
sudo mkfs.xfs /dev/vda5 -f   (do not forget to format the partition)
exit
#ceph-deploy disk zap ceph1:vda5   (not needed for a single partition; required when using a whole disk)
ceph-deploy osd prepare ceph1:/dev/vda5  ceph2:/dev/vda5
ceph-deploy osd activate ceph1:/dev/vda5  ceph2:/dev/vda5
ceph-deploy admin ceph-admin  ceph-monitor ceph1 ceph2
sudo chmod  +r /etc/ceph/ceph.client.admin.keyring
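
Once the admin keyring is readable, the cluster state can be checked from any of these nodes with the standard status commands:
ceph -s          # should eventually report HEALTH_OK with "2 osds: 2 up, 2 in"
ceph osd tree    # shows one OSD on ceph1 and one on ceph2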


************************************************************
Client-side installation (RBD; the Ceph MDS module is not required):
1. Prepare the client-node
Run the following commands from the admin-node:
ceph-deploy  install  ceph-client
ceph-deploy admin   ceph-client
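
Note that ceph-deploy admin installs /etc/ceph/ceph.client.admin.keyring readable only by root, so if the rbd commands below are run as a non-root user on ceph-client, make the keyring readable first (same chmod as in step 8):
sudo chmod +r /etc/ceph/ceph.client.admin.keyring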
 
2. Create a block device image:
rbd create foo --size 4096
Map the block device provided by Ceph on the client-node:
sudo rbd map foo --pool rbd --name client.admin
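To see which device the image was mapped to (rbd showmapped lists current mappings; udev also creates the /dev/rbd/<pool>/<image> symlink used below):
sudo rbd showmapped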
3. Create a filesystem on the block device
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
4. Mount the filesystem
sudo mkdir /mnt/test
sudo mount /dev/rbd/rbd/foo /mnt/test
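
A quick check that the 4 GB image is mounted:
df -h /mnt/test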

Reference:

http://www.centoscn.com/CentosServer/test/2015/0521/5489.html