Prerequisite: a Ceph cluster has already been deployed.
Because lab resources are limited, the Ceph cluster in this walkthrough runs on the k8s master node.
I. Create a Ceph storage pool
Run the following command on a Ceph mon node:
ceph osd pool create k8s-volumes 64 64
Check the replica count:
[root@master ceph]# ceph osd pool get k8s-volumes size
size: 3
Choose the number of PGs according to the following formula:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
Round the result to the nearest power of 2. For example, with 2 OSDs, a replication count of 3, and 1 pool, the formula gives 66.66; the nearest power of 2 is 64, so each pool gets 64 PGs.
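Optionally verify the pool on the mon node. This is only a quick sketch; the application tag is a Luminous-and-later convention (which matches the rpm-luminous repository used below) and avoids a health warning, it is not strictly required:
ceph osd pool get k8s-volumes pg_num               # expect pg_num: 64
ceph osd pool application enable k8s-volumes rbd   # tag the pool for RBD use (Luminous+)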
II. Install ceph-common on all k8s nodes
1. Configure Aliyun (China mirror) yum repositories and the Ceph repository
cp -r /etc/yum.repos.d/ /etc/yum-repos-d-bak
yum install -y wget
rm -rf /etc/yum.repos.d/*
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel-7.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all
yum makecache
cat <<EOF > /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/x86_64/
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-luminous/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
EOF
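Optionally refresh the cache and confirm the new Ceph repositories are active (a small sanity check, not required):
yum clean all && yum makecache
yum repolist enabled | grep -i ceph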
2. Install ceph-common
yum -y install ceph-common
3. Copy /etc/ceph/ceph.conf from the Ceph mon node to /etc/ceph/ on every k8s node.
4. Copy /etc/ceph/ceph.client.admin.keyring from the Ceph mon node to /etc/ceph/ on every k8s node, then verify connectivity as sketched below.
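With ceph.conf and the keyring in place, every k8s node should be able to reach the cluster; a quick check, run on any k8s node:
ceph -s            # should print the cluster status and the mon address
ceph osd pool ls   # k8s-volumes should appear in the list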
III. Connect k8s to Ceph using the RBAC approach
Because this k8s cluster was deployed with kubeadm, kube-controller-manager runs as a container that does not include ceph-common, so we use an external storage-volume provisioner plugin (rbd-provisioner) instead.
Briefly, the plugin ships with both rbac and no-rbac manifests; since our k8s cluster was built with RBAC enabled, we deploy it the rbac way.
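To see for yourself that the controller-manager has no rbd binary, something like the following can be run; the pod name depends on your master's hostname, so it is only an illustrative example:
kubectl -n kube-system get pods | grep kube-controller-manager
kubectl -n kube-system exec kube-controller-manager-master -- rbd --version   # expected to fail: rbd executable not found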
1. Pull the plugin image (I have already pushed it to an Aliyun image registry):
docker pull registry.cn-hangzhou.aliyuncs.com/boshen-ns/rbd-provisioner:v1.
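Note that the Deployment in step 7 sets imagePullPolicy: Never, so the image must already be present locally on every node that may run the pod. After pulling it (or transferring it with docker save / docker load) on each node, confirm with:
docker images | grep rbd-provisioner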
2. Create /root/k8s-ceph-rbac/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rbd-provisioner
3. Create /root/k8s-ceph-rbac/clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "create", "delete"]
4. Create /root/k8s-ceph-rbac/clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
5. Create /root/k8s-ceph-rbac/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
6. Create /root/k8s-ceph-rbac/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
7. Create /root/k8s-ceph-rbac/deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: rbd-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
        - name: rbd-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/boshen-ns/rbd-provisioner:v1.
          imagePullPolicy: Never
          env:
            - name: PROVISIONER_NAME
              value: ceph.com/rbd
      serviceAccount: rbd-provisioner
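The PROVISIONER_NAME value must match the provisioner field of the StorageClass in step 9 (both are ceph.com/rbd here); a quick consistency check:
grep -n "ceph.com/rbd" /root/k8s-ceph-rbac/deployment.yaml /root/k8s-ceph-rbac/ceph-storageclass.yaml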
8. Create /root/k8s-ceph-rbac/ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
type: "kubernetes.io/rbd"
data:
  key: QVFDQmRvbGNxSHlaQmhBQW45WllIbCtVd2JrTnlPV0xseGQ4RUE9PQ==
The value of key above is obtained as follows (an equivalent kubectl one-liner is sketched after the command):
[root@master ~]# grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
QVFDQmRvbGNxSHlaQmhBQW45WllIbCtVd2JrTnlPV0xseGQ4RUE9PQ==
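Equivalently, the secret can be created directly from the cluster keyring without hand-encoding base64; a sketch, assuming the client.admin user is used as above (kubectl base64-encodes the literal for you):
kubectl create secret generic ceph-secret --type="kubernetes.io/rbd" \
  --from-literal=key="$(ceph auth get-key client.admin)"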
9. Create /root/k8s-ceph-rbac/ceph-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-storage-class
provisioner: ceph.com/rbd
parameters:
  #monitors: 192.168.137.10:6789
  monitors: ceph-mon-1.default.svc.cluster.local.:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: default
  pool: k8s-volumes
  userId: admin
  userSecretName: ceph-secret
  fsType: ext4
  imageFormat: "2"
  imageFeatures: layering
Note: monitors above cannot be written as a plain IP address, otherwise later PVC creation fails with: missing Ceph monitors. In the provisioner's source code the monitors value is resolved through k8s DNS, and since we use an external Ceph cluster there is no such record, so we add the resolution manually in step 10.
10. Create /root/k8s-ceph-rbac/rbd-monitors-dns.yaml
kind: Service
apiVersion: v1
metadata:
  name: ceph-mon-1
spec:
  type: ExternalName
  externalName: 192.168.137.10.xip.io
The Ceph mon address is 192.168.137.10:6789 (xip.io resolves 192.168.137.10.xip.io back to that IP).
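To confirm that the ExternalName service resolves inside the cluster, a throwaway pod can be used (busybox:1.28 is just an example image with a working nslookup):
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup ceph-mon-1.default.svc.cluster.local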
11. Run the following command to apply all of the YAML files created in the steps above:
kubectl apply -f k8s-ceph-rbac/
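Then check that everything came up (a sketch; the app=rbd-provisioner label comes from the Deployment above):
kubectl get pods -l app=rbd-provisioner      # the provisioner pod should be Running
kubectl logs -l app=rbd-provisioner          # it should start without errors
kubectl get storageclass ceph-storage-class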
12. Test that provisioning works
1) Create test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ceph-storage-class
  resources:
    requests:
      storage: 1Gi
kubectl apply -f test-pvc.yaml
If the status shows Bound, the PVC was created correctly; a few extra checks are sketched below.
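A corresponding PV and an RBD image in the pool should also have been created; to double-check (run rbd ls on a Ceph node or any node with the admin keyring):
kubectl get pvc test-pvc
kubectl get pv
rbd ls -p k8s-volumes    # an auto-generated image for the PVC should be listed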
Further reading: https://github.com/kubernetes-incubator/external-storage