Ceph RBD: Providing Distributed Data Storage for a Kubernetes Cluster

Date: 2021-08-25 12:45:55

I. Environment and prerequisites

Two virtual machines running Ubuntu 14.04.4 LTS, with IPs 192.168.110.151 (hostname: master) and 192.168.110.152 (hostname: dockertest4).

Ceph is the latest LTS release available in the current Ubuntu 14.04 repositories: Jewel (10.2.3).

Kubernetes is the 1.4 release from the previous installation.

II. How the Ceph installation works

A Ceph distributed storage cluster is made up of several components, including the Ceph Monitor, Ceph OSD, and Ceph MDS. If you only use object storage and block storage, the MDS is not required (we do not install it in this walkthrough); the MDS is only needed when you use CephFS.

Ceph's deployment model is somewhat similar to k8s: a deploy node remotely operates on the other nodes, creating, preparing, and activating the Ceph components on each node. The official manual provides a diagram of this deployment model.

III. Install ceph-deploy on 151

1. Configure the APT source

#wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
 OK 
# echo deb https://download.ceph.com/debian-jewel/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list 

2. Update APT


apt-get update


3. Install ceph-deploy


apt-get install ceph-deploy

Note: ceph-deploy only needs to be installed on the admin/deploy node.


IV. Configure passwordless login between 151 and 152

1. Create a user named cephd on every Ceph node (including the admin/deploy node) and add it to the sudo group.

Run the following commands on both 151 and 152:

useradd -d /home/cephd -m cephd

passwd cephd

Grant sudo privileges:

echo "cephd ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/cephd

sudo chmod 0440 /etc/sudoers.d/cephd

2. On the admin node (deploy node), log in as cephd and set up passwordless SSH from the deploy node to each of the other nodes, leaving the passphrase empty.

Run on the deploy node:

$ ssh-keygen
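
If you prefer to avoid the interactive prompts, the empty passphrase and the default key path can also be passed on the command line; a minimal sketch (assuming the default RSA key location):

$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa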

3. Copy the public key

Copy the deploy node's public key to itself and to 152:

$ ssh-copy-id cephd@master

$ ssh-copy-id cephd@dockertest4

4. Finally, create and edit ~/.ssh/config on the deploy node. This step is recommended by the official Ceph docs; it saves you from specifying the --username {username} argument every time you run ceph-deploy.

//~/.ssh/config
Host master
   Hostname master
   User cephd
Host dockertest4
   Hostname dockertest4
   User cephd
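
With the keys and the config file in place, logins from the deploy node should no longer ask for a password; a quick check:

$ ssh master hostname
$ ssh dockertest4 hostname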

V. If Ceph was installed before, first run the following commands (on 151) to get a clean environment:

ceph-deploy purge master dockertest4

ceph-deploy forgetkeys

ceph-deploy purgedata master dockertest4


VI. Create a working directory

On 151, create a cephinstall directory and run the remaining steps from inside it.
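
A minimal sketch (the directory name follows this article; any path under the cephd home would work):

$ mkdir ~/cephinstall
$ cd ~/cephinstall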

VII. Ceph installation steps

1. First, create a Ceph cluster by running ceph-deploy new {initial-monitor-node(s)}. Following the design above, our Ceph monitor node is master, so run the command below to create a cluster named ceph:

sudo ceph-deploy new master

2. After the new command completes, ceph-deploy creates some helper files in the current directory:

# ls

ceph.conf  ceph-deploy-ceph.log  ceph.mon.keyring

$ cat ceph.conf

[global]

fsid = f5166c78-e3b6-4fef-b9e7-1ecf7382fd93

mon_initial_members = master

mon_host = 192.168.110.151

auth_cluster_required = cephx

auth_service_required = cephx

auth_client_required = cephx

3. Since we only have two OSD nodes, we need to make a small adjustment to ceph.conf before going any further.

Under the [global] section, add the following line:

osd pool default size = 2


4. Save ceph.conf and exit. Next, run the following command to install the Ceph binaries on master and dockertest4:

# sudo ceph-deploy install master dockertest4

5. Initialize the Ceph monitor node

With all the Ceph programs in place, the first thing to initialize is the cluster's monitor node. In the cephinstall working directory on the deploy node, run:

# sudo ceph-deploy mon create-initial

6. On master (the monitor node), ceph-mon is now up and running:

cephd@master:~/cephinstall$ ps -ef | grep ceph
root       5814   4834  0 05:19 pts/14   00:00:00 su - cephd
cephd      5815   5814  0 05:19 pts/14   00:00:09 -su
ceph      41046      1  0 16:58 ?        00:00:03 /usr/bin/ceph-mon --cluster=ceph -i master -f --setuser ceph --setgroup ceph
ceph      41123      1  0 16:58 ?        00:00:04 /usr/bin/ceph-osd --cluster=ceph -i 0 -f --setuser ceph --setgroup ceph
cephd     41396   5815  0 17:30 pts/14   00:00:00 ps -ef
cephd     41397   5815  0 17:30 pts/14   00:00:00 grep --color=auto ceph

Note: this point matters. If the ceph processes are not running, the later prepare and activate steps will report "No data was received after 300 seconds, disconnecting...". Although it reads like a mere warning, it means the Ceph installation has failed.
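
Before moving on, it is worth confirming from the deploy node that the monitor process really is up; a minimal sketch:

$ ssh master 'ps -ef | grep "[c]eph-mon"'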

7. Prepare the Ceph OSD nodes

At this point ceph-mon has started successfully; only the OSDs remain. Bringing up an OSD node takes two steps: prepare and activate. OSD nodes are where data is actually stored, so ceph-osd should normally be given dedicated storage, typically a separate disk. Our environment does not have one, so we create a directory on the local disk for each OSD instead.

Run from the deploy node:

ssh master
sudo mkdir /var/local/osd0
exit
ssh dockertest4
sudo mkdir /var/local/osd1
exit

8. Next, run the prepare step. It creates, in the osd0 and osd1 directories above, the files needed later by the activate step and by the running OSDs:

ceph-deploy osd prepare master:/var/local/osd0 dockertest4:/var/local/osd1

9. Activate the Ceph OSD nodes

ceph-deploy osd activate master:/var/local/osd0 dockertest4:/var/local/osd1

Note: this step usually fails with an error similar to the following:

[master][WARNIN] 2016-12-16 14:25:40.325075 7fd1aa73f800 -1  ** ERROR: error creating empty object store in 

/var/local/osd0: (13) Permission denied

The cause of the problem:

The osd0 directory is owned by root, so the ceph-osd process, which runs as the ceph user, has no permission to create files and write data under /var/local/osd0. This has been raised many times in the official Ceph issue tracker, and a temporary fix has been suggested: change the ownership of osd0 and osd1 to ceph:ceph.

On master:

sudo chown -R ceph:ceph /var/local/osd0

On dockertest4:

sudo chown -R ceph:ceph /var/local/osd1
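
Since cephd was given passwordless sudo earlier, the same fix can also be applied in one go from the deploy node; a minimal sketch:

$ ssh master 'sudo chown -R ceph:ceph /var/local/osd0'
$ ssh dockertest4 'sudo chown -R ceph:ceph /var/local/osd1'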

10. Re-activate the Ceph OSD nodes

ceph-deploy osd activate master:/var/local/osd0 dockertest4:/var/local/osd1

11. Next, check the status of the OSD nodes in the cluster:

cephd@master:~/cephinstall$ ceph osd tree
2016-12-16 16:55:43.674267 7f326989d700 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
2016-12-16 16:55:43.674669 7f326989d700 -1 monclient(hunting): ERROR: missing keyring, cannot use cephx for authentication
2016-12-16 16:55:43.674922 7f326989d700  0 librados: client.admin initialization error (2) No such file or directory

Clearly something is wrong. As a temporary workaround, this experiment skips authentication: edit /etc/ceph/ceph.conf on both 151 and 152 so that the file reads as follows:

[global]

osd_pool_default_size = 2

osd max object name len = 256

osd max object namespace len = 64

fsid = 48c8e252-2a4a-4af7-ba53-f93f744d3c6e

mon_initial_members = master

mon_host = 192.168.110.151

auth_cluster_required = none

auth_service_required = none

auth_client_required = none

Notes: (1) cephx was changed to none;

(2) the following two lines were added on top of the original file:

osd max object name len = 256

osd max object namespace len = 64

These two lines are needed because the VMs use an ext4 filesystem; see the official documentation for details: http://docs.ceph.com/docs/jewel/rados/configuration/filesystem-recommendations/

12. Check the OSD node status again:

cephd@master:~/cephinstall$ ceph osd tree
ID WEIGHT  TYPE NAME            UP/DOWN REWEIGHT PRIMARY-AFFINITY 
-1 0.05557 root default                                           
-2 0.02779     host master                                        
 0 0.02779         osd.0             up  1.00000          1.00000 
-3 0.02779     host dockertest4                                   
 1 0.02779         osd.1             up  1.00000          1.00000 
cephd@master:~/cephinstall$ ceph -s
    cluster 48c8e252-2a4a-4af7-ba53-f93f744d3c6e
     health HEALTH_OK
     monmap e1: 1 mons at {master=192.168.110.151:6789/0}
            election epoch 4, quorum 0 master
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v871: 64 pgs, 1 pools, 0 bytes data, 0 objects
            28434 MB used, 26765 MB / 58202 MB avail
                  64 active+clean

13. Ceph is now up and running.


VIII. Ceph and k8s integration tests


1. Test case 1 (based on the example under kubernetes/examples/volumes/rbd)

(1) rbd-with-secret.json, adapted to this experiment:

{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "rbd2"
    },
    "spec": {
        "containers": [
            {
                "name": "rbd-rw",
                "image": "kubernetes/pause",
                "volumeMounts": [
                    {
                        "mountPath": "/mnt/rbd",
                        "name": "rbdpd"
                    }
                ]
            }
        ],
        "volumes": [
            {
                "name": "rbdpd",
                "rbd": {
                    "monitors": [
                        "192.168.110.151:6789"
                                 ],
                    "pool": "rbd",
                    "image": "foo",
                    "user": "admin",
                    "secretRef": {
                        "name": "ceph-secret"
                        },
                    "fsType": "ext4",
                    "readOnly": true
                }
            }
        ]
    }
}

name: the volume name; self-explanatory.

rbd.monitors: the Ceph monitor component mentioned earlier. List the address of every monitor in the cluster, one entry per monitor; in this experiment there is only one, on 151.

rbd.pool: the name of the Ceph pool, used to logically partition the objects stored in Ceph. The default pool is "rbd".

rbd.image: the Ceph block device image.

rbd.user: the user name the Ceph client uses to access the Ceph storage cluster. Ceph has its own user management system; users are usually written as TYPE.ID, for example client.admin (think of the corresponding file ceph.client.admin.keyring). client is the type and admin is the user; in practice the type is almost always client.


secretRef: the name of the referenced k8s Secret object.


(2) Create an image

# rbd create foo -s 1024

(3) List images

# rbd list

foo

This creates a 1024 MiB Ceph image named foo in the rbd pool (no pool name was given in the command above, so the image goes into the default rbd pool). The rbd list output confirms that the foo image was created successfully.
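
To see the image's size and, more importantly for the next step, the feature set it was created with, rbd info can be used; a minimal sketch:

# rbd info foo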

(4) Map the foo image into the kernel

root@master:~# rbd map foo

Note: the first attempt to map the image fails with the following error:

rbd: sysfs write failed

RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable".

In some cases useful info is found in syslog - try "dmesg | tail" or so.

rbd: map failed: (6) No such device or address

Cause: the kernel shipped with Ubuntu 14.04 only supports the layering feature of the image's feature set.

Fix: (a) # rbd feature disable foo exclusive-lock, object-map, fast-diff, deep-flatten

(b) or add rbd_default_features = 1 to /etc/ceph/ceph.conf (1 is the integer value of the bit corresponding to layering) and then restart Ceph.

(5) Map the foo image again

# rbd map foo

/dev/rbd1

(6) Format image foo (use the device path returned by rbd map)

# mkfs.ext4 /dev/rbd0

(7) Create the ceph-secret k8s Secret object

# ceph auth get-key client.admin

AQBiKBxYuPXiJRAAsupnTBsURoWzb0k00oM3iQ==

# echo "AQBiKBxYuPXiJRAAsupnTBsURoWzb0k00oM3iQ=="|base64

QVFBZnExTll2NkRUQWhBQWZQQWNteXMzTjlQZHhDei9SOHBQamc9PQo=
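
One caveat: echo without -n appends a trailing newline, and that newline ends up inside the base64-encoded value. A safer way to produce the encoded key is to pipe the raw key straight into base64; a minimal sketch:

# ceph auth get-key client.admin | base64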

(8) Create ceph-secret.yaml

//ceph-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFBZnExTll2NkRUQWhBQWZQQWNteXMzTjlQZHhDei9SOHBQamc9PQo=
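
The Secret must exist in the cluster before any pod that references it is created; a minimal sketch:

# kubectl create -f ceph-secret.yaml
# kubectl get secret ceph-secret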

(9) Create the pod (it references the Secret defined in ceph-secret.yaml)

# kubectl create -f rbd-with-secret.json

pod "rbd2" created
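
To confirm that the pod is running and that the rbd volume was attached and mounted, the usual kubectl checks apply; a minimal sketch:

# kubectl get pod rbd2
# kubectl describe pod rbd2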

(10) Inspect the container

root@dockertest5:/home/docker/xu/k8s/server/bin# docker inspect 3dfa4a410643
[
    {
        "Id": "3dfa4a410643721a595f8d0bebe6c84af11ca87f957f387b11802c991bdbbc20",
        "Created": "2016-12-17T01:19:43.126748592Z",
        "Path": "/pause",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 4875,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2016-12-17T01:19:43.478492336Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:f9d5de0795395db6c50cb1ac82ebed1bd8eb3eefcebb1aa724e01239594e937b",
        "ResolvConfPath": "/var/lib/docker/containers/2e8914eda03d3e1f2a1cc73a505b446cc31632d2eac5a96c8fd3c6db60808a14/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/2e8914eda03d3e1f2a1cc73a505b446cc31632d2eac5a96c8fd3c6db60808a14/hostname",
        "HostsPath": "/var/lib/kubelet/pods/ceb15fbf-c3f6-11e6-a39e-000c29ec0bcd/etc-hosts",
        "LogPath": "/var/lib/docker/containers/3dfa4a410643721a595f8d0bebe6c84af11ca87f957f387b11802c991bdbbc20/3dfa4a410643721a595f8d0bebe6c84af11ca87f957f387b11802c991bdbbc20-json.log",
        "Name": "/k8s_rbd-rw.4ab8fe40_rbd2_default_ceb15fbf-c3f6-11e6-a39e-000c29ec0bcd_4826a0bb",
        "RestartCount": 0,
        "Driver": "aufs",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/var/lib/kubelet/pods/ceb15fbf-c3f6-11e6-a39e-000c29ec0bcd/volumes/kubernetes.io~rbd/rbdpd:/mnt/rbd",
                "/var/lib/kubelet/pods/ceb15fbf-c3f6-11e6-a39e-000c29ec0bcd/etc-hosts:/etc/hosts",
                "/var/lib/kubelet/pods/ceb15fbf-c3f6-11e6-a39e-000c29ec0bcd/containers/rbd-rw/4826a0bb:/dev/termination-log"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "container:2e8914eda03d3e1f2a1cc73a505b446cc31632d2eac5a96c8fd3c6db60808a14",
            "PortBindings": null,
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "container:2e8914eda03d3e1f2a1cc73a505b446cc31632d2eac5a96c8fd3c6db60808a14",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 1000,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 2,
            "Memory": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": -1,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "aufs",
            "Data": null
        },
        "Mounts": [
            {
                "Source": "/var/lib/kubelet/pods/ceb15fbf-c3f6-11e6-a39e-000c29ec0bcd/volumes/kubernetes.io~rbd/rbdpd",
                "Destination": "/mnt/rbd",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/kubelet/pods/ceb15fbf-c3f6-11e6-a39e-000c29ec0bcd/etc-hosts",
                "Destination": "/etc/hosts",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/kubelet/pods/ceb15fbf-c3f6-11e6-a39e-000c29ec0bcd/containers/rbd-rw/4826a0bb",
                "Destination": "/dev/termination-log",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            }
        ],
        "Config": {
            "Hostname": "rbd2",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "KUBERNETES_SERVICE_HOST=192.168.110.1",
                "KUBERNETES_SERVICE_PORT=443",
                "KUBERNETES_SERVICE_PORT_HTTPS=443",
                "KUBERNETES_PORT=tcp://192.168.110.1:443",
                "KUBERNETES_PORT_443_TCP=tcp://192.168.110.1:443",
                "KUBERNETES_PORT_443_TCP_PROTO=tcp",
                "KUBERNETES_PORT_443_TCP_PORT=443",
                "KUBERNETES_PORT_443_TCP_ADDR=192.168.110.1",
                "HOME=/",
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
            ],
            "Cmd": null,
            "Image": "kubernetes/pause",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/pause"
            ],
            "OnBuild": null,
            "Labels": {
                "io.kubernetes.container.hash": "4ab8fe40",
                "io.kubernetes.container.name": "rbd-rw",
                "io.kubernetes.container.restartCount": "0",
                "io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
                "io.kubernetes.pod.name": "rbd2",
                "io.kubernetes.pod.namespace": "default",
                "io.kubernetes.pod.terminationGracePeriod": "30",
                "io.kubernetes.pod.uid": "ceb15fbf-c3f6-11e6-a39e-000c29ec0bcd"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": null,
            "SandboxKey": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": null
        }
    }
]

2. Test case 2

Test case 1 combined a Kubernetes volume with Ceph RBD, but a plain k8s volume does not fully meet real production needs for persistent storage: the volume's lifetime is tied to the pod, so once the pod is deleted, the data in the volume is gone. Kubernetes therefore introduced the Persistent Volume (PV) and Persistent Volume Claim (PVC) pair. As the names suggest, even if the pod that mounted it is deleted, the PV still exists, and so does the data on it.

(1) Create a Ceph disk image

# rbd create ceph-image -s 128

(2) Create the PV (ceph-pv.yaml, contents below)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.110.151:6789
    pool: rbd
    image: ceph-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Recycle

(3) Create the PVC (ceph-pvc.yaml, contents below)

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ceph-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Note: how do a PV and a PVC get bound together? See the explanation at https://docs.openshift.com/enterprise/3.1/install_config/storage_examples/ceph_example.html
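
Both objects are created with kubectl like everything else; once the claim has been matched to ceph-pv, kubectl get pvc should show its STATUS as Bound. A minimal sketch:

# kubectl create -f ceph-pv.yaml
# kubectl create -f ceph-pvc.yaml
# kubectl get pv
# kubectl get pvc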

(4) Create the pod (ceph-es.yaml, contents below)


apiVersion: v1
kind: Pod
metadata:
  name: ceph-pod1
spec:
  containers:
  - name: es
    image: 192.168.110.151:5000/elasticsearch:1.7.1
    env:
    - name: "CLUSTER_NAME"
      value: "myesdb"
    - name: NODE_MASTER
      value: "true"
    - name: NODE_DATA
      value: "true"
    - name: HTTP_ENABLE
      value: "true"
    ports:
    - containerPort: 9200
      name: http
      protocol: TCP
    - containerPort: 9300
      name: transport
      protocol: TCP
    volumeMounts:
    - name: ceph-vol1
      mountPath: /data
      readOnly: false
  volumes:
  - name: ceph-vol1
    persistentVolumeClaim:
      claimName: ceph-claim
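
The pod itself is created the same way; a minimal sketch:

# kubectl create -f ceph-es.yaml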

(5) Check the pod status

root@master:/home/docker/xu/ceph# kubectl get pod
NAME        READY     STATUS    RESTARTS   AGE
ceph-pod1   1/1       Running   0          41m
rbd2        1/1       Running   0          14m

Notes: (a) in both tests the image must be formatted;

(b) ceph-secret is the same in both tests, so it was not created again in test 2;

(c) if the image is not formatted, or ceph-secret is not created, the pod will stay in the ContainerCreating state forever.


(6) Inspect the container corresponding to the ceph-pod1 pod

root@dockertest4:/opt/k8s# docker inspect 8352cf93f053
[
    {
        "Id": "8352cf93f0535a57c317a337b41fe35492ccd0b27da0f264f901f03d27ae5f1c",
        "Created": "2016-12-17T00:51:34.452430089Z",
        "Path": "/docker-entrypoint.sh",
        "Args": [
            "elasticsearch"
        ],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 19912,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2016-12-17T00:51:35.456354338Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
        "Image": "sha256:42014923291599c0ad9c1da557285d1a4d3c096dd9fbbb346189101931507640",
        "ResolvConfPath": "/var/lib/docker/containers/86e869e1d45a9d79646aa0b0e8c17821f77cf95ca82a505d1761d9af4761501c/resolv.conf",
        "HostnamePath": "/var/lib/docker/containers/86e869e1d45a9d79646aa0b0e8c17821f77cf95ca82a505d1761d9af4761501c/hostname",
        "HostsPath": "/var/lib/kubelet/pods/ee7ac2ca-c3f2-11e6-a39e-000c29ec0bcd/etc-hosts",
        "LogPath": "/var/lib/docker/containers/8352cf93f0535a57c317a337b41fe35492ccd0b27da0f264f901f03d27ae5f1c/8352cf93f0535a57c317a337b41fe35492ccd0b27da0f264f901f03d27ae5f1c-json.log",
        "Name": "/k8s_ceph-busybox.ec05c2ae_ceph-pod1_default_ee7ac2ca-c3f2-11e6-a39e-000c29ec0bcd_ad714501",
        "RestartCount": 0,
        "Driver": "aufs",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/var/lib/kubelet/pods/ee7ac2ca-c3f2-11e6-a39e-000c29ec0bcd/volumes/kubernetes.io~rbd/ceph-pv:/data",
                "/var/lib/kubelet/pods/ee7ac2ca-c3f2-11e6-a39e-000c29ec0bcd/etc-hosts:/etc/hosts",
                "/var/lib/kubelet/pods/ee7ac2ca-c3f2-11e6-a39e-000c29ec0bcd/containers/ceph-busybox/ad714501:/dev/termination-log"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "container:86e869e1d45a9d79646aa0b0e8c17821f77cf95ca82a505d1761d9af4761501c",
            "PortBindings": null,
            "RestartPolicy": {
                "Name": "",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Dns": null,
            "DnsOptions": null,
            "DnsSearch": null,
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "container:86e869e1d45a9d79646aa0b0e8c17821f77cf95ca82a505d1761d9af4761501c",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 1000,
            "PidMode": "",
            "Privileged": false,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined"
            ],
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 2,
            "Memory": 0,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": null,
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": null,
            "DiskQuota": 0,
            "KernelMemory": 0,
            "MemoryReservation": 0,
            "MemorySwap": -1,
            "MemorySwappiness": -1,
            "OomKillDisable": false,
            "PidsLimit": 0,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0
        },
        "GraphDriver": {
            "Name": "aufs",
            "Data": null
        },
        "Mounts": [
            {
                "Source": "/var/lib/kubelet/pods/ee7ac2ca-c3f2-11e6-a39e-000c29ec0bcd/volumes/kubernetes.io~rbd/ceph-pv",
                "Destination": "/data",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/kubelet/pods/ee7ac2ca-c3f2-11e6-a39e-000c29ec0bcd/etc-hosts",
                "Destination": "/etc/hosts",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Source": "/var/lib/kubelet/pods/ee7ac2ca-c3f2-11e6-a39e-000c29ec0bcd/containers/ceph-busybox/ad714501",
                "Destination": "/dev/termination-log",
                "Mode": "",
                "RW": true,
                "Propagation": "rprivate"
            },
            {
                "Name": "3617289e7981f85040680427cbb963453e678ddb7211ca4d8a2446a290d91101",
                "Source": "/var/lib/docker/volumes/3617289e7981f85040680427cbb963453e678ddb7211ca4d8a2446a290d91101/_data",
                "Destination": "/usr/share/elasticsearch/data",
                "Driver": "local",
                "Mode": "",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "ceph-pod1",
            "Domainname": "",
            "User": "",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "9200/tcp": {},
                "9300/tcp": {}
            },
            "Tty": false,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "CLUSTER_NAME=myesdb",
                "NODE_MASTER=true",
                "NODE_DATA=true",
                "HTTP_ENABLE=true",
                "KUBERNETES_SERVICE_PORT_HTTPS=443",
                "KUBERNETES_PORT=tcp://192.168.110.1:443",
                "KUBERNETES_PORT_443_TCP=tcp://192.168.110.1:443",
                "KUBERNETES_PORT_443_TCP_PROTO=tcp",
                "KUBERNETES_PORT_443_TCP_PORT=443",
                "KUBERNETES_PORT_443_TCP_ADDR=192.168.110.1",
                "KUBERNETES_SERVICE_HOST=192.168.110.1",
                "KUBERNETES_SERVICE_PORT=443",
                "PATH=/usr/share/elasticsearch/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "LANG=C.UTF-8",
                "JAVA_VERSION=8u66",
                "JAVA_DEBIAN_VERSION=8u66-b01-1~bpo8+1",
                "CA_CERTIFICATES_JAVA_VERSION=20140324",
                "ELASTICSEARCH_MAJOR=1.7",
                "ELASTICSEARCH_VERSION=1.7.1"
            ],
            "Cmd": [
                "elasticsearch"
            ],
            "Image": "192.168.110.151:5000/elasticsearch:1.7.1",
            "Volumes": {
                "/usr/share/elasticsearch/data": {}
            },
            "WorkingDir": "",
            "Entrypoint": [
                "/docker-entrypoint.sh"
            ],
            "OnBuild": null,
            "Labels": {
                "io.kubernetes.container.hash": "ec05c2ae",
                "io.kubernetes.container.name": "ceph-busybox",
                "io.kubernetes.container.ports": "[{\"name\":\"http\",\"containerPort\":9200,\"protocol\":\"TCP\"},{\"name\":\"transport\",\"containerPort\":9300,\"protocol\":\"TCP\"}]",
                "io.kubernetes.container.restartCount": "0",
                "io.kubernetes.container.terminationMessagePath": "/dev/termination-log",
                "io.kubernetes.pod.name": "ceph-pod1",
                "io.kubernetes.pod.namespace": "default",
                "io.kubernetes.pod.terminationGracePeriod": "30",
                "io.kubernetes.pod.uid": "ee7ac2ca-c3f2-11e6-a39e-000c29ec0bcd"
            }
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": null,
            "SandboxKey": "",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": null
        }
    }
]

IX. Verify that test 2 works


Before PV and PVC were introduced, deleting a pod meant the corresponding data was lost as well, so the PV/PVC setup can be verified with the following procedure (a sketch of the commands follows the list):

1) Write some test data into the /data directory of the container backing ceph-pod1 (copying a file in with docker cp is enough).

2) Delete ceph-pod1.

3) Recreate ceph-pod1 and check whether the data under /data still exists; if it does, the experiment has succeeded.
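
A minimal sketch of these three steps, run on the node hosting the pod; the container ID and the test file name are placeholders for illustration:

# step 1: copy a test file into the container's /data (CONTAINER_ID comes from docker ps)
docker cp /etc/hostname CONTAINER_ID:/data/testfile
# step 2: delete the pod
kubectl delete pod ceph-pod1
# step 3: recreate it and check that the file survived on the RBD-backed PV
kubectl create -f ceph-es.yaml
kubectl exec ceph-pod1 -- ls /data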