
Detailed Ceph installation and deployment tutorial (multiple monitor nodes)


Tags: rbd, ceph, mon, rados

I. Preparation: install the ceph-deploy tool

   All servers in this tutorial are logged in as the root user.

1. Installation environment

   OS: CentOS 6.5

   Hosts: 1 admin-node (ceph-deploy), 1 monitor node, 2 OSD nodes

2. Disable the firewall and SELinux on all nodes, then reboot the machines.

 service iptables stop

 sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config

 chkconfig iptables off
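This tutorial simply disables the firewall. If you would rather keep iptables running, a rough sketch of the rules for the ports Ceph normally uses (monitors on TCP 6789, OSD daemons in the 6800-7300 range) would look like the lines below; adjust them to your own policy before relying on them.

 iptables -A INPUT -p tcp --dport 6789 -j ACCEPT

 iptables -A INPUT -p tcp --dport 6800:7300 -j ACCEPT

 service iptables save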

 

3. Configure the Ceph yum repository on the admin-node

vi /etc/yum.repos.d/ceph.repo 

[ceph-noarch]

name=Ceph noarch packages

baseurl=http://ceph.com/rpm/el6/noarch/

enabled=1

gpgcheck=1

type=rpm-md

gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

4. Install the EPEL repository (Sohu mirror)

   rpm -ivh http://mirrors.sohu.com/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm

5. Clean and update the yum cache on the admin-node

    yum clean all

    yum update -y

6. Create a Ceph cluster directory on the admin-node

   mkdir /ceph

   cd  /ceph

7. Install the ceph-deploy tool on the admin-node

    yum install ceph-deploy -y

8. Configure the hosts file on the admin-node

  vi /etc/hosts

10.240.240.210 admin-node

10.240.240.211 node1

10.240.240.212 node2

10.240.240.213 node3


II. Configure passwordless SSH from the ceph-deploy admin node to every Ceph node

1. Install an SSH server on every Ceph node

   [ceph@node3 ~]$ yum install openssh-server -y

2. Set up passwordless SSH access from the admin-node to every Ceph node.

[root@ceph-deploy ceph]# ssh-keygen

Generating public/private rsa key pair.

Enter file in which to save the key (/root/.ssh/id_rsa): 

Enter passphrase (empty for no passphrase): 

Enter same passphrase again: 

Your identification has been saved in /root/.ssh/id_rsa.

Your public key has been saved in /root/.ssh/id_rsa.pub.


3. Copy the admin-node's public key to every Ceph node

 ssh-copy-id root@admin-node

 ssh-copy-id root@node1

 ssh-copy-id root@node2

 ssh-copy-id root@node3

4. Verify that each Ceph node can be logged into without a password

 ssh root@node1

 ssh root@node2

 ssh root@node3
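Each Ceph node should also be able to resolve the other nodes by name. Now that passwordless SSH works, one simple approach (a sketch, assuming you want the same hosts file everywhere) is to push the admin-node's /etc/hosts to every node:

 for h in node1 node2 node3; do scp /etc/hosts root@$h:/etc/hosts; done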

5. Edit the ~/.ssh/config file on the admin-node so that it logs in to each Ceph node as the intended user

Host admin-node

  Hostname admin-node

  User root   

Host node1

  Hostname node1

  User root

Host node2

  Hostname node2

  User root

Host node3

  Hostname node3

  User root
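ssh refuses to use a config file whose permissions are too open, so tighten it after creating it:

 chmod 600 ~/.ssh/config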

III. Deploy the Ceph cluster with the ceph-deploy tool

1. Create a new Ceph cluster on the admin-node

[root@admin-node ceph]#  ceph-deploy new node1 node2 node3      (after this command node1, node2, and node3 all become monitor nodes; multiple mons back each other up)

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy new node1 node2 node3

[ceph_deploy.new][DEBUG ] Creating new cluster named ceph

[ceph_deploy.new][DEBUG ] Resolving host node1

[ceph_deploy.new][DEBUG ] Monitor node1 at 10.240.240.211

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[node1][DEBUG ] connected to host: admin-node 

[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1

[ceph_deploy.new][DEBUG ] Resolving host node2

[ceph_deploy.new][DEBUG ] Monitor node2 at 10.240.240.212

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[node2][DEBUG ] connected to host: admin-node 

[node2][INFO  ] Running command: ssh -CT -o BatchMode=yes node2

[ceph_deploy.new][DEBUG ] Resolving host node3

[ceph_deploy.new][DEBUG ] Monitor node3 at 10.240.240.213

[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds

[node3][DEBUG ] connected to host: admin-node 

[node3][INFO  ] Running command: ssh -CT -o BatchMode=yes node3

[ceph_deploy.new][DEBUG ] Monitor initial members are [‘node1‘, ‘node2‘, ‘node3‘]

[ceph_deploy.new][DEBUG ] Monitor addrs are [‘10.240.240.211‘, ‘10.240.240.212‘, ‘10.240.240.213‘]

[ceph_deploy.new][DEBUG ] Creating a random mon key...

[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...

Check the files that were generated:

[root@admin-node ceph]# ls

ceph.conf  ceph.log  ceph.mon.keyring

Look at the Ceph configuration file; all three nodes are now listed as monitor nodes:

[root@admin-node ceph]# cat ceph.conf 

[global]

auth_service_required = cephx

filestore_xattr_use_omap = true

auth_client_required = cephx

auth_cluster_required = cephx

mon_host = 10.240.240.211,10.240.240.212,10.240.240.213

mon_initial_members = node1, node2, node3

fsid = 4dc38af6-f628-4c1f-b708-9178cf4e032b


[root@admin-node ceph]# 


2. Before deploying, make sure none of the nodes still has Ceph packages or data (this wipes any previous Ceph data; skip this step for a fresh install, but run the commands below when redeploying)

[root@ceph-deploy ceph]# ceph-deploy purgedata admin-node node1 node2 node3  

[root@ceph-deploy ceph]# ceph-deploy forgetkeys

[root@ceph-deploy ceph]# ceph-deploy purge admin-node node1 node2 node3


  On a fresh install there is no data to remove.


3. Edit the Ceph configuration file on the admin-node and add the following setting to ceph.conf:

   osd pool default size = 2
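For reference, after adding that line the [global] section of ceph.conf on the admin-node should look roughly like this (the fsid and addresses will of course be your own):

[global]

auth_service_required = cephx

filestore_xattr_use_omap = true

auth_client_required = cephx

auth_cluster_required = cephx

mon_host = 10.240.240.211,10.240.240.212,10.240.240.213

mon_initial_members = node1, node2, node3

fsid = 4dc38af6-f628-4c1f-b708-9178cf4e032b

osd pool default size = 2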


4. From the admin-node, install Ceph on every node with ceph-deploy

[root@admin-node ceph]# ceph-deploy install admin-node node1 node2 node3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy install admin-node node1 node2 node3

[ceph_deploy.install][DEBUG ] Installing stable version firefly on cluster ceph hosts admin-node node1 node2 node3

[ceph_deploy.install][DEBUG ] Detecting platform for host admin-node ...

[admin-node][DEBUG ] connected to host: admin-node 

[admin-node][DEBUG ] detect platform information from remote host

[admin-node][DEBUG ] detect machine type

[ceph_deploy.install][INFO  ] Distro info: CentOS 6.5 Final

[admin-node][INFO  ] installing ceph on admin-node

[admin-node][INFO  ] Running command: yum clean all

[admin-node][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[admin-node][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates

[admin-node][DEBUG ] Cleaning up Everything

[admin-node][DEBUG ] Cleaning up list of fastest mirrors

[admin-node][INFO  ] Running command: yum -y install wget

[admin-node][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[admin-node][DEBUG ] Determining fastest mirrors

[admin-node][DEBUG ]  * base: mirrors.btte.net

[admin-node][DEBUG ]  * epel: mirrors.neusoft.edu.cn

[admin-node][DEBUG ]  * extras: mirrors.btte.net

[admin-node][DEBUG ]  * updates: mirrors.btte.net

[admin-node][DEBUG ] Setting up Install Process

[admin-node][DEBUG ] Package wget-1.12-1.11.el6_5.x86_64 already installed and latest version

[admin-node][DEBUG ] Nothing to do

[admin-node][INFO  ] adding EPEL repository

[admin-node][INFO  ] Running command: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[admin-node][WARNIN] --2014-06-07 22:05:34--  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[admin-node][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.24, 209.132.181.25, 209.132.181.26, ...

[admin-node][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.24|:80... connected.

[admin-node][WARNIN] HTTP request sent, awaiting response... 200 OK

[admin-node][WARNIN] Length: 14540 (14K) [application/x-rpm]

[admin-node][WARNIN] Saving to: `epel-release-6-8.noarch.rpm.1‘

[admin-node][WARNIN] 

[admin-node][WARNIN]      0K .......... ....                                       100% 73.8K=0.2s

[admin-node][WARNIN] 

[admin-node][WARNIN] 2014-06-07 22:05:35 (73.8 KB/s) - `epel-release-6-8.noarch.rpm.1‘ saved [14540/14540]

[admin-node][WARNIN] 

[admin-node][INFO  ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm

[admin-node][DEBUG ] Preparing...                ##################################################

[admin-node][DEBUG ] epel-release                ##################################################

[admin-node][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[admin-node][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[admin-node][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[admin-node][DEBUG ] Preparing...                ##################################################

[admin-node][DEBUG ] ceph-release                ##################################################

[admin-node][INFO  ] Running command: yum -y -q install ceph

[admin-node][DEBUG ] Package ceph-0.80.1-2.el6.x86_64 already installed and latest version

[admin-node][INFO  ] Running command: ceph --version

[admin-node][DEBUG ] ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)

[ceph_deploy.install][DEBUG ] Detecting platform for host node1 ...

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final

[node1][INFO  ] installing ceph on node1

[node1][INFO  ] Running command: yum clean all

[node1][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[node1][DEBUG ] Cleaning repos: base extras updates

[node1][DEBUG ] Cleaning up Everything

[node1][DEBUG ] Cleaning up list of fastest mirrors

[node1][INFO  ] Running command: yum -y install wget

[node1][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[node1][DEBUG ] Determining fastest mirrors

[node1][DEBUG ]  * base: mirrors.btte.net

[node1][DEBUG ]  * extras: mirrors.btte.net

[node1][DEBUG ]  * updates: mirrors.btte.net

[node1][DEBUG ] Setting up Install Process

[node1][DEBUG ] Resolving Dependencies

[node1][DEBUG ] --> Running transaction check

[node1][DEBUG ] ---> Package wget.x86_64 0:1.12-1.8.el6 will be updated

[node1][DEBUG ] ---> Package wget.x86_64 0:1.12-1.11.el6_5 will be an update

[node1][DEBUG ] --> Finished Dependency Resolution

[node1][DEBUG ] 

[node1][DEBUG ] Dependencies Resolved

[node1][DEBUG ] 

[node1][DEBUG ] ================================================================================

[node1][DEBUG ]  Package       Arch            Version                   Repository        Size

[node1][DEBUG ] ================================================================================

[node1][DEBUG ] Updating:

[node1][DEBUG ]  wget          x86_64          1.12-1.11.el6_5           updates          483 k

[node1][DEBUG ] 

[node1][DEBUG ] Transaction Summary

[node1][DEBUG ] ================================================================================

[node1][DEBUG ] Upgrade       1 Package(s)

[node1][DEBUG ] 

[node1][DEBUG ] Total download size: 483 k

[node1][DEBUG ] Downloading Packages:

[node1][DEBUG ] Running rpm_check_debug

[node1][DEBUG ] Running Transaction Test

[node1][DEBUG ] Transaction Test Succeeded

[node1][DEBUG ] Running Transaction

  Updating   : wget-1.12-1.11.el6_5.x86_64                                  1/2 

  Cleanup    : wget-1.12-1.8.el6.x86_64                                     2/2 

  Verifying  : wget-1.12-1.11.el6_5.x86_64                                  1/2 

  Verifying  : wget-1.12-1.8.el6.x86_64                                     2/2 

[node1][DEBUG ] 

[node1][DEBUG ] Updated:

[node1][DEBUG ]   wget.x86_64 0:1.12-1.11.el6_5                                                 

[node1][DEBUG ] 

[node1][DEBUG ] Complete!

[node1][INFO  ] adding EPEL repository

[node1][INFO  ] Running command: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[node1][WARNIN] --2014-06-07 22:06:57--  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[node1][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.23, 209.132.181.24, 209.132.181.25, ...

[node1][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.23|:80... connected.

[node1][WARNIN] HTTP request sent, awaiting response... 200 OK

[node1][WARNIN] Length: 14540 (14K) [application/x-rpm]

[node1][WARNIN] Saving to: `epel-release-6-8.noarch.rpm‘

[node1][WARNIN] 

[node1][WARNIN]      0K .......... ....                                       100% 69.6K=0.2s

[node1][WARNIN] 

[node1][WARNIN] 2014-06-07 22:06:58 (69.6 KB/s) - `epel-release-6-8.noarch.rpm‘ saved [14540/14540]

[node1][WARNIN] 

[node1][INFO  ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm

[node1][DEBUG ] Preparing...                ##################################################

[node1][DEBUG ] epel-release                ##################################################

[node1][WARNIN] warning: epel-release-6-8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY

[node1][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[node1][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[node1][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[node1][DEBUG ] Preparing...                ##################################################

[node1][DEBUG ] ceph-release                ##################################################

[node1][INFO  ] Running command: yum -y -q install ceph

[node1][WARNIN] warning: rpmts_HdrFromFdno: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY

[node1][WARNIN] Importing GPG key 0x0608B895:

[node1][WARNIN]  Userid : EPEL (6) <epel@fedoraproject.org>

[node1][WARNIN]  Package: epel-release-6-8.noarch (installed)

[node1][WARNIN]  From   : /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6

[node1][WARNIN] Warning: RPMDB altered outside of yum.

[node1][INFO  ] Running command: ceph --version

[node1][WARNIN] Traceback (most recent call last):

[node1][WARNIN]   File "/usr/bin/ceph", line 53, in <module>

[node1][WARNIN]     import argparse

[node1][WARNIN] ImportError: No module named argparse

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph --version


To fix the error above, run the following command on the node that reported it:

[root@admin-node ~]# yum install *argparse* -y


5. Add the initial monitors and gather the keys (ceph-deploy v1.1.3 and later).

[root@admin-node ceph]# ceph-deploy mon create-initial  

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mon create-initial

[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1

[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node1][DEBUG ] determining if provided host has same hostname in remote

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] deploying mon to node1

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] remote hostname: node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create the mon path if it does not exist

[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done

[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done

[node1][INFO  ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring

[node1][DEBUG ] create the monitor keyring file

[node1][INFO  ] Running command: ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring

[node1][DEBUG ] ceph-mon: mon.noname-a 10.240.240.211:6789/0 is local, renaming to mon.node1

[node1][DEBUG ] ceph-mon: set fsid to 369daf5a-e844-4e09-a9b1-46bb985aec79

[node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1

[node1][INFO  ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring

[node1][DEBUG ] create a done file to avoid re-doing the mon deployment

[node1][DEBUG ] create the init path if it does not exist

[node1][DEBUG ] locating the `service` executable...

[node1][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

[node1][WARNIN] /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy.mon][ERROR ] Failed to execute command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

[ceph_deploy][ERROR ] GenericError: Failed to create 1 monitors


To fix the error above, manually run the following command on node1, node2, and node3:

[root@node1 ~]# yum install redhat-lsb  -y


Run the command again and the monitors are activated successfully:

[root@admin-node ceph]# ceph-deploy mon create-initial

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mon create-initial

[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2 node3

[ceph_deploy.mon][DEBUG ] detecting platform for host node1 ...

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node1][DEBUG ] determining if provided host has same hostname in remote

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] deploying mon to node1

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] remote hostname: node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create the mon path if it does not exist

[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done

[node1][DEBUG ] create a done file to avoid re-doing the mon deployment

[node1][DEBUG ] create the init path if it does not exist

[node1][DEBUG ] locating the `service` executable...

[node1][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node1

[node1][DEBUG ] === mon.node1 === 

[node1][DEBUG ] Starting Ceph mon.node1 on node1...already running

[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status

[node1][DEBUG ] ********************************************************************************

[node1][DEBUG ] status for monitor: mon.node1

[node1][DEBUG ] {

[node1][DEBUG ]   "election_epoch": 6, 

[node1][DEBUG ]   "extra_probe_peers": [

[node1][DEBUG ]     "10.240.240.212:6789/0", 

[node1][DEBUG ]     "10.240.240.213:6789/0"

[node1][DEBUG ]   ], 

[node1][DEBUG ]   "monmap": {

[node1][DEBUG ]     "created": "0.000000", 

[node1][DEBUG ]     "epoch": 2, 

[node1][DEBUG ]     "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b", 

[node1][DEBUG ]     "modified": "2014-06-07 22:38:29.435203", 

[node1][DEBUG ]     "mons": [

[node1][DEBUG ]       {

[node1][DEBUG ]         "addr": "10.240.240.211:6789/0", 

[node1][DEBUG ]         "name": "node1", 

[node1][DEBUG ]         "rank": 0

[node1][DEBUG ]       }, 

[node1][DEBUG ]       {

[node1][DEBUG ]         "addr": "10.240.240.212:6789/0", 

[node1][DEBUG ]         "name": "node2", 

[node1][DEBUG ]         "rank": 1

[node1][DEBUG ]       }, 

[node1][DEBUG ]       {

[node1][DEBUG ]         "addr": "10.240.240.213:6789/0", 

[node1][DEBUG ]         "name": "node3", 

[node1][DEBUG ]         "rank": 2

[node1][DEBUG ]       }

[node1][DEBUG ]     ]

[node1][DEBUG ]   }, 

[node1][DEBUG ]   "name": "node1", 

[node1][DEBUG ]   "outside_quorum": [], 

[node1][DEBUG ]   "quorum": [

[node1][DEBUG ]     0, 

[node1][DEBUG ]     1, 

[node1][DEBUG ]     2

[node1][DEBUG ]   ], 

[node1][DEBUG ]   "rank": 0, 

[node1][DEBUG ]   "state": "leader", 

[node1][DEBUG ]   "sync_provider": []

[node1][DEBUG ] }

[node1][DEBUG ] ********************************************************************************

[node1][INFO  ] monitor: mon.node1 is running

[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status

[ceph_deploy.mon][DEBUG ] detecting platform for host node2 ...

[node2][DEBUG ] connected to host: node2 

[node2][DEBUG ] detect platform information from remote host

[node2][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node2][DEBUG ] determining if provided host has same hostname in remote

[node2][DEBUG ] get remote short hostname

[node2][DEBUG ] deploying mon to node2

[node2][DEBUG ] get remote short hostname

[node2][DEBUG ] remote hostname: node2

[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node2][DEBUG ] create the mon path if it does not exist

[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done

[node2][DEBUG ] create a done file to avoid re-doing the mon deployment

[node2][DEBUG ] create the init path if it does not exist

[node2][DEBUG ] locating the `service` executable...

[node2][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node2

[node2][DEBUG ] === mon.node2 === 

[node2][DEBUG ] Starting Ceph mon.node2 on node2...already running

[node2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status

[node2][DEBUG ] ********************************************************************************

[node2][DEBUG ] status for monitor: mon.node2

[node2][DEBUG ] {

[node2][DEBUG ]   "election_epoch": 6, 

[node2][DEBUG ]   "extra_probe_peers": [

[node2][DEBUG ]     "10.240.240.211:6789/0", 

[node2][DEBUG ]     "10.240.240.213:6789/0"

[node2][DEBUG ]   ], 

[node2][DEBUG ]   "monmap": {

[node2][DEBUG ]     "created": "0.000000", 

[node2][DEBUG ]     "epoch": 2, 

[node2][DEBUG ]     "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b", 

[node2][DEBUG ]     "modified": "2014-06-07 22:38:29.435203", 

[node2][DEBUG ]     "mons": [

[node2][DEBUG ]       {

[node2][DEBUG ]         "addr": "10.240.240.211:6789/0", 

[node2][DEBUG ]         "name": "node1", 

[node2][DEBUG ]         "rank": 0

[node2][DEBUG ]       }, 

[node2][DEBUG ]       {

[node2][DEBUG ]         "addr": "10.240.240.212:6789/0", 

[node2][DEBUG ]         "name": "node2", 

[node2][DEBUG ]         "rank": 1

[node2][DEBUG ]       }, 

[node2][DEBUG ]       {

[node2][DEBUG ]         "addr": "10.240.240.213:6789/0", 

[node2][DEBUG ]         "name": "node3", 

[node2][DEBUG ]         "rank": 2

[node2][DEBUG ]       }

[node2][DEBUG ]     ]

[node2][DEBUG ]   }, 

[node2][DEBUG ]   "name": "node2", 

[node2][DEBUG ]   "outside_quorum": [], 

[node2][DEBUG ]   "quorum": [

[node2][DEBUG ]     0, 

[node2][DEBUG ]     1, 

[node2][DEBUG ]     2

[node2][DEBUG ]   ], 

[node2][DEBUG ]   "rank": 1, 

[node2][DEBUG ]   "state": "peon", 

[node2][DEBUG ]   "sync_provider": []

[node2][DEBUG ] }

[node2][DEBUG ] ********************************************************************************

[node2][INFO  ] monitor: mon.node2 is running

[node2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status

[ceph_deploy.mon][DEBUG ] detecting platform for host node3 ...

[node3][DEBUG ] connected to host: node3 

[node3][DEBUG ] detect platform information from remote host

[node3][DEBUG ] detect machine type

[ceph_deploy.mon][INFO  ] distro info: CentOS 6.4 Final

[node3][DEBUG ] determining if provided host has same hostname in remote

[node3][DEBUG ] get remote short hostname

[node3][DEBUG ] deploying mon to node3

[node3][DEBUG ] get remote short hostname

[node3][DEBUG ] remote hostname: node3

[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node3][DEBUG ] create the mon path if it does not exist

[node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node3/done

[node3][DEBUG ] create a done file to avoid re-doing the mon deployment

[node3][DEBUG ] create the init path if it does not exist

[node3][DEBUG ] locating the `service` executable...

[node3][INFO  ] Running command: /sbin/service ceph -c /etc/ceph/ceph.conf start mon.node3

[node3][DEBUG ] === mon.node3 === 

[node3][DEBUG ] Starting Ceph mon.node3 on node3...already running

[node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status

[node3][DEBUG ] ********************************************************************************

[node3][DEBUG ] status for monitor: mon.node3

[node3][DEBUG ] {

[node3][DEBUG ]   "election_epoch": 6, 

[node3][DEBUG ]   "extra_probe_peers": [

[node3][DEBUG ]     "10.240.240.211:6789/0", 

[node3][DEBUG ]     "10.240.240.212:6789/0"

[node3][DEBUG ]   ], 

[node3][DEBUG ]   "monmap": {

[node3][DEBUG ]     "created": "0.000000", 

[node3][DEBUG ]     "epoch": 2, 

[node3][DEBUG ]     "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b", 

[node3][DEBUG ]     "modified": "2014-06-07 22:38:29.435203", 

[node3][DEBUG ]     "mons": [

[node3][DEBUG ]       {

[node3][DEBUG ]         "addr": "10.240.240.211:6789/0", 

[node3][DEBUG ]         "name": "node1", 

[node3][DEBUG ]         "rank": 0

[node3][DEBUG ]       }, 

[node3][DEBUG ]       {

[node3][DEBUG ]         "addr": "10.240.240.212:6789/0", 

[node3][DEBUG ]         "name": "node2", 

[node3][DEBUG ]         "rank": 1

[node3][DEBUG ]       }, 

[node3][DEBUG ]       {

[node3][DEBUG ]         "addr": "10.240.240.213:6789/0", 

[node3][DEBUG ]         "name": "node3", 

[node3][DEBUG ]         "rank": 2

[node3][DEBUG ]       }

[node3][DEBUG ]     ]

[node3][DEBUG ]   }, 

[node3][DEBUG ]   "name": "node3", 

[node3][DEBUG ]   "outside_quorum": [], 

[node3][DEBUG ]   "quorum": [

[node3][DEBUG ]     0, 

[node3][DEBUG ]     1, 

[node3][DEBUG ]     2

[node3][DEBUG ]   ], 

[node3][DEBUG ]   "rank": 2, 

[node3][DEBUG ]   "state": "peon", 

[node3][DEBUG ]   "sync_provider": []

[node3][DEBUG ] }

[node3][DEBUG ] ********************************************************************************

[node3][INFO  ] monitor: mon.node3 is running

[node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status

[ceph_deploy.mon][INFO  ] processing monitor mon.node1

[node1][DEBUG ] connected to host: node1 

[node1][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status

[ceph_deploy.mon][INFO  ] mon.node1 monitor has reached quorum!

[ceph_deploy.mon][INFO  ] processing monitor mon.node2

[node2][DEBUG ] connected to host: node2 

[node2][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status

[ceph_deploy.mon][INFO  ] mon.node2 monitor has reached quorum!

[ceph_deploy.mon][INFO  ] processing monitor mon.node3

[node3][DEBUG ] connected to host: node3 

[node3][INFO  ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status

[ceph_deploy.mon][INFO  ] mon.node3 monitor has reached quorum!

[ceph_deploy.mon][INFO  ] all initial monitors are running and have formed quorum

[ceph_deploy.mon][INFO  ] Running gatherkeys...

[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /etc/ceph/ceph.client.admin.keyring

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[node1][DEBUG ] fetch remote file

[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from node1.

[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring

[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-osd/ceph.keyring

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[node1][DEBUG ] fetch remote file

[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from node1.

[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-mds/ceph.keyring

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[node1][DEBUG ] fetch remote file

[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from node1.

The output above shows that all three nodes have become monitor nodes.


The cluster directory now contains these additional files:

ceph.bootstrap-mds.keyring

ceph.bootstrap-osd.keyring

ceph.client.admin.keyring 


6. Add OSD nodes

Add node1 first. Log in to node1 and look at its unallocated partitions:

[root@admin-node ceph]# ssh node1

[root@node1 ~]# fdisk -l


Disk /dev/sda: 53.7 GB, 53687091200 bytes

255 heads, 63 sectors/track, 6527 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x000d6653


   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          39      307200   83  Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2              39        6401    51104768   83  Linux

/dev/sda3            6401        6528     1015808   82  Linux swap / Solaris


Disk /dev/sdb: 21.5 GB, 21474836480 bytes

255 heads, 63 sectors/track, 2610 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x843e46d0


   Device Boot      Start         End      Blocks   Id  System

/dev/sdb1               1        2610    20964793+   5  Extended

/dev/sdb5               1        2610    20964762   83  Linux


[root@node1 ~]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda2              48G  2.5G   44G   6% /

tmpfs                 242M   68K  242M   1% /dev/shm

/dev/sda1             291M   33M  243M  12% /boot


The output shows that the second disk is unused; its sdb5 partition will be used as the OSD disk.

  

Prepare the OSD device from the admin-node:

[root@admin-node ceph]# ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.5 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][WARNIN] osd keyring does not exist yet, creating one

[node1][DEBUG ] create a keyring file

[ceph_deploy.osd][ERROR ] IOError: [Errno 2] No such file or directory: ‘/var/lib/ceph/bootstrap-osd/ceph.keyring‘

[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs


The fix for the error above is as follows (this error usually appears when adding an OSD on a node that is not a monitor node):

The error means that the /var/lib/ceph/bootstrap-osd/ceph.keyring file is missing on the OSD node. The monitor nodes do have this file, so copy it from a monitor node to node1.

First create the directory on node1:

[root@admin-node ceph]# ssh node1

[root@node1 ~]# mkdir /var/lib/ceph/bootstrap-osd/

Then, from a monitor node, copy the keyring over:

scp /var/lib/ceph/bootstrap-osd/ceph.keyring root@node1:/var/lib/ceph/bootstrap-osd/
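Since ceph-deploy gatherkeys already pulled this key down to the cluster directory on the admin-node in step 5 (as ceph.bootstrap-osd.keyring), an alternative sketch is to push that copy out directly; the path below assumes the /ceph working directory created earlier:

[root@admin-node ceph]# scp /ceph/ceph.bootstrap-osd.keyring root@node1:/var/lib/ceph/bootstrap-osd/ceph.keyring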


Run the OSD prepare command again:

[root@admin-node ceph]# ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb5 journal None activate False

[node1][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb5

[node1][WARNIN] mkfs.xfs: No such file or directory

[node1][WARNIN] ceph-disk: Error: Command ‘[‘/sbin/mkfs‘, ‘-t‘, ‘xfs‘, ‘-f‘, ‘-i‘, ‘size=2048‘, ‘--‘, ‘/dev/sdb5‘]‘ returned non-zero exit status 1

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy.osd][ERROR ] Failed to execute command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb5

[ceph_deploy][ERROR ] GenericError: Failed to create 1 OSDs


The error above means mkfs.xfs is not available on node1, so install the XFS utilities on node1:

[root@admin-node ceph]# ssh node1

[root@node1 ~]# yum install xfs* -y

Run the OSD prepare command again; this time the newly added OSD is initialized successfully:

[root@admin-node ceph]# ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd prepare node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] Deploying osd to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][INFO  ] Running command: udevadm trigger --subsystem-match=block --action=add

[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/sdb5 journal None activate False

[node1][INFO  ] Running command: ceph-disk-prepare --fs-type xfs --cluster ceph -- /dev/sdb5

[node1][DEBUG ] meta-data=/dev/sdb5              isize=2048   agcount=4, agsize=1310298 blks

[node1][DEBUG ]          =                       sectsz=512   attr=2, projid32bit=0

[node1][DEBUG ] data     =                       bsize=4096   blocks=5241190, imaxpct=25

[node1][DEBUG ]          =                       sunit=0      swidth=0 blks

[node1][DEBUG ] naming   =version 2              bsize=4096   ascii-ci=0

[node1][DEBUG ] log      =internal log           bsize=4096   blocks=2560, version=2

[node1][DEBUG ]          =                       sectsz=512   sunit=0 blks, lazy-count=1

[node1][DEBUG ] realtime =none                   extsz=4096   blocks=0, rtextents=0

[node1][WARNIN] INFO:ceph-disk:calling partx on prepared device /dev/sdb5

[node1][WARNIN] INFO:ceph-disk:re-reading known partitions will display errors

[node1][WARNIN] last arg is not the whole disk

[node1][WARNIN] call: partx -opts device wholedisk

[node1][INFO  ] checking OSD status...

[node1][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json

[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.

Unhandled exception in thread started by 

Error in sys.excepthook:



Activate the OSD device from the admin node:

[root@admin-node ceph]# ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] activating host node1 disk /dev/sdb5

[ceph_deploy.osd][DEBUG ] will use init type: sysvinit

[node1][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5

[node1][WARNIN] got monmap epoch 2

[node1][WARNIN] 2014-06-07 23:36:52.377131 7f1b9a7087a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

[node1][WARNIN] 2014-06-07 23:36:52.436136 7f1b9a7087a0 -1 journal FileJournal::_open: disabling aio for non-block journal.  Use journal_force_aio to force use of aio anyway

[node1][WARNIN] 2014-06-07 23:36:52.437990 7f1b9a7087a0 -1 filestore(/var/lib/ceph/tmp/mnt.LvzAgX) could not find 23c2fcde/osd_superblock/0//-1 in index: (2) No such file or directory

[node1][WARNIN] 2014-06-07 23:36:52.470988 7f1b9a7087a0 -1 created object store /var/lib/ceph/tmp/mnt.LvzAgX journal /var/lib/ceph/tmp/mnt.LvzAgX/journal for osd.0 fsid 4dc38af6-f628-4c1f-b708-9178cf4e032b

[node1][WARNIN] 2014-06-07 23:36:52.471176 7f1b9a7087a0 -1 auth: error reading file: /var/lib/ceph/tmp/mnt.LvzAgX/keyring: can‘t open /var/lib/ceph/tmp/mnt.LvzAgX/keyring: (2) No such file or directory

[node1][WARNIN] 2014-06-07 23:36:52.471528 7f1b9a7087a0 -1 created new key in keyring /var/lib/ceph/tmp/mnt.LvzAgX/keyring

[node1][WARNIN] added key for osd.0

[node1][WARNIN] ERROR:ceph-disk:Failed to activate

[node1][WARNIN] Traceback (most recent call last):

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 2579, in <module>

[node1][WARNIN]     main()

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 2557, in main

[node1][WARNIN]     args.func(args)

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 1910, in main_activate

[node1][WARNIN]     init=args.mark_init,

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 1724, in mount_activate

[node1][WARNIN]     mount_options=mount_options,

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 1544, in move_mount

[node1][WARNIN]     maybe_mkdir(osd_data)

[node1][WARNIN]   File "/usr/sbin/ceph-disk", line 220, in maybe_mkdir

[node1][WARNIN]     os.mkdir(*a, **kw)

[node1][WARNIN] OSError: [Errno 2] No such file or directory: ‘/var/lib/ceph/osd/ceph-0‘

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5


The error means the /var/lib/ceph/osd/ceph-0 directory does not exist on node1, so create it there:

[root@admin-node ceph]# ssh node1

[root@node1 ~]# mkdir /var/lib/ceph/osd/

[root@node1 ~]# mkdir /var/lib/ceph/osd/ceph-0


Run the OSD activate command again:

[root@admin-node ceph]# ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] activating host node1 disk /dev/sdb5

[ceph_deploy.osd][DEBUG ] will use init type: sysvinit

[node1][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5

[node1][WARNIN] /etc/init.d/ceph: line 15: /lib/lsb/init-functions: No such file or directory

[node1][WARNIN] ceph-disk: Error: ceph osd start failed: Command ‘[‘/sbin/service‘, ‘ceph‘, ‘start‘, ‘osd.0‘]‘ returned non-zero exit status 1

[node1][ERROR ] RuntimeError: command returned non-zero exit status: 1

[ceph_deploy][ERROR ] RuntimeError: Failed to execute command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5


To fix the error above:

[root@admin-node ceph]# ssh node1

[root@node1 ~]# yum install redhat-lsb -y


Run the activate command again and the OSD comes up normally:

[root@admin-node ceph]# ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy osd activate node1:/dev/sdb5

[ceph_deploy.osd][DEBUG ] Activating cluster ceph disks node1:/dev/sdb5:

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.osd][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.osd][DEBUG ] activating host node1 disk /dev/sdb5

[ceph_deploy.osd][DEBUG ] will use init type: sysvinit

[node1][INFO  ] Running command: ceph-disk-activate --mark-init sysvinit --mount /dev/sdb5

[node1][DEBUG ] === osd.0 === 

[node1][DEBUG ] Starting Ceph osd.0 on node1...

[node1][DEBUG ] starting osd.0 at :/0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal

[node1][WARNIN] create-or-move updating item name ‘osd.0‘ weight 0.02 at location {host=node1,root=default} to crush map

[node1][INFO  ] checking OSD status...

[node1][INFO  ] Running command: ceph --cluster=ceph osd stat --format=json

Unhandled exception in thread started by 

Error in sys.excepthook:


Original exception was:


Add node2 and node3 as OSD nodes following the same procedure (a sketch follows below).
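A rough sketch of the same sequence for node2 and node3, assuming they also have a spare /dev/sdb5 partition like node1 (apply the same fixes shown above on each node first if the corresponding errors appear: the bootstrap-osd keyring, the xfs* packages, redhat-lsb, and the /var/lib/ceph/osd/ceph-N directory for the next OSD ids):

[root@admin-node ceph]# ceph-deploy osd prepare node2:/dev/sdb5 node3:/dev/sdb5

[root@admin-node ceph]# ceph-deploy osd activate node2:/dev/sdb5 node3:/dev/sdb5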


7. Push the Ceph configuration file and admin key to the mon and OSD nodes

[root@admin-node ceph]# ceph-deploy admin admin-node node1 node2 node3

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy admin admin-node node1 node2 node3

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to admin-node

[admin-node][DEBUG ] connected to host: admin-node 

[admin-node][DEBUG ] detect platform information from remote host

[admin-node][DEBUG ] detect machine type

[admin-node][DEBUG ] get remote short hostname

[admin-node][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node1

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[node1][DEBUG ] get remote short hostname

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node2

[node2][DEBUG ] connected to host: node2 

[node2][DEBUG ] detect platform information from remote host

[node2][DEBUG ] detect machine type

[node2][DEBUG ] get remote short hostname

[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node3

[node3][DEBUG ] connected to host: node3 

[node3][DEBUG ] detect platform information from remote host

[node3][DEBUG ] detect machine type

[node3][DEBUG ] get remote short hostname

[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

Unhandled exception in thread started by 

Error in sys.excepthook:


Original exception was:


8. Make sure you have the correct permissions on ceph.client.admin.keyring

[root@admin-node ceph]# chmod +r /etc/ceph/ceph.client.admin.keyring


9. Check the election status of the three monitor nodes

[root@admin-node ~]# ceph quorum_status --format json-pretty


{ "election_epoch": 30,

  "quorum": [

        0,

        1,

        2],

  "quorum_names": [

        "node1",

        "node2",

        "node3"],

  "quorum_leader_name": "node1",

  "monmap": { "epoch": 2,

      "fsid": "4dc38af6-f628-4c1f-b708-9178cf4e032b",

      "modified": "2014-06-07 22:38:29.435203",

      "created": "0.000000",

      "mons": [

            { "rank": 0,

              "name": "node1",

              "addr": "10.240.240.211:6789\/0"},

            { "rank": 1,

              "name": "node2",

              "addr": "10.240.240.212:6789\/0"},

            { "rank": 2,

              "name": "node3",

              "addr": "10.240.240.213:6789\/0"}]}}


10. Check the cluster health

[root@admin-node ceph]# ceph health

HEALTH_WARN clock skew detected on mon.node2, mon.node3

This warning means the clocks on node1, node2, and node3 are not in sync and must be synchronized. The fix: configure admin-node as an NTP server and have all nodes sync their time from admin-node, as sketched below.
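A minimal sketch of that synchronization, using the stock CentOS 6 ntp packages and assuming admin-node is resolvable from every node (adjust the restrict rules in /etc/ntp.conf if your defaults block clients):

# on admin-node: run an NTP server

[root@admin-node ~]# yum install ntp -y

[root@admin-node ~]# service ntpd start && chkconfig ntpd on

# on node1, node2 and node3: sync once against admin-node, then keep ntpd running

[root@node1 ~]# yum install ntp ntpdate -y

[root@node1 ~]# ntpdate admin-node

[root@node1 ~]# echo "server admin-node" >> /etc/ntp.conf

[root@node1 ~]# service ntpd start && chkconfig ntpd on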

Running the health check again:

[root@admin-node ceph]# ceph health

HEALTH_OK


12. Add a metadata server

[root@admin-node ceph]# ceph-deploy mds create node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mds create node1

[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node1:node1

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mds][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.mds][DEBUG ] remote host will use sysvinit

[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create path if it doesn‘t exist

[ceph_deploy.mds][ERROR ] OSError: [Errno 2] No such file or directory: ‘/var/lib/ceph/mds/ceph-node1‘

[ceph_deploy][ERROR ] GenericError: Failed to create 1 MDSs

To fix the error above:

[root@admin-node ceph]# ssh node1

Last login: Fri Jun  6 06:41:25 2014 from 10.241.10.2

[root@node1 ~]# mkdir /var/lib/ceph/mds/

[root@node1 ~]# mkdir /var/lib/ceph/mds/ceph-node1


Run the command again and the metadata server is created successfully:

[root@admin-node ceph]# ceph-deploy mds create node1

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy mds create node1

[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node1:node1

[node1][DEBUG ] connected to host: node1 

[node1][DEBUG ] detect platform information from remote host

[node1][DEBUG ] detect machine type

[ceph_deploy.mds][INFO  ] Distro info: CentOS 6.4 Final

[ceph_deploy.mds][DEBUG ] remote host will use sysvinit

[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node1

[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

[node1][DEBUG ] create path if it doesn‘t exist

[node1][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node1 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node1/keyring

[node1][INFO  ] Running command: service ceph start mds.node1

[node1][DEBUG ] === mds.node1 === 

[node1][DEBUG ] Starting Ceph mds.node1 on node1...

[node1][DEBUG ] starting mds.node1 at :/0


Check the cluster status again:

[root@admin-node ceph]# ceph -w

    cluster 591ef1f4-69f7-442f-ba7b-49cdf6695656

     health HEALTH_OK

     monmap e1: 1 mons at {node1=10.240.240.211:6789/0}, election epoch 2, quorum 0 node1

     mdsmap e4: 1/1/1 up {0=node1=up:active}

     osdmap e9: 2 osds: 2 up, 2 in

      pgmap v22: 192 pgs, 3 pools, 1884 bytes data, 20 objects

            10310 MB used, 30616 MB / 40926 MB avail

                 192 active+clean


2014-06-06 08:12:49.021472 mon.0 [INF] pgmap v22: 192 pgs: 192 active+clean; 1884 bytes data, 10310 MB used, 30616 MB / 40926 MB avail; 10 B/s wr, 0 op/s

2014-06-06 08:14:47.932311 mon.0 [INF] pgmap v23: 192 pgs: 192 active+clean; 1884 bytes data, 10310 MB used, 30615 MB / 40926 MB avail


13. Install the Ceph client

Install Ceph on the client machine:

[root@admin-node ~]# ceph-deploy install ceph-client

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy install ceph-client

[ceph_deploy.install][DEBUG ] Installing stable version firefly on cluster ceph hosts ceph-client

[ceph_deploy.install][DEBUG ] Detecting platform for host ceph-client ...

[ceph-client][DEBUG ] connected to host: ceph-client 

[ceph-client][DEBUG ] detect platform information from remote host

[ceph-client][DEBUG ] detect machine type

[ceph_deploy.install][INFO  ] Distro info: CentOS 6.4 Final

[ceph-client][INFO  ] installing ceph on ceph-client

[ceph-client][INFO  ] Running command: yum clean all

[ceph-client][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[ceph-client][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates

[ceph-client][DEBUG ] Cleaning up Everything

[ceph-client][DEBUG ] Cleaning up list of fastest mirrors

[ceph-client][INFO  ] Running command: yum -y install wget

[ceph-client][DEBUG ] Loaded plugins: fastestmirror, refresh-packagekit, security

[ceph-client][DEBUG ] Determining fastest mirrors

[ceph-client][DEBUG ]  * base: mirrors.btte.net

[ceph-client][DEBUG ]  * epel: mirrors.hust.edu.cn

[ceph-client][DEBUG ]  * extras: mirrors.btte.net

[ceph-client][DEBUG ]  * updates: mirrors.btte.net

[ceph-client][DEBUG ] Setting up Install Process

[ceph-client][DEBUG ] Package wget-1.12-1.11.el6_5.x86_64 already installed and latest version

[ceph-client][DEBUG ] Nothing to do

[ceph-client][INFO  ] adding EPEL repository

[ceph-client][INFO  ] Running command: wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[ceph-client][WARNIN] --2014-06-07 06:32:38--  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

[ceph-client][WARNIN] Resolving dl.fedoraproject.org... 209.132.181.24, 209.132.181.25, 209.132.181.26, ...

[ceph-client][WARNIN] Connecting to dl.fedoraproject.org|209.132.181.24|:80... connected.

[ceph-client][WARNIN] HTTP request sent, awaiting response... 200 OK

[ceph-client][WARNIN] Length: 14540 (14K) [application/x-rpm]

[ceph-client][WARNIN] Saving to: `epel-release-6-8.noarch.rpm.1‘

[ceph-client][WARNIN] 

[ceph-client][WARNIN]      0K .......... ....                                       100%  359K=0.04s

[ceph-client][WARNIN] 

[ceph-client][WARNIN] 2014-06-07 06:32:39 (359 KB/s) - `epel-release-6-8.noarch.rpm.1‘ saved [14540/14540]

[ceph-client][WARNIN] 

[ceph-client][INFO  ] Running command: rpm -Uvh --replacepkgs epel-release-6*.rpm

[ceph-client][DEBUG ] Preparing...                ##################################################

[ceph-client][DEBUG ] epel-release                ##################################################

[ceph-client][INFO  ] Running command: rpm --import https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc

[ceph-client][INFO  ] Running command: rpm -Uvh --replacepkgs http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[ceph-client][DEBUG ] Retrieving http://ceph.com/rpm-firefly/el6/noarch/ceph-release-1-0.el6.noarch.rpm

[ceph-client][DEBUG ] Preparing...                ##################################################

[ceph-client][DEBUG ] ceph-release                ##################################################

[ceph-client][INFO  ] Running command: yum -y -q install ceph

[ceph-client][DEBUG ] Package ceph-0.80.1-2.el6.x86_64 already installed and latest version

[ceph-client][INFO  ] Running command: ceph --version

[ceph-client][DEBUG ] ceph version 0.80.1 (a38fe1169b6d2ac98b427334c12d7cf81f809b74)


Copy the key and configuration file to the client:

[root@admin-node ceph]# ceph-deploy admin ceph-client

[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf

[ceph_deploy.cli][INFO  ] Invoked (1.5.3): /usr/bin/ceph-deploy admin ceph-client

[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to ceph-client

[ceph-client][DEBUG ] connected to host: ceph-client 

[ceph-client][DEBUG ] detect platform information from remote host

[ceph-client][DEBUG ] detect machine type

[ceph-client][DEBUG ] get remote short hostname

[ceph-client][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf


A stock CentOS 6.4 system does not ship the rbd kernel module, so the operation below fails with:

[root@ceph-client ceph]#  rbd map test-1 -p test --name client.admin  -m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring

ERROR: modinfo: could not find module rbd

FATAL: Module rbd not found.

rbd: modprobe rbd failed! (256)

The fix is to upgrade the kernel:

Once you have deployed the almighty CEPH storage, you will want to be able to actually use it (RBD).

Before we begin, some notes:

Current CEPH version: 0.67 (“dumpling”).

OS: Centos 6.4 x86_64 (running some VMs on KVM, basic CentOS qemu packages, nothing custom)

Since CEPH RBD module was first introduced with kernel 2.6.34 (and current RHEL/CentOS kernel is 2.6.32) – that means we need a newer kernel.

So, one of the options for the new kernel is, 3.x from elrepo.org:

rpm --import http://elrepo.org/RPM-GPG-KEY-elrepo.org

rpm -Uvh http://elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

yum --enablerepo=elrepo-kernel install kernel-ml                # will install 3.11.latest, stable, mainline

If you want the new kernel to boot by default, edit /etc/grub.conf, change default=1 to default=0, and reboot.
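After rebooting into the new kernel, a quick sanity check that the rbd module is now available:

[root@ceph-client ~]# uname -r

[root@ceph-client ~]# modprobe rbd

[root@ceph-client ~]# lsmod | grep rbd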


14. Use Ceph block storage (RBD) on the client

Create a new Ceph pool:

[root@ceph-client ceph]# rados mkpool test

Create an image in the pool:

[root@ceph-client ceph]# rbd create test-1 --size 4096 -p test -m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring   (the "-m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring" options can be omitted)

Map the image to a block device:

[root@ceph-client ceph]#  rbd map test-1 -p test --name client.admin  -m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring  (the "-m 10.240.240.211 -k /etc/ceph/ceph.client.admin.keyring" options can be omitted)

Check the RBD mappings:

[root@ceph-client ~]# rbd showmapped

id pool    image       snap device    

0  rbd     foo         -    /dev/rbd0 

1  test    test-1      -    /dev/rbd1 

2  jiayuan jiayuan-img -    /dev/rbd2 

3  jiayuan zhanguo     -    /dev/rbd3 

4  jiayuan zhanguo-5G  -    /dev/rbd4 


Format the newly mapped RBD block device:

[root@ceph-client dev]# mkfs.ext4 -m0 /dev/rbd1

Create a mount point:

[root@ceph-client dev]# mkdir /mnt/ceph-rbd-test-1

Mount the block device on the mount point:

[root@ceph-client dev]# mount /dev/rbd1 /mnt/ceph-rbd-test-1/

Check the mount:

[root@ceph-client dev]# df -h

Filesystem            Size  Used Avail Use% Mounted on

/dev/sda2              19G  2.5G   15G  15% /

tmpfs                 116M   72K  116M   1% /dev/shm

/dev/sda1             283M   52M  213M  20% /boot

/dev/rbd1             3.9G  8.0M  3.8G   1% /mnt/ceph-rbd-test-1


With the steps above done, you can store data on the new Ceph-backed filesystem.
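As a quick sanity check you can write a test file to the mounted block device and watch the usage grow (the file name here is just an example):

[root@ceph-client ~]# dd if=/dev/zero of=/mnt/ceph-rbd-test-1/testfile bs=1M count=100

[root@ceph-client ~]# df -h /mnt/ceph-rbd-test-1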


15. Mount the CephFS filesystem on the client

 

[root@ceph-client ~]# mkdir /mnt/mycephfs

[root@ceph-client ~]# mount  -t ceph 10.240.240.211:6789:/ /mnt/mycephfs -v -o name=admin,secret=AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==   

10.240.240.211:6789:/ on /mnt/mycephfs type ceph (rw,name=admin,secret=AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==)  


# The name and secret values in the command above come from /etc/ceph/ceph.client.admin.keyring on the monitor:

[root@node1 ~]# cat /etc/ceph/ceph.client.admin.keyring 

[client.admin]

        key = AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==
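To keep the key out of your shell history, the ceph mount also accepts a secretfile option instead of a literal secret; a sketch, assuming you save the key to /etc/ceph/admin.secret:

[root@ceph-client ~]# echo "AQDT9pNTSFD6NRAAoZkAgx21uGQ+DM/k0rzxow==" > /etc/ceph/admin.secret

[root@ceph-client ~]# mount -t ceph 10.240.240.211:6789:/ /mnt/mycephfs -o name=admin,secretfile=/etc/ceph/admin.secret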


This article originally appeared on the "zhanguo1110" blog: http://zhanguo1110.blog.51cto.com/5750817/1423771
