Region: a logical grouping based on geography (e.g. South China, North China). A Ceph cluster containing multiple regions must designate one master region, and a region can contain one or more zones.
Zone: analogous to an availability zone; it contains a group of Ceph rgw instances. Each region must designate a master zone to handle client requests.
The multi-zone deployment topology described in this article is as follows:
Ceph
└── Region: SH
    ├── Zone: SH-1 (rgw instance: SH-SH-1)
    └── Zone: SH-2 (rgw instance: SH-SH-2)
In the Ceph cluster, configure a region named SH containing two zones, SH-1 and SH-2; SH-1 is the master zone and SH-2 the slave, and data can be replicated between them via radosgw-agent. Each zone runs one rgw instance, named SH-SH-1 and SH-SH-2 respectively.
rgw components
As a client, rgw involves the following basic elements:
- the rgw instance name; in this article the two instances are SH-SH-1 and SH-SH-2
- the rgw instance user
- a configuration entry in ceph.conf
- a runtime data directory for the rgw instance
rgw pools
Ceph rgw uses a number of pools to store its configuration and user data. If the rgw user created later has the required permissions, some of these pools are created automatically when the rgw instance starts; it is nevertheless usually recommended to create them yourself. To tell the zones apart, each pool name is prefixed with .{region-name}-{zone-name}. The pools for SH-1 and SH-2 are:
.SH-SH-1.rgw.root
.SH-SH-1.rgw.control
.SH-SH-1.rgw.gc
.SH-SH-1.rgw.buckets
.SH-SH-1.rgw.buckets.index
.SH-SH-1.rgw.buckets.extra
.SH-SH-1.log
.SH-SH-1.intent-log
.SH-SH-1.usage
.SH-SH-1.users
.SH-SH-1.users.email
.SH-SH-1.users.swift
.SH-SH-1.users.uid
.SH-SH-2.rgw.root
.SH-SH-2.rgw.control
.SH-SH-2.rgw.gc
.SH-SH-2.rgw.buckets
.SH-SH-2.rgw.buckets.index
.SH-SH-2.rgw.buckets.extra
.SH-SH-2.log
.SH-SH-2.intent-log
.SH-SH-2.usage
.SH-SH-2.users
.SH-SH-2.users.email
.SH-SH-2.users.swift
.SH-SH-2.users.uid
Create the pools with the following command:
ceph osd pool create {pool_name} 128 128
Note: do not forget the leading '.' in the pool names, otherwise starting the rgw instance will fail.
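The 26 pool names above follow a fixed pattern, so the creation commands can be generated instead of typed by hand. A minimal sketch (it only prints the commands; pipe the output to `sh` on a node with a working admin keyring to actually create the pools):

```shell
# Print a "ceph osd pool create" command for every pool of both zones.
gen_pool_cmds() {
  for zone in SH-SH-1 SH-SH-2; do
    for suffix in rgw.root rgw.control rgw.gc rgw.buckets rgw.buckets.index \
                  rgw.buckets.extra log intent-log usage users users.email \
                  users.swift users.uid; do
      echo "ceph osd pool create .${zone}.${suffix} 128 128"
    done
  done
}
gen_pool_cmds
# To actually create the pools: gen_pool_cmds | sh
```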
rgw users and keys
Create the keyring file
Create a keyring file under /etc/ceph/ and make it readable:
#ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring
#chmod +r /etc/ceph/ceph.client.radosgw.keyring
Generate users and keys
Generate a user and key for each instance and store them in the keyring file created above:
#ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.SH-SH-1 --gen-key
#ceph-authtool /etc/ceph/ceph.client.radosgw.keyring -n client.radosgw.SH-SH-2 --gen-key
Grant the users appropriate capabilities:
#ceph-authtool -n client.radosgw.SH-SH-1 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
#ceph-authtool -n client.radosgw.SH-SH-2 --cap osd 'allow rwx' --cap mon 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
Add the users to the Ceph cluster:
#ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.SH-SH-1 -i /etc/ceph/ceph.client.radosgw.keyring
#ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.SH-SH-2 -i /etc/ceph/ceph.client.radosgw.keyring
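The per-instance steps above (key generation, capabilities, cluster registration) follow the same pattern for both instances, so they too can be looped. A sketch that only prints the commands for review:

```shell
# Print the keyring/auth commands for both rgw instances.
KEYRING=/etc/ceph/ceph.client.radosgw.keyring
gen_auth_cmds() {
  for inst in SH-SH-1 SH-SH-2; do
    echo "ceph-authtool $KEYRING -n client.radosgw.${inst} --gen-key"
    echo "ceph-authtool -n client.radosgw.${inst} --cap osd 'allow rwx' --cap mon 'allow rwx' $KEYRING"
    echo "ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.${inst} -i $KEYRING"
  done
}
gen_auth_cmds
```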
Add the rgw instances to ceph.conf
Add the following to the ceph.conf configuration file:
[global]
rgw region root pool = .SH.rgw.root //stores region info; created automatically
[client.radosgw.SH-SH-1] //instance name, format: {type}.{id}
rgw region = SH //region name
rgw zone = SH-1 //zone name
rgw zone root pool = .SH-SH-1.rgw.root //root pool, stores zone info
keyring = /etc/ceph/ceph.client.radosgw.keyring //keyring file
rgw dns name = {hostname} //DNS name
;rgw socket path = /var/run/ceph/$name.sock //unix socket path
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
host = {host-name} //host name, from `hostname -s`
[client.radosgw.SH-SH-2]
rgw region = SH
rgw zone = SH-2
rgw zone root pool = .SH-SH-2.rgw.root
keyring = /etc/ceph/ceph.client.radosgw.keyring
rgw dns name = {hostname}
rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
host = {host-name}
The meaning of each setting is annotated above. One point worth noting: rgw socket path and the socket_port in rgw frontends are mutually exclusive, with rgw socket path taking precedence. If rgw socket path is set, the rgw instance listens on a unix socket and ignores socket_port; only when rgw socket path is not set does the instance listen on a TCP socket using socket_port and socket_host.
Push the configuration to the ceph nodes
Push the configuration file to the rgw instance nodes with ceph-deploy:
#ceph-deploy --overwrite-conf config push {inst-1} {inst-2}
Region
Region configuration file
Create a file named sh.json containing the region information:
{ "name": "SH", //region name
"api_name": "SH",
"is_master": "true", //set as the master region
"endpoints": "", //replication endpoints between regions
"master_zone": "SH-1", //master zone
"zones": [
{ "name": "SH-1",
"endpoints": [
"http:\/\/{fqdn}:80\/"], //replication endpoints between zones
"log_meta": "true",
"log_data": "true"},
{ "name": "SH-2",
"endpoints": [
"http:\/\/{fqdn}:80\/"],
"log_meta": "true",
"log_data": "true"}],
"placement_targets": [ //available placement targets
{
"name": "default-placement",
"tags": []
}
],
"default_placement": "default-placement"}
endpoints are the addresses used for data replication between regions and between zones; set them to the address of the rgw instance frontend ({fqdn} above should be set to the host name). In a single-region or single-zone setup they can be left empty.
placement_targets lists the available placement targets; only one is configured above. The placement_pools setting of each zone references these targets by name; placement_pools holds the zone's user data, such as buckets and bucket indexes.
Create the region
Create the region from the sh.json file above (running this on one instance is enough):
#radosgw-admin region set --infile sh.json --name client.radosgw.SH-SH-1
Set the default region
#radosgw-admin region default --rgw-region=SH --name client.radosgw.SH-SH-1
Update the region map
#radosgw-admin regionmap update --name client.radosgw.SH-SH-1
#radosgw-admin regionmap update --name client.radosgw.SH-SH-2
Zone
Zone configuration files
Create files named sh-1.json and sh-2.json containing the SH-1 and SH-2 zone information. sh-1.json is shown below (sh-2.json simply replaces SH-SH-1 with SH-SH-2):
{ "domain_root": ".SH-SH-1.domain.rgw",
"control_pool": ".SH-SH-1.rgw.control",
"gc_pool": ".SH-SH-1.rgw.gc",
"log_pool": ".SH-SH-1.log",
"intent_log_pool": ".SH-SH-1.intent-log",
"usage_log_pool": ".SH-SH-1.usage",
"user_keys_pool": ".SH-SH-1.users",
"user_email_pool": ".SH-SH-1.users.email",
"user_swift_pool": ".SH-SH-1.users.swift",
"user_uid_pool": ".SH-SH-1.users.uid",
"system_key": { "access_key": "", "secret_key": ""},
"placement_pools": [
{ "key": "default-placement",
"val": { "index_pool": ".SH-SH-1.rgw.buckets.index",
"data_pool": ".SH-SH-1.rgw.buckets"}
}
]
}
Note
The placement_pools setting stores buckets, bucket indexes, and related data; each key must be one of the names specified in placement_targets when the region was defined.
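Since sh-2.json differs from sh-1.json only in the zone name, it can be generated with sed rather than written by hand (run this in the directory containing sh-1.json):

```shell
# Generate sh-2.json from sh-1.json by renaming the zone; the guard keeps
# this a no-op when sh-1.json is not present.
if [ -f sh-1.json ]; then
  sed 's/SH-SH-1/SH-SH-2/g' sh-1.json > sh-2.json
fi
```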
Create zone SH-1
#radosgw-admin zone set --rgw-zone=SH-1 --infile sh-1.json --name client.radosgw.SH-SH-1
Create zone SH-2
#radosgw-admin zone set --rgw-zone=SH-2 --infile sh-2.json --name client.radosgw.SH-SH-2
Update the region map
#radosgw-admin regionmap update --name client.radosgw.SH-SH-1
#radosgw-admin regionmap update --name client.radosgw.SH-SH-2
Zone users
Zone user information is stored in the zone's pools, so the zones must be configured before the users are created. After creating the users, keep their access_key and secret_key for updating the zone configuration later:
#radosgw-admin user create --uid="sh-1" --display-name="Region-SH Zone-SH-1" --name client.radosgw.SH-SH-1 --system
#radosgw-admin user create --uid="sh-2" --display-name="Region-SH Zone-SH-2" --name client.radosgw.SH-SH-2 --system
Update the zones
Copy the access_key and secret_key of the two users created above into sh-1.json and sh-2.json (the files from the zone-creation step) respectively, then update the zone configuration:
#radosgw-admin zone set --rgw-zone=SH-1 --infile sh-1.json --name client.radosgw.SH-SH-1
#radosgw-admin zone set --rgw-zone=SH-2 --infile sh-2.json --name client.radosgw.SH-SH-2
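Copying the keys into the zone files can also be scripted. A hedged sketch using sed, where ACCESS_KEY and SECRET_KEY are placeholders for the values printed by `radosgw-admin user create`:

```shell
# Fill the empty system_key fields of a zone file in place.
# ACCESS_KEY/SECRET_KEY are placeholders; substitute your real values.
ACCESS_KEY='{access-key}'
SECRET_KEY='{secret-key}'
if [ -f sh-1.json ]; then
  sed -i "s/\"access_key\": \"\"/\"access_key\": \"${ACCESS_KEY}\"/; s/\"secret_key\": \"\"/\"secret_key\": \"${SECRET_KEY}\"/" sh-1.json
fi
```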
Apache, Nginx, and civetweb can all serve as the rgw frontend. I have previously written an article on configuring the various rgw frontends, which interested readers may consult. Below is the apache configuration used in this article (/etc/httpd/conf.d/rgw.conf):
<VirtualHost *:80>
ServerName {fqdn}
DocumentRoot /var/www/html
ErrorLog /var/log/httpd/rgw_error.log
CustomLog /var/log/httpd/rgw_access.log combined
# LogLevel debug
RewriteEngine On
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
SetEnv proxy-nokeepalive 1
#ProxyPass / unix:///var/run/ceph/ceph-client.radosgw.SH-SH-1.asok
ProxyPass / fcgi://{fqdn}:9000/
</VirtualHost>
{fqdn} must be replaced with the host name of the node, and the ProxyPass line must match the rgw instance configuration in ceph.conf.
On each instance node, create the rgw instance's data directory (named with the cluster name plus {type}.{id}):
#mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.SH-SH-1
#mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.SH-SH-2
Start the instance and the frontend on each of the two nodes:
#radosgw -c /etc/ceph/ceph.conf -n client.radosgw.SH-SH-1
#systemctl restart httpd.service
#radosgw -c /etc/ceph/ceph.conf -n client.radosgw.SH-SH-2
#systemctl restart httpd.service
With rgw configured, radosgw-agent can replicate data from the master zone SH-1 to the slave zone SH-2, improving availability and read performance.
Create a file named cluster-data-sync.conf with the following content:
src_access_key: {source-access-key} //key of the master zone user
src_secret_key: {source-secret-key}
destination: https://{fqdn}:port //endpoint address of the slave zone
dest_access_key: {destination-access-key} //key of the slave zone user
dest_secret_key: {destination-secret-key}
log_file: {log.filename} //log file
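The file can be generated from shell variables; the brace-delimited values below are the same placeholders as above. Starting the agent with -c follows the federated-rgw tooling of that era; verify the flag against your installed radosgw-agent:

```shell
# Write cluster-data-sync.conf; brace-delimited values are placeholders.
SRC_ACCESS='{source-access-key}'
SRC_SECRET='{source-secret-key}'
DST_ACCESS='{destination-access-key}'
DST_SECRET='{destination-secret-key}'
cat > cluster-data-sync.conf <<EOF
src_access_key: ${SRC_ACCESS}
src_secret_key: ${SRC_SECRET}
destination: https://{fqdn}:port
dest_access_key: ${DST_ACCESS}
dest_secret_key: ${DST_SECRET}
log_file: {log.filename}
EOF
# Then start the agent:
# radosgw-agent -c cluster-data-sync.conf
```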
Original source: http://www.cnblogs.com/damizhou/p/5824157.html