
Setting up OpenStack Kilo on RHEL 7


1. Installing the Controller

1.1 Configure the hostname
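A minimal sketch, assuming the hosts are named controller and compute with the management IPs 10.0.0.11 and 10.0.0.2 used throughout this guide:

# hostnamectl set-hostname controller

# echo "10.0.0.11 controller" >> /etc/hosts

# echo "10.0.0.2 compute" >> /etc/hosts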

 

1.2 Configure the network
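A minimal sketch of a static management address, assuming the interface is named eth0 (adjust the device name and addresses to your environment):

# vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.0.0.11
NETMASK=255.255.255.0

# systemctl restart network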

 

1.3 Configure SELinux
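A common approach, sketched here, is to put SELinux into permissive mode for the installation (the official guide alternatively installs openstack-selinux to manage the policy automatically):

# setenforce 0

# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config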

 

1.4 Configure the package repositories

The installation sources include: CentOS 7 base, EPEL 7, and OpenStack Kilo (a setup sketch follows).
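A hedged sketch of enabling these repositories on CentOS 7 (on RHEL 7 proper, the base channels come from subscription-manager, and the RDO Kilo release RPM replaces the CentOS extras package):

# yum install -y epel-release

# yum install -y centos-release-openstack-kilo

# yum upgrade -y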

1.5 Install MariaDB

Install the MySQL database packages:

# yum install mariadb mariadb-server MySQL-python -y

Edit the following file:

# vi /etc/my.cnf.d/mariadb_openstack.cnf

In the [mysqld] section, add or modify the following options:

[mysqld]

bind-address = 10.0.0.11

default-storage-engine = innodb

lower_case_table_names=1

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

 

 

 

Start the database service and enable it at boot:

# systemctl start mariadb.service

# systemctl enable mariadb.service

Run the MySQL secure-installation wizard (this sets the root password and removes the insecure defaults):

# mysql_secure_installation

Set the root password to A0staryh.


 

 

# systemctl restart mariadb.service

 

1.6 Install RabbitMQ Server

Install the RabbitMQ service:

# yum install rabbitmq-server -y

Start the RabbitMQ service and enable it at boot:

# systemctl start rabbitmq-server.service

# systemctl enable rabbitmq-server.service

# systemctl status rabbitmq-server.service


Add the openstack user:

# rabbitmqctl add_user openstack A0staryh

Grant permissions:

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"

# systemctl restart rabbitmq-server.service

1.7 Install the Identity Service (Keystone)

1.7.1 Preparation

  • Connect to the database, using the root password set during the database installation:

# mysql -uroot -pA0staryh

Create the database and grant privileges to the keystone user (this guide uses the password A0staryh in place of the documented KEYSTONE_DBPASS):

MariaDB [(none)]> CREATE DATABASE keystone;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'A0staryh';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'A0staryh';

MariaDB [(none)]> exit

  • Generate a random value to use as the administration token in later configuration steps (use whatever value you generate consistently; the sample output below differs from the token used later in this guide):

# openssl rand -hex 10

fb7269c14626a5966181

1.7.2 Install and configure the components

  • Install the packages:

# yum install -y openstack-keystone python-keystoneclient

 

# vim /etc/keystone/keystone.conf

[DEFAULT]
admin_token = 11d5c31d5a96d7b42315
verbose = true

 

[database]

connection = mysql://keystone:A0staryh@controller/keystone

 

[token]

provider = keystone.token.providers.uuid.Provider

driver = keystone.token.persistence.backends.sql.Token

 

[revoke]

driver = keystone.contrib.revoke.backends.sql.Revoke

  • Create the management certificates and keys, and set the related file permissions:

# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone

No handlers could be found for logger "oslo_config.cfg"

The following cert files already exist, use --rebuild to remove the existing files before regenerating:

/etc/keystone/ssl/private/cakey.pem already exists

/etc/keystone/ssl/certs/ca.pem already exists

/etc/keystone/ssl/private/signing_key.pem already exists

/etc/keystone/ssl/certs/signing_cert.pem already exists

 

# chown -R keystone:keystone /var/log/keystone

# chown -R keystone:keystone /etc/keystone/ssl

# chmod -R o-rwx /etc/keystone/ssl

  • Populate the database:

# su -s /bin/sh -c "keystone-manage db_sync" keystone

No handlers could be found for logger "oslo_config.cfg"

  • Enable and start the service:

# systemctl enable openstack-keystone.service

# systemctl start openstack-keystone.service

 

  • By default, expired authentication tokens are never deleted and accumulate in the database, so add a cron job to flush them:

# (crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush > /var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone

1.7.3 Create tenants, users, and roles

  • Create the admin user:

# export OS_SERVICE_TOKEN=11d5c31d5a96d7b42315

# export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0

# keystone tenant-create --name admin --description "Admin Tenant"

# keystone user-create --name admin --pass A0staryh --email root@localhost

# keystone role-create --name admin

# keystone user-role-add --tenant admin --user admin --role admin

  • Create the demo user:

# keystone tenant-create --name demo --description "Demo Tenant"

# keystone user-create --name demo --tenant demo --pass A0staryh --email demo@localhost

  • Create the service tenant:

# keystone tenant-create --name service --description "Service Tenant"

 

1.7.4 Create the service entity and API endpoints

  • Create the service entity:

# keystone service-create --name keystone --type identity --description "OpenStack Identity"

 

# keystone endpoint-create --service-id $(keystone service-list | awk '/ identity / {print $2}') --publicurl http://controller:5000/v2.0 --internalurl http://controller:5000/v2.0 --adminurl http://controller:35357/v2.0 --region regionOne

+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
|   adminurl  |   http://controller:35357/v2.0   |
|      id     | 71ed01478ea34f12bfe81cc9de80ff75 |
| internalurl |   http://controller:5000/v2.0    |
|  publicurl  |   http://controller:5000/v2.0    |
|    region   |            regionOne             |
|  service_id | efa9e2e0830b4bd4a8d6470f1d1c95d4 |
+-------------+----------------------------------+

 

  • Verify:

# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT

As the admin tenant and user, request an authentication token:

# keystone --os-tenant-name admin --os-username admin --os-password A0staryh --os-auth-url http://controller:35357/v2.0 token-get

As the admin tenant and user, list tenants to verify that the admin tenant and user exist:

# keystone --os-tenant-name admin --os-username admin --os-password A0staryh --os-auth-url http://controller:35357/v2.0 tenant-list

As the admin tenant and user, list users:

# keystone --os-tenant-name admin --os-username admin --os-password A0staryh --os-auth-url http://controller:35357/v2.0 user-list

As the admin tenant and user, list roles:

# keystone --os-tenant-name admin --os-username admin --os-password A0staryh --os-auth-url http://controller:35357/v2.0 role-list

  • Create client environment scripts:

# vi admin-openrc.sh

export OS_TENANT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=A0staryh

export OS_AUTH_URL=http://controller:35357/v2.0

 

# vi demo-openrc.sh

export OS_TENANT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=A0staryh

export OS_AUTH_URL=http://controller:5000/v2.0

 

# source admin-openrc.sh

 

1.8 Install the Image Service (Glance)

1.8.1 Create the database, service credentials, and API endpoints

  • Create the database:

# mysql -uroot -pA0staryh

MariaDB [(none)]> CREATE DATABASE glance;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'A0staryh';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'A0staryh';

  • Source the admin credentials to run admin-only commands:

# source admin-openrc.sh

  • Create the service credentials:

Create the glance user:

# keystone user-create --name glance --pass A0staryh

Add the admin role to the glance user:

# keystone user-role-add --user glance --tenant service --role admin

Create the glance service entity:

# keystone service-create --name glance --type image --description "OpenStack Image Service"

Create the Image Service API endpoints:

# keystone endpoint-create --service-id $(keystone service-list | awk '/ image / {print $2}') --publicurl http://controller:9292 --internalurl http://controller:9292 --adminurl http://controller:9292 --region regionOne

 

1.8.2 Install and configure the Image Service components

  • Install the packages:

# yum install openstack-glance python-glanceclient

  • Edit the configuration file vim /etc/glance/glance-api.conf:

 

[DEFAULT]

verbose = True

notification_driver = noop

 

[database]

connection = mysql://glance:A0staryh@controller/glance

 

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = glance

admin_password = A0staryh

[paste_deploy]

flavor = keystone

 

[glance_store]

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

 

 

Note: comment out all auth_host, auth_port, and auth_protocol options, because identity_uri already includes them.

  • Edit vim /etc/glance/glance-registry.conf:

[DEFAULT]

verbose = True

notification_driver = noop

[database]

connection = mysql://glance:A0staryh@controller/glance

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = glance

admin_password = A0staryh

[paste_deploy]

flavor = keystone

  • Populate the Image Service database:

# su -s /bin/sh -c "glance-manage db_sync" glance

  • Start the Image Service and configure it to start at boot:

# systemctl enable openstack-glance-api.service openstack-glance-registry.service

# systemctl start openstack-glance-api.service openstack-glance-registry.service

1.8.3 Verify

# mkdir /tmp/images

Download cirros-0.3.0-x86_64-disk.img and copy it into that directory (a sketch follows).
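A sketch, assuming internet access (this is the standard CirrOS download location):

# wget -P /tmp/images http://download.cirros-cloud.net/0.3.0/cirros-0.3.0-x86_64-disk.img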

# source admin-openrc.sh

# glance image-create --name "cirros-0.3.0-x86_64" --file /tmp/images/cirros-0.3.0-x86_64-disk.img --disk-format qcow2 --container-format bare --is-public True --progress

# glance image-list

2. Installing the Compute Service (Nova)

2.1 Create the database on the controller

  • Create the database:

# mysql -u root -p

MariaDB [(none)]> CREATE DATABASE nova;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'A0staryh';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'A0staryh';

  • Source the admin credentials to run admin-only commands:

# source admin-openrc.sh

  • Create the service credentials:

Create the nova user:

# keystone user-create --name nova --pass A0staryh

# keystone user-role-add --user nova --tenant service --role admin

# keystone service-create --name nova --type compute --description "OpenStack Compute"

# keystone endpoint-create --service-id $(keystone service-list | awk '/ compute / {print $2}') --publicurl http://controller:8774/v2/%\(tenant_id\)s --internalurl http://controller:8774/v2/%\(tenant_id\)s --adminurl http://controller:8774/v2/%\(tenant_id\)s --region regionOne

  • Install the packages:

# yum install openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

  • Edit the configuration file vim /etc/nova/nova.conf:

[DEFAULT]
rpc_backend = rabbit
my_ip = 10.0.0.11
rabbit_host = controller
rabbit_password = A0staryh
auth_strategy = keystone
vncserver_listen = 10.0.0.11
vncserver_proxyclient_address = 10.0.0.11
verbose = True

[database]
connection = mysql://nova:A0staryh@controller/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = A0staryh

[glance]
host = controller

 

  • Sync the Compute database:

# su -s /bin/sh -c "nova-manage db sync" nova

 

  • Start the services:

# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

2.2 Install and configure the compute node

  • Install the packages:

# yum install openstack-nova-compute sysfsutils

  • Edit the configuration file vim /etc/nova/nova.conf:

[DEFAULT]
verbose = True
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = A0staryh
auth_strategy = keystone
my_ip = 10.0.0.2
# (my_ip: the management-network IP address of this compute node)
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = 10.0.0.2
# (vncserver_proxyclient_address: the management-network IP address of this compute node)
novncproxy_base_url = http://controller:6080/vnc_auto.html

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = A0staryh

[glance]
host = controller

 

  • Finish the installation
    Determine whether your compute node supports hardware acceleration for virtual machines (see the note below):
    # egrep -c '(vmx|svm)' /proc/cpuinfo
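If this command returns 0, the node does not support hardware acceleration; the usual remedy, per the Kilo installation guide, is to make libvirt use QEMU instead of KVM in /etc/nova/nova.conf:

[libvirt]
virt_type = qemu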

Start the Compute service and its dependencies, and configure them to start automatically at boot:

# systemctl enable libvirtd.service openstack-nova-compute.service

# systemctl start libvirtd.service openstack-nova-compute.service

2.3 Verify from the controller

# source admin-openrc.sh

# nova service-list

# nova image-list

3. Installing the Networking Component

3.1 Using OpenStack Networking (Neutron)

3.1.1 Set up Neutron on the controller

  • Create the database:

# mysql -uroot -pA0staryh

MariaDB [(none)]> CREATE DATABASE neutron;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'A0staryh';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'A0staryh';

  • Source the admin credentials to run admin-only commands:
    # source admin-openrc.sh

  • Create the service credentials:

# keystone user-create --name neutron --pass A0staryh

# keystone user-role-add --user neutron --tenant service --role admin

# keystone service-create --name neutron --type network --description "OpenStack Networking"

# keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696 --region regionOne

  • Install the networking components:

# yum install -y openstack-neutron openstack-neutron-ml2 python-neutronclient which

  • Edit the configuration file vim /etc/neutron/neutron.conf:

[DEFAULT]
verbose = True
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = A0staryh
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_admin_auth_url = http://controller:35357/v2.0
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = SERVICE_TENANT_ID
nova_admin_password = A0staryh

(To obtain SERVICE_TENANT_ID:

# source admin-openrc.sh

# keystone tenant-get service)

[database]

connection = mysql://neutron:A0staryh@controller/neutron

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = A0staryh

  • Configure the Modular Layer 2 (ML2) plug-in

Edit the configuration file vim /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch

[ml2_type_gre]
tunnel_id_ranges = 1:1000

[securitygroup]
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

  • Configure Compute to use Networking

Edit the configuration file on the controller node, vim /etc/nova/nova.conf:

[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = A0staryh

 

  • Finish the installation

Create a symlink:

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Sync the database:

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade kilo" neutron

Restart the Compute services:

# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service

Start the Networking service and configure it to start at boot:

# systemctl enable neutron-server.service

# systemctl start neutron-server.service

  • Verify

Source the admin credentials to run admin-only commands:

# source admin-openrc.sh

List the loaded extensions to verify that a neutron-server process started successfully:

# neutron ext-list

3.1.2 Install and configure the network node

  • Preparation

Edit the configuration file vim /etc/sysctl.conf to include the following parameters:

net.ipv4.ip_forward=1

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

Apply the changes:

# sysctl -p

  • Install the networking components:

# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch

  • Configure the common networking components

Edit the configuration file vim /etc/neutron/neutron.conf:

[DEFAULT]

verbose = True

rpc_backend = rabbit

rabbit_host = controller

rabbit_password = A0staryh

auth_strategy = keystone

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = A0staryh

  • Configure the Modular Layer 2 (ML2) plug-in

Edit the configuration file vim /etc/neutron/plugins/ml2/ml2_conf.ini:

[ml2]

type_drivers = flat,gre

tenant_network_types = gre

mechanism_drivers = openvswitch

[ml2_type_flat]

flat_networks = external

[ml2_type_gre]

tunnel_id_ranges = 1:1000

[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]

local_ip = 10.0.0.11

###########################################################
# local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
# (here the controller's management-segment IP; if there is no
# separate management segment, use the controller node's IP)
###########################################################

enable_tunneling = True

bridge_mappings = external:br-ex

[agent]

tunnel_types = gre

 

  • Configure the Layer-3 (L3) agent

Edit the configuration file /etc/neutron/l3_agent.ini:

[DEFAULT]

verbose = True

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

use_namespaces = True

external_network_bridge = br-ex

router_delete_namespaces = True

  • Configure the DHCP agent

Edit the configuration file vim /etc/neutron/dhcp_agent.ini:

[DEFAULT]

verbose = True

interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

use_namespaces = True

dhcp_delete_namespaces = True

##############################################################################

Note: some cloud images ignore the DHCP MTU option; in that case, configure the MTU through metadata or another suitable method.

# vim /etc/neutron/dhcp_agent.ini

[DEFAULT]

dnsmasq_config_file = /etc/neutron/dnsmasq-neutron.conf

Create and edit the file /etc/neutron/dnsmasq-neutron.conf (DHCP option 26 sets the interface MTU; 1454 leaves room for GRE encapsulation overhead):

dhcp-option-force=26,1454

Kill any dnsmasq processes that are currently running:

# pkill dnsmasq

##############################################################################

  • Configure the metadata agent

Edit the configuration file /etc/neutron/metadata_agent.ini:

[DEFAULT]

auth_url = http://controller:5000/v2.0

auth_region = regionOne

admin_tenant_name = service

admin_user = neutron

admin_password = A0staryh

nova_metadata_ip = 10.0.0.11   # the controller's IP

metadata_proxy_shared_secret = A0staryh

  • On the controller node, edit the configuration file vim /etc/nova/nova.conf:

[neutron]

service_metadata_proxy = True

metadata_proxy_shared_secret = A0staryh   # must match the value in metadata_agent.ini

 

# systemctl restart openstack-nova-api.service

  • Configure the Open vSwitch (OVS) service

# systemctl enable openvswitch.service

# systemctl start openvswitch.service

# ovs-vsctl add-br br-ex

# ovs-vsctl add-port br-ex eth1

  • Finish the installation

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

# systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service

# systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Do not start the neutron-ovs-cleanup service directly.

  • Verify from the controller

# source admin-openrc.sh

# neutron agent-list

If this fails, it may be because the database sync on the controller was run with the wrong release name; re-run the sync command with juno changed to kilo.

3.1.3 Install and configure the compute node

  • Preparation

# vim /etc/sysctl.conf

net.ipv4.conf.all.rp_filter=0

net.ipv4.conf.default.rp_filter=0

# sysctl -p

  • Install the networking components:

# yum install openstack-neutron-ml2 openstack-neutron-openvswitch

  • Configure the common networking components:

# vim /etc/neutron/neutron.conf

[DEFAULT]

rpc_backend = rabbit

rabbit_host = controller

rabbit_password = A0staryh

auth_strategy = keystone

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

verbose = True

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = neutron

admin_password = A0staryh

  • Configure the Modular Layer 2 (ML2) plug-in:

# vim /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,gre

tenant_network_types = gre

mechanism_drivers = openvswitch

[ml2_type_gre]

tunnel_id_ranges = 1:1000

[securitygroup]

enable_security_group = True

enable_ipset = True

firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver

[ovs]

local_ip = 10.0.0.2

###########################################################
# local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
# (the management-network interface IP of the compute node)
###########################################################

enable_tunneling = True

[agent]

tunnel_types = gre

  • Configure the Open vSwitch (OVS) service

Start the OVS service and configure it to start at boot:

# systemctl enable openvswitch.service

# systemctl start openvswitch.service

  • Configure Compute to use Networking:

# vim /etc/nova/nova.conf

[DEFAULT]
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[neutron]
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = A0staryh

  • Finish the installation

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig

# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service

# systemctl restart openstack-nova-compute.service

# systemctl enable neutron-openvswitch-agent.service

# systemctl start neutron-openvswitch-agent.service

  • Verify operation

Run on the controller node:

# source admin-openrc.sh

# neutron agent-list

3.1.4 Create initial networks from the controller

This can be completed through the Dashboard; a CLI sketch follows.
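A CLI sketch following the usual Kilo pattern; the external subnet 203.0.113.0/24, its allocation pool, and the tenant subnet 192.168.1.0/24 are illustrative assumptions:

# source admin-openrc.sh

# neutron net-create ext-net --router:external --provider:physical_network external --provider:network_type flat

# neutron subnet-create ext-net 203.0.113.0/24 --name ext-subnet --allocation-pool start=203.0.113.101,end=203.0.113.200 --disable-dhcp --gateway 203.0.113.1

# source demo-openrc.sh

# neutron net-create demo-net

# neutron subnet-create demo-net 192.168.1.0/24 --name demo-subnet --gateway 192.168.1.1

# neutron router-create demo-router

# neutron router-interface-add demo-router demo-subnet

# neutron router-gateway-set demo-router ext-net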

3.2 Using legacy networking (nova-network)

3.2.1 Controller configuration

# vim /etc/nova/nova.conf

[DEFAULT]
network_api_class = nova.network.api.API
security_group_api = nova

# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service

3.2.2 Compute node configuration

# yum install openstack-nova-network openstack-nova-api

# vi /etc/nova/nova.conf

[DEFAULT]
network_api_class = nova.network.api.API
security_group_api = nova
firewall_driver = nova.virt.libvirt.firewall.IptablesFirewallDriver
network_manager = nova.network.manager.FlatDHCPManager
network_size = 254
allow_same_net_traffic = False
multi_host = True
send_arp_for_ha = True
share_dhcp_address = True
force_dhcp_release = True
flat_network_bridge = br100
flat_interface = INTERFACE_NAME
public_interface = INTERFACE_NAME

 

# systemctl enable openstack-nova-network.service openstack-nova-metadata-api.service

# systemctl start openstack-nova-network.service openstack-nova-metadata-api.service

3.2.3 Create the network from the controller

# source admin-openrc.sh

# nova network-create demo-net --bridge br100 --multi-host T --fixed-range-v4 10.0.1.0/24

# nova net-list

4. Installing the Dashboard (Horizon)

4.1.1 Install and configure

# yum install openstack-dashboard httpd mod_wsgi memcached python-memcached

# vim /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "controller"

ALLOWED_HOSTS = ['*']

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

Due to a packaging bug, the dashboard CSS can fail to load. Run the following command to work around it:

# chown -R apache:apache /usr/share/openstack-dashboard/static

Start the services:

# systemctl enable httpd.service memcached.service

# systemctl start httpd.service memcached.service

 

4.1.2 Verify

http://controller/dashboard

5. Installing the Block Storage Service (Cinder)

5.1.1 Create the database and configure the controller

  • Create the database:

# mysql -uroot -pA0staryh

MariaDB [(none)]> CREATE DATABASE cinder;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'A0staryh';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'A0staryh';

# source admin-openrc.sh

Create the cinder user:

# keystone user-create --name cinder --pass A0staryh

# keystone user-role-add --user cinder --tenant service --role admin

# keystone service-create --name cinderv2 --type volumev2 --description "OpenStack Block Storage"

# keystone endpoint-create --service-id $(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl http://controller:8776/v2/%\(tenant_id\)s --internalurl http://controller:8776/v2/%\(tenant_id\)s --adminurl http://controller:8776/v2/%\(tenant_id\)s --region regionOne

  • Install and configure the Block Storage components on the controller node:

# yum install openstack-cinder python-cinderclient python-oslo-db

# vim /etc/cinder/cinder.conf

[database]

connection = mysql://cinder:A0staryh@controller/cinder

[DEFAULT]

rpc_backend = rabbit

rabbit_host = controller

rabbit_password = A0staryh

auth_strategy = keystone

my_ip = 10.0.0.11

verbose = True

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = cinder

admin_password = A0staryh

# su -s /bin/sh -c "cinder-manage db sync" cinder

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

5.1.2 Set up block storage on the compute node

  • Preparation

# yum install lvm2

# systemctl enable lvm2-lvmetad.service

# systemctl start lvm2-lvmetad.service

# pvcreate /dev/sdb1

# vgcreate cinder-volumes /dev/sdb1

# vim /etc/lvm/lvm.conf

devices {
    filter = [ "a/sdb/", "r/.*/" ]
}

  • Install and configure the volume components:

# yum install openstack-cinder targetcli python-oslo-db MySQL-python

# vim /etc/cinder/cinder.conf

         [database]

connection = mysql://cinder:A0staryh@controller/cinder

[DEFAULT]

rpc_backend = rabbit

rabbit_host = controller

rabbit_password = A0staryh

auth_strategy = keystone

my_ip = 10.0.0.2

#################################################
# my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
# (the storage node's management-network IP address)
#################################################

glance_host = controller

iscsi_helper = lioadm

verbose = True

[keystone_authtoken]

auth_uri = http://controller:5000/v2.0

identity_uri = http://controller:35357

admin_tenant_name = service

admin_user = cinder

admin_password = A0staryh

  • Start the services:

# systemctl enable openstack-cinder-volume.service target.service

# systemctl start openstack-cinder-volume.service target.service

  • Verify operation

Run on the controller:

# source admin-openrc.sh

# cinder service-list

Create a 1 GB volume:

# source demo-openrc.sh

# cinder create --display-name demo-volume1 1

 

 

 

6. Installing the Object Storage Service (Swift)

6.1.1 Create the service credentials and configure the controller

  • Create the service credentials:

# keystone user-create --name swift --pass A0staryh

# keystone user-role-add --user swift --tenant service --role admin

# keystone service-create --name swift --type object-store --description "OpenStack Object Storage"

# keystone endpoint-create --service-id $(keystone service-list | awk '/ object-store / {print $2}') --publicurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --internalurl 'http://controller:8080/v1/AUTH_%(tenant_id)s' --adminurl http://controller:8080 --region regionOne

  • Configure the controller node:

# yum install openstack-swift-proxy python-swiftclient python-keystone-auth-token python-keystonemiddleware memcached

Copy the sample configuration file proxy-server.conf-sample to /etc/swift (a sketch follows).
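A sketch of one common approach for Kilo, fetching the sample from the swift source tree (the URL is an assumption based on the stable/kilo branch layout):

# curl -o /etc/swift/proxy-server.conf https://git.openstack.org/cgit/openstack/swift/plain/etc/proxy-server.conf-sample?h=stable/kilo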

# vim /etc/swift/proxy-server.conf

[DEFAULT]

bind_port = 8080

user = swift

swift_dir = /etc/swift

[pipeline:main]
pipeline = authtoken cache healthcheck keystoneauth proxy-logging proxy-server

[app:proxy-server]
allow_account_management = true
account_autocreate = true

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,_member_

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = swift
admin_password = A0staryh
delay_auth_decision = true

[filter:cache]
memcache_servers = 127.0.0.1:11211

 

 

6.1.2 Install and configure the storage node

# yum install xfsprogs rsync

# mkfs.xfs /dev/sda5

# vi /etc/fstab

/dev/sda5 /srv/node/sda5 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

# mkdir -p /srv/node/sda5

# mount /srv/node/sda5

# vim /etc/rsyncd.conf    

uid = swift

gid = swift

log file = /var/log/rsyncd.log

pid file = /var/run/rsyncd.pid

address = 10.0.0.2

[account]

max connections = 2

path = /srv/node/

read only = false

lock file = /var/lock/account.lock

[container]

max connections = 2

path = /srv/node/

read only = false

lock file = /var/lock/container.lock

[object]

max connections = 2

path = /srv/node/

read only = false

lock file = /var/lock/object.lock

 

# systemctl enable rsyncd.service

# systemctl start rsyncd.service

 

 

 

# yum install openstack-swift-account openstack-swift-container openstack-swift-object

# vim /etc/swift/account-server.conf

        

[DEFAULT]

bind_ip = 10.0.0.2   # the storage node's management address

bind_port = 6002

user = swift

swift_dir = /etc/swift

devices = /srv/node

[pipeline:main]

pipeline = account-server

(The official manual uses pipeline = healthcheck recon account-server, but that did not work in this setup.)

[filter:recon]

recon_cache_path =/var/cache/swift

 

# vim /etc/swift/container-server.conf

         [DEFAULT]

bind_ip = 10.0.0.2   # the storage node's management address

bind_port = 6001

user = swift

swift_dir = /etc/swift

devices = /srv/node

 

[pipeline:main]

pipeline = container-server

(The official manual uses pipeline = healthcheck recon container-server, but that did not work in this setup; note the app for this file is container-server, not account-server.)

 [filter:recon]

recon_cache_path =/var/cache/swift

 

# vim /etc/swift/object-server.conf

[DEFAULT]

bind_ip = 10.0.0.2   # the storage node's management address

bind_port = 6000

user = swift

swift_dir = /etc/swift

devices = /srv/node

[pipeline:main]

pipeline = object-server

(The official manual uses pipeline = healthcheck recon object-server, but that did not work in this setup; note the app for this file is object-server.)

 [filter:recon]

recon_cache_path =/var/cache/swift

# chown -R swift:swift /srv/node

# mkdir -p /var/cache/swift

# chown -R swift:swift /var/cache/swift

6.1.3 Create the initial rings on the controller

  • Account ring (the arguments 10 3 1 mean 2^10 partitions, 3 replicas, and at most one partition move per hour):

# cd /etc/swift

# swift-ring-builder account.builder create 10 3 1

# swift-ring-builder account.builder add r1z1-10.0.0.2:6002/sda5 100

# swift-ring-builder account.builder

# swift-ring-builder account.builder rebalance

  • Container ring

# cd /etc/swift

# swift-ring-builder container.builder create 10 3 1

# swift-ring-builder container.builder add r1z1-10.0.0.2:6001/sda5 100

# swift-ring-builder container.builder

# swift-ring-builder container.builder rebalance

  • Object ring

# cd /etc/swift

# swift-ring-builder object.builder create 10 3 1

# swift-ring-builder object.builder add r1z1-10.0.0.2:6000/sda5 100

# swift-ring-builder object.builder

# swift-ring-builder object.builder rebalance

  • Distribute the ring configuration files to the storage nodes (a sketch follows)
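A sketch of copying the ring files produced by rebalance to each storage node (the storage-node address 10.0.0.2 matches the rest of this guide):

# scp /etc/swift/*.ring.gz root@10.0.0.2:/etc/swift/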

  • Finish the installation

  • On the controller node:

# vim /etc/swift/swift.conf

[swift-hash]
swift_hash_path_suffix = A0staryh
swift_hash_path_prefix = A0staryh

[storage-policy:0]
name = Policy-0
default = yes

  • Copy swift.conf to every storage node and to any additional nodes running the proxy service (e.g. with scp, as above).

  • On the controller node and on any other nodes running the proxy service:

# systemctl enable openstack-swift-proxy.service memcached.service

# systemctl start openstack-swift-proxy.service memcached.service

  • On the storage nodes, start the Object Storage services and configure them to start at boot:

# systemctl enable openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

# systemctl start openstack-swift-account.service openstack-swift-account-auditor.service openstack-swift-account-reaper.service openstack-swift-account-replicator.service

# systemctl enable openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

# systemctl start openstack-swift-container.service openstack-swift-container-auditor.service openstack-swift-container-replicator.service openstack-swift-container-updater.service

# systemctl enable openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

# systemctl start openstack-swift-object.service openstack-swift-object-auditor.service openstack-swift-object-replicator.service openstack-swift-object-updater.service

6.1.4 Verify operation

# source demo-openrc.sh

# swift stat
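Optionally, a quick upload/download check (the container name demo-container1 and the test file are illustrative):

# swift upload demo-container1 demo-openrc.sh

# swift list

# swift download demo-container1 demo-openrc.sh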


This article is from the "Directed By Huboss" blog; please contact the author before reposting.


Original source: http://huboss.blog.51cto.com/9883568/1742890
