
The four core OpenStack services, and how to build an OpenStack environment



[Figure: OpenStack virtual machine creation flowchart]

I. The four core OpenStack services and their components

1. Keystone identity service: core concepts

1) User:

A user of the OpenStack platform.

2) Role:

A role a user is added to; it grants that user its operating permissions.

3) Tenant:

A collection of resources owned by a person, project, or organization. One tenant can contain multiple users, and permissions can be assigned per user to control access to the tenant's resources.

4) Token:

A credential. After authentication, Keystone returns a token to the client, which allows password-free access for a period of time. This is similar to cookie-based session persistence but not identical: a cookie records browser login state and cannot carry per-user access permissions, whereas a token holds the user's authentication information and is tied to the user's permissions.
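To make the token workflow concrete, here is a minimal sketch of requesting a token directly from the Keystone v3 API with curl. The IP, user name, and password are the ones used in the deployment described below; treat the exact values as assumptions and adjust them to your environment:

~]#curl -i -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"], "password": {"user": {"name": "admin", "domain": {"name": "Default"}, "password": "keystone"}}}, "scope": {"project": {"name": "admin", "domain": {"name": "Default"}}}}}' \
  http://192.168.23.100:5000/v3/auth/tokens
# The token itself is returned in the X-Subject-Token response header; later API
# calls present it back in the X-Auth-Token request header instead of a password.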

2. Glance image service: components and functions

1) glance-api

Receives image upload, download, and delete requests.

2) glance-registry

Handles interaction with the MySQL database, storing and retrieving image metadata. Two tables in the database hold image information:
the image table: stores the format, size, and other attributes of an image file
the image-property table: stores customized image metadata

3) image-store

The interface through which images are saved and read; it is only an interface.

4) Note

The Glance service does not need the message queue, but it does require Keystone authentication and a database connection.

3. Nova compute service: components and functions (one of the earliest OpenStack components)

1) nova-api

Receives and responds to external requests, and forwards them to the other components through the message queue.

2) nova-compute

Creates virtual machines by calling the KVM module through libvirt. Nova is split across controller and compute nodes, and the Nova components communicate with each other through the message queue.

3) nova-scheduler

Selects the physical host on which a new virtual machine will be created.

4) nova-placement-api

Tracks resource provider inventory and usage, e.g. compute node resource pools and IP allocation, and works with the scheduler to place instances on physical hosts.

5) nova-conductor

The middle layer between compute nodes and the database. When nova-compute needs to read or update instance records, it does not talk to the database directly but goes through the conductor. In larger clusters the conductor should be scaled out horizontally, but never onto the compute nodes themselves.

6) nova-novncproxy

The VNC proxy used to display the virtual machine's console in a browser.

7) nova-metadata-api

Serves metadata requests from virtual machines.

4. Neutron networking service: components and functions (formerly nova-network, since renamed Neutron)

1) Two network types: self-service and provider

Self-service network: users can create their own networks and reach external networks through virtual routers; this type is rarely used.
Provider network: virtual machine networks are bridged onto the physical network and must be in the same network segment as the physical hosts; this is the most common choice.

2) neutron-server

Exposes the OpenStack networking API, receives requests, and dispatches them to plugins.

3) plugin

Processes the requests received by neutron-server, maintains logical network state, and calls agents to carry out the work.

4) neutron-linuxbridge-agent

Handles plugin requests and makes sure the network provider implements the requested network functions.

5) Message queue

neutron-server, the agents, and the plugins communicate and call each other through the message queue.

6) Network provider

The virtual or physical network device that provides connectivity, e.g. linux-bridge or a physical switch that supports Neutron. Networks, subnets, ports, routes, and so on are all stored in the database.

II. Environment preparation (all nodes run CentOS 7.2)

1. controll-node (controller node)

1) NICs:

            eth0:192.168.1.10/24   
            eth1:192.168.23.100/24   

2) Required packages:

            python-openstackclient  # OpenStack client
            python2-PyMySQL  # MySQL connector for Python
            mariadb  # MySQL client, used to test database connectivity
            python-memcached  # memcached client library
            openstack-keystone  # identity service
            httpd
            mod_wsgi  # WSGI module for httpd
            openstack-glance  # image service
            openstack-nova-api  # receives and responds to external requests
            openstack-nova-conductor
            openstack-nova-console
            openstack-nova-novncproxy
            openstack-nova-scheduler
            openstack-nova-placement-api
            openstack-neutron
            openstack-neutron-ml2  # ML2 layer-2 plugin
            openstack-neutron-linuxbridge
            ebtables
            openstack-dashboard

2. compute-node (compute node)

1) NICs:

     eth0:192.168.1.10/24   
     eth1:192.168.23.201/24

2) Required packages:

     python-openstackclient
     openstack-nova-compute
     openstack-neutron-linuxbridge
     ebtables
     ipset 

3. mysql-node (database node)

1) NICs:

    eth0:192.168.1.41/24 

2) Required packages:

     python-openstackclient  
     mariadb 
     mariadb-server  
     rabbitmq-server   
     memcached

4. Things to disable and enable before starting, so the lab can proceed (see the sketch after this list)

1) The firewall must be disabled.

2) NetworkManager must be disabled: otherwise NIC bonding and bridging will not take effect.

3) SELinux must be disabled: it can cause network connectivity problems.

4) Enable chrony time synchronization and keep all nodes in sync; otherwise the controller node may fail to find the compute nodes and the lab cannot proceed.
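A minimal sketch of these preparation steps, run on each node (assuming the controller, 192.168.23.100, acts as the chrony time source for the other nodes):

~]#systemctl stop firewalld NetworkManager && systemctl disable firewalld NetworkManager  # disable the firewall and NetworkManager
~]#setenforce 0  # disable SELinux for the running session
~]#sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config  # and across reboots
~]#yum install chrony -y
~]#vim /etc/chrony.conf
server 192.168.23.100 iburst  # on the non-controller nodes, sync from the controller
~]#systemctl enable chronyd && systemctl start chronyd
~]#chronyc sources  # verify that the time source is reachable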

III. Building OpenStack (the Ocata release)

1. Set up the OpenStack repository and install base packages

1) Install the OpenStack repository on both controll-node and compute-node:

~]#yum install centos-release-openstack-ocata -y 

2) Install the OpenStack client on all nodes:

~]#yum install python-openstackclient -y

3) Install the database connector packages on controll-node:

Package for talking to memcached:
~]#yum install python-memcached -y
Package for connecting to the MySQL database:
~]#yum install python2-PyMySQL -y
MySQL client, for remote connection tests:
~]#yum install mariadb -y

4) Install the MySQL database, the message queue, memcached, and so on, on mysql-node

Install the MySQL database:
     ~]#yum install mariadb mariadb-server -y
Edit the MySQL configuration file:
     ~]#vim /etc/my.cnf.d/openstack.cnf
    [mysqld]
    bind-address = 192.168.1.41
    default-storage-engine = innodb
    innodb_file_per_table = on
    max_connections = 4096
    collation-server = utf8_general_ci
    character-set-server = utf8
    ……
~]#vim /etc/my.cnf
……
[mysqld]
bind-address = 192.168.1.41
……
Start the MySQL service:
~]#systemctl enable mariadb.service && systemctl start mariadb.service
Run the secure-installation script to remove anonymous users and passwordless logins and keep the database safe:
~]#mysql_secure_installation
Install the RabbitMQ message queue:
~]#yum install rabbitmq-server -y
~]#systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
~]#rabbitmqctl add_user openstack openstack  # add the openstack user
~]#rabbitmqctl set_permissions openstack ".*" ".*" ".*"  # grant configure, write, and read permissions
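To double-check the new RabbitMQ account, two read-only commands can be run (purely a sanity check, not part of the install):

~]#rabbitmqctl list_users  # the openstack user should appear in the list
~]#rabbitmqctl list_permissions  # it should hold ".*" ".*" ".*" on the default vhost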
Install memcached:
~]#yum install memcached -y
~]#vim /etc/sysconfig/memcached
     OPTIONS="-l 127.0.0.1,::1,192.168.1.41"
~]#systemctl enable memcached.service && systemctl start memcached.service
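A quick connectivity check for memcached from another node (assuming nc from the nmap-ncat package is available):

~]#echo stats | nc 192.168.1.41 11211 | head -5  # a listening memcached answers with STAT lines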

2. Deploying the Keystone identity service

1) On mysql-node, create the keystone database and a privileged keystone user

MariaDB [(none)]> CREATE DATABASE keystone;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
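The grant can be verified from controll-node with the MariaDB client installed earlier (an optional sanity check):

~]#mysql -h 192.168.1.41 -u keystone -pkeystone -e 'SHOW DATABASES;'  # the keystone database should be listed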

2) On controll-node, install the identity service packages and edit the identity service configuration

~]#yum install openstack-keystone httpd mod_wsgi -y
~]#vim /etc/keystone/keystone.conf
[database]
# ...
connection = mysql+pymysql://keystone:keystone@192.168.1.41/keystone  # database to connect to

[token]
# ...
provider = fernet

3) Run the Keystone initialization commands on controll-node

~]#su -s /bin/sh -c "keystone-manage db_sync" keystone  # populate the keystone database
~]#keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone  # initialize the fernet key repository
~]#keystone-manage credential_setup --keystone-user keystone --keystone-group keystone  # initialize the credential keys
Bootstrap the Keystone service:
~]#keystone-manage bootstrap --bootstrap-password keystone --bootstrap-admin-url http://192.168.23.100:35357/v3/ --bootstrap-internal-url http://192.168.23.100:5000/v3/ --bootstrap-public-url http://192.168.23.100:5000/v3/ --bootstrap-region-id RegionOne
Edit the httpd configuration:
~]#vim /etc/httpd/conf/httpd.conf
ServerName 192.168.23.100
Create the symlink:
~]#ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the httpd service:
~]#systemctl enable httpd.service && systemctl start httpd.service
Export the OpenStack admin account, password, and endpoint variables:
~]#export OS_USERNAME=admin
~]#export OS_PASSWORD=keystone
~]#export OS_PROJECT_NAME=admin
~]#export OS_USER_DOMAIN_NAME=Default
~]#export OS_PROJECT_DOMAIN_NAME=Default
~]#export OS_AUTH_URL=http://192.168.23.100:35357/v3
~]#export OS_IDENTITY_API_VERSION=3

4) Create projects, users, and roles on controll-node

Create a service project:
~]#openstack project create --domain default --description "Service Project" service
Create a demo project:
~]#openstack project create --domain default --description "Demo Project" demo
Create a demo user:
~]#openstack user create --domain default --password-prompt demo
Create a user role:
~]#openstack role create user
Add the demo user to the demo project and grant it the user role:
~]#openstack role add --project demo --user demo user
Edit keystone-paste.ini and remove the admin_token_auth option from the sections below, so the temporary bootstrap token mechanism is disabled for safety:
~]#vim /etc/keystone/keystone-paste.ini
remove 'admin_token_auth' from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
Unset the temporary password and auth-URL variables:
~]#unset OS_AUTH_URL OS_PASSWORD
Request an authentication token as the admin user (the admin password is prompted for):
~]#openstack --os-auth-url http://192.168.23.100:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Request an authentication token as the demo user (the demo password is prompted for):
~]#openstack --os-auth-url http://192.168.23.100:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue

5) Create credential scripts on controll-node for later access to the services

admin credential script:
~]#vim /data/admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_AUTH_URL=http://192.168.23.100:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
~]#chmod +x /data/admin-openrc
demo credential script:
~]#vim /data/demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.23.100:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
~]#chmod +x /data/demo-openrc
Source the admin script and test access:
~]#. admin-openrc
~]#openstack token issue

3. Deploying the Glance image service

1) On mysql-node, create the glance database and a privileged glance user:

MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';

2) Run the Glance-related commands on controll-node

Source the admin script so the commands run as admin:
~]#. admin-openrc
Create the glance user:
~]#openstack user create --domain default --password-prompt glance
Grant the glance user the admin role on the service project:
~]#openstack role add --project service --user glance admin
Create a service named glance of type image:
~]#openstack service create --name glance --description "OpenStack Image" image
Create the endpoints:
~]#openstack endpoint create --region RegionOne image public http://192.168.23.100:9292  # public endpoint
~]#openstack endpoint create --region RegionOne image internal http://192.168.23.100:9292  # internal endpoint
~]#openstack endpoint create --region RegionOne image admin http://192.168.23.100:9292  # admin endpoint
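A quick readback to confirm all three endpoints were registered:

~]#openstack endpoint list --service image  # should list the public, internal, and admin endpoints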

3) Install the Glance packages on controll-node and edit the Glance configuration files

~]#yum install openstack-glance -y
~]#vim /etc/glance/glance-api.conf
[database]
# ...
connection = mysql+pymysql://glance:glance@192.168.1.41/glance  # database to connect to

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211  # the memcached server
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance  # the glance service user
password = glance  # the glance service user's password

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/  # directory where image files are stored

~]#vim /etc/glance/glance-registry.conf
[database]
# ...
connection = mysql+pymysql://glance:glance@192.168.1.41/glance  # the glance database

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance

[paste_deploy]
# ...
flavor = keystone
Populate the glance database with Glance's tables:
~]#su -s /bin/sh -c "glance-manage db_sync" glance
Start all the Glance services:
~]#systemctl enable openstack-glance-api.service openstack-glance-registry.service
~]#systemctl start openstack-glance-api.service openstack-glance-registry.service
Source the admin script:
~]#. admin-openrc
Download the cirros image:
~]#wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
Load the image into the Glance service:
~]#openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
Verify that the image was added successfully:
~]#openstack image list
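To inspect the stored metadata (the fields kept in the image table described earlier), the image can also be shown by name:

~]#openstack image show cirros  # prints disk_format, container_format, size, checksum, visibility, etc.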

4. Deploying the Nova service

1) On mysql-node, create the Nova databases and a privileged nova user

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova123456';

2) Run the Nova-related commands on controll-node

Source the admin script:
~]#. admin-openrc
Create the nova user:
~]#openstack user create --domain default --password-prompt nova
Grant the nova user the admin role:
~]#openstack role add --project service --user nova admin
Create a service named nova of type compute:
~]#openstack service create --name nova --description "OpenStack Compute" compute
Create the endpoints:
~]#openstack endpoint create --region RegionOne compute public http://192.168.23.100:8774/v2.1  # public endpoint
~]#openstack endpoint create --region RegionOne compute internal http://192.168.23.100:8774/v2.1  # internal endpoint
~]#openstack endpoint create --region RegionOne compute admin http://192.168.23.100:8774/v2.1  # admin endpoint
Create the placement user:
~]#openstack user create --domain default --password-prompt placement
Grant the placement user the admin role:
~]#openstack role add --project service --user placement admin
Create a service named placement of type placement:
~]#openstack service create --name placement --description "Placement API" placement
Create the endpoints:
~]#openstack endpoint create --region RegionOne placement public http://192.168.23.100:8778  # public endpoint
~]#openstack endpoint create --region RegionOne placement internal http://192.168.23.100:8778  # internal endpoint
~]#openstack endpoint create --region RegionOne placement admin http://192.168.23.100:8778  # admin endpoint

3) Install the Nova packages on controll-node and edit the Nova configuration

~]#yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

 ~]#vim /etc/nova/nova.conf
[api_database]
# ...
connection = mysql+pymysql://nova:nova123456@192.168.1.41/nova_api  # the nova_api database

[database]
# ...
connection = mysql+pymysql://nova:nova123456@192.168.1.41/nova  # the nova database

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@192.168.1.41  # the message queue
my_ip = 192.168.23.100  # management IP of the controller node (optional)
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver  # disable nova's built-in firewall driver

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211  # the memcached server
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova  # the nova service user
password = nova123456  # the nova service user's password

[vnc]
enabled = true
# ...
vncserver_listen = $my_ip  # VNC listen address
vncserver_proxyclient_address = $my_ip  # VNC proxy client address

[glance]
# ...
api_servers = http://192.168.23.100:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp  # lock file directory

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.23.100:35357/v3
username = placement
password = placement

~]#vim /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
     <IfVersion >= 2.4>
            Require all granted
     </IfVersion>
     <IfVersion < 2.4>
            Order allow,deny
            Allow from all
     </IfVersion>
</Directory>
Restart the httpd service:
~]#systemctl restart httpd

4) With the configuration done, import the data and start all the Nova services on controll-node

Import Nova's tables into the corresponding MySQL databases:
~]#su -s /bin/sh -c "nova-manage api_db sync" nova
~]#su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
~]#su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
~]#su -s /bin/sh -c "nova-manage db sync" nova
List the cells:
~]#nova-manage cell_v2 list_cells
Start all the Nova services:
~]#systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
~]#systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

5) Deploy the Nova service on compute-node

Install the Nova packages on the compute node and edit the Nova configuration:
~]#yum install openstack-nova-compute -y
~]#vim /etc/nova/nova.conf
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@192.168.1.41
my_ip = 192.168.23.201  # compute node IP

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova123456

[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://192.168.23.100:6080/vnc_auto.html

[glance]
# ...
api_servers = http://192.168.23.100:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://192.168.23.100:35357/v3
username = placement
password = placement
Check whether the compute node supports hardware virtualization; a return value of 1 or greater means it does:
~]#egrep -c '(vmx|svm)' /proc/cpuinfo
If the compute node does not support hardware acceleration, set the following [libvirt] option:
~]#vim /etc/nova/nova.conf
[libvirt]
# ...
virt_type = qemu
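An optional companion check: if the CPU does support acceleration, confirm the KVM kernel modules are actually loaded before leaving virt_type at its kvm default:

~]#lsmod | grep kvm  # expect kvm together with kvm_intel or kvm_amd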

6) Start the Nova services on compute-node after finishing the configuration above

~]#systemctl enable libvirtd.service openstack-nova-compute.service
~]#systemctl start libvirtd.service openstack-nova-compute.service
Note: if the nova-compute service fails to start, check /var/log/nova/nova-compute.log. The error message "AMQP server on controller:5672 is unreachable" likely indicates that the firewall on the controller node is blocking access to port 5672. Open port 5672 on the controller node and restart the nova-compute service on the compute node.
Source the admin script:
~]#. admin-openrc
List the hypervisors:
~]#openstack hypervisor list
Discover the compute hosts and add them to the cell database:
~]#su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
When a new compute node is added later, run nova-manage cell_v2 discover_hosts on the controller again,
or set a periodic discovery interval:
~]#vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300  # pick a suitable interval, in seconds

7) With the configuration done, check the Nova services on controll-node

~]#. admin-openrc
List all the compute services:
~]#openstack compute service list
~]#openstack catalog list
~]#openstack image list
Check Nova's upgrade status:
~]#nova-status upgrade check

5. Deploying the Neutron networking service

1) On mysql-node, create the neutron database and a privileged neutron user

MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';

2) Run the Neutron-related commands on controll-node

Source the admin script so the commands run as admin:
~]#. admin-openrc
Create the neutron user:
~]#openstack user create --domain default --password-prompt neutron
Grant the neutron user the admin role:
~]#openstack role add --project service --user neutron admin
Create a service named neutron of type network:
~]#openstack service create --name neutron --description "OpenStack Networking" network
Create the endpoints:
~]#openstack endpoint create --region RegionOne network public http://192.168.23.100:9696  # public endpoint
~]#openstack endpoint create --region RegionOne network internal http://192.168.23.100:9696  # internal endpoint
~]#openstack endpoint create --region RegionOne network admin http://192.168.23.100:9696  # admin endpoint

3) Install the Neutron packages on controll-node and edit the related configuration files (using the provider network type as the example)

~]#yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y

~]#vim /etc/neutron/neutron.conf
[database]
# ...
connection = mysql+pymysql://neutron:neutron@192.168.1.41/neutron  # must match the password set in the GRANT above

[DEFAULT]
# ...
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:openstack@192.168.1.41
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[nova]
# ...
auth_url = http://192.168.23.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova123456  # the nova service user's password, as set when the nova user was created

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

~]#vim /etc/neutron/plugins/ml2/ml2_conf.ini  # ML2 (Modular Layer 2) plugin configuration
[ml2]
# ...
type_drivers = flat,vlan  # enable the flat (bridged) and vlan network types
tenant_network_types =  # left empty to prevent users from creating their own subnets
mechanism_drivers = linuxbridge  # use the linuxbridge mechanism driver
extension_drivers = port_security  # enable the port security extension driver

[ml2_type_flat]
# ...
flat_networks = provider  # name of the provider virtual network

[securitygroup]
# ...
enable_ipset = true

~]#vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1  # map the virtual network onto physical NIC eth1; the name provider must match the flat_networks value above

[vxlan]
enable_vxlan = false  # disable VXLAN, which also keeps users from creating their own networks

[securitygroup]  # security groups restrict which outside hosts may reach the instances
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

~]#vim /etc/neutron/dhcp_agent.ini  # DHCP agent configuration
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

~]#vim /etc/neutron/metadata_agent.ini
[DEFAULT]
# ...
nova_metadata_ip = 192.168.23.100
metadata_proxy_shared_secret = 123456

~]#vim /etc/nova/nova.conf
[neutron]
# ...
url = http://192.168.23.100:9696
auth_url = http://192.168.23.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = 123456
……

4) With the configuration done, initialize the Neutron database and start all the Neutron services on controll-node

Symlink the plugin configuration used for network initialization:
~]#ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the neutron database with Neutron's tables:
~]#su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the nova-api service:
~]#systemctl restart openstack-nova-api.service
Start all the Neutron services:
~]#systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
~]#systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

The provider network type works at layer 2, so the layer-3 service below is not needed; enable it only when using the self-service network type:
~]#systemctl enable neutron-l3-agent.service && systemctl start neutron-l3-agent.service
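The running agents can now be read back from the controller (a sanity check; after step 5) below, the compute node's linuxbridge agent should appear here as well):

~]#. admin-openrc
~]#openstack network agent list  # the linuxbridge, DHCP, and metadata agents should all be listed as alive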

5) Deploy the Neutron service on compute-node

Install the Neutron packages on the compute node and edit the Neutron configuration files:
~]#yum install openstack-neutron-linuxbridge ebtables ipset -y
~]#vim /etc/neutron/neutron.conf
Under [database], comment out every connection option.

[DEFAULT]
# ...
transport_url = rabbit://openstack:openstack@192.168.1.41
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://192.168.23.100:5000
auth_url = http://192.168.23.100:35357
memcached_servers = 192.168.1.41:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

Choose the same networking options as on the controller node:
~]#vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth1  # the virtual network name provider must match the controller node's setting

[vxlan]
enable_vxlan = false  # VXLAN disabled here as well

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
~]#vim /etc/nova/nova.conf
[neutron]
# ...
url = http://192.168.23.100:9696
auth_url = http://192.168.23.100:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
Restart the Nova compute service:
~]#systemctl restart openstack-nova-compute.service
Start the Neutron agent on the compute node:
~]#systemctl enable neutron-linuxbridge-agent.service && systemctl start neutron-linuxbridge-agent.service

6. Deploying the dashboard web front end

1) Install the dashboard packages on controll-node and edit the dashboard configuration

~]#yum install openstack-dashboard -y
~]#vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "192.168.23.100"  # the host running the API services
ALLOWED_HOSTS = ['*']  # allow all hosts to reach the dashboard

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
        'default': {
                 'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
                 'LOCATION': '192.168.1.41:11211',  # memcached runs on mysql-node in this deployment
        }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
        "identity": 3,
        "image": 2,
        "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

OPENSTACK_NEUTRON_NETWORK = {  # provider networks only, so the self-service features stay off
        'enable_router': False,
        'enable_quotas': False,
        'enable_ipv6': False,
        'enable_distributed_router': False,
        'enable_ha_router': False,
        'enable_lb': False,
        'enable_firewall': False,
        'enable_vpn': False,
        'enable_fip_topology_check': False,
}

TIME_ZONE = "Asia/Shanghai"  # time zone
Restart the httpd and memcached services:
~]#systemctl restart httpd.service memcached.service
Test access to the dashboard page in a browser (the CentOS package serves it at http://192.168.23.100/dashboard):

[Figure: dashboard login page]
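A command-line reachability check can be done first (a sketch; the /dashboard path is the CentOS packaging default):

~]#curl -I http://192.168.23.100/dashboard  # expect HTTP 200 or a redirect to the login page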

7. Create a virtual network

1) Source the admin credential script

~]#. admin-openrc

2) Create a shared, external provider network of type flat; both the physical network label and the network name are provider:

~]#openstack network create  --share --external --provider-physical-network provider --provider-network-type flat provider

3) On the provider network, create a subnet (also named provider) with the allocation pool 192.168.23.10-192.168.23.99 and the subnet range 192.168.23.0/24:

~]#openstack subnet create --network provider --allocation-pool start=192.168.23.10,end=192.168.23.99 --dns-nameserver 192.168.23.1 --gateway 192.168.23.1 --subnet-range 192.168.23.0/24 provider
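The new network and subnet can be read back as a sanity check:

~]#openstack network list
~]#openstack subnet list  # the provider subnet with range 192.168.23.0/24 should appear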

8. Two ways to create a virtual machine instance

Method 1: create it in the web dashboard

1) As admin, first create a flavor (instance type).
2) Open the Flavors page.
3) Click Create Flavor.
4) Fill in the flavor's virtual CPU count, memory, and disk size, then click Create Flavor.
5) Review the newly created flavor.
6) Switch to the Project view and open Instances.
7) Click Launch Instance.
8) Fill in the virtual machine's name.
9) Select the cirros image as the boot source.
10) Select the flavor created above, then click Launch Instance at the bottom right.
11) The instance is spawning; once spawning finishes, creation is complete.

Method 2: create the virtual machine from the command line on the controller node

1) The creation command

~]#openstack server create --flavor m1.nano --image cirros   --nic net-id=06a17cc8-57b8-4823-a6df-28da24061d8a --security-group default test-vm

2) Notes on the command options

    server create  # create a virtual machine instance
    --flavor  # the flavor: the instance's vCPU count, memory size, disk size, and other sizing information
    --nic net-id=  # the ID of the network the instance attaches to (taken from openstack network list)
    --image  # the name of the image to boot from
    --security-group default  # attach the default security group
    test-vm  # the name of the new virtual machine
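Once the instance is created, its state and console can be checked from the same shell (an optional readback):

~]#openstack server list  # wait for the status to reach ACTIVE
~]#openstack console url show test-vm  # returns the noVNC URL for the instance's console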


Original article: https://blog.51cto.com/14234542/2415140
