OpenStack Installation and Deployment

Date: 2023-03-09 07:04:55

Environment Preparation

This guide deploys the OpenStack Liberty release (installed from the centos-release-openstack-liberty repository below); the controller node and the compute node are connected using Linux bridge networking.

1. Two servers

  • controller 172.16.201.9
  • compute01 172.16.201.8

2. Basic environment configuration

  • Add hosts entries

    Add the following entries on every server so that each node can be reached by hostname:
172.16.201.9 controller
172.16.201.8 compute01
  • Disable the firewall
systemctl stop firewalld.service
systemctl disable firewalld.service
  • Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
setenforce 0
  • Synchronize the system time
yum install ntpdate -y
ntpdate asia.pool.ntp.org
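
ntpdate performs only a one-time synchronization. If you want the clocks to stay in sync, one option (not part of the original steps; the schedule and file name are just an example) is a cron entry:

echo '*/30 * * * * root /usr/sbin/ntpdate asia.pool.ntp.org >/dev/null 2>&1' > /etc/cron.d/ntpdate-sync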
  • Configure the yum repositories
rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-8.noarch.rpm    # EPEL yum repository
yum install vim net-tools htop centos-release-openstack-liberty -y # OpenStack yum repository
  • Install and configure the database
yum install mariadb mariadb-server MySQL-python -y
sed -i "/\[mysqld\]$/a character-set-server = utf8" /etc/my.cnf
sed -i "/\[mysqld\]$/a init-connect = 'SET NAMES utf8'" /etc/my.cnf
sed -i "/\[mysqld\]$/a collation-server = utf8_general_ci" /etc/my.cnf
sed -i "/\[mysqld\]$/a innodb_file_per_table" /etc/my.cnf
sed -i "/\[mysqld\]$/a default-storage-engine = innodb" /etc/my.cnf
sed -i "/\[mysqld\]$/a bind-address = 172.16.201.9" /etc/my.cnf
systemctl enable mariadb.service # enable at boot
systemctl start mariadb.service # start the database
mysql_secure_installation # security hardening (required): set the root password, remove the test database, etc.
  • Install RabbitMQ
yum install -y rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl restart rabbitmq-server.service
rabbitmqctl add_user openstack pass # create the user
rabbitmqctl set_permissions openstack ".*" ".*" ".*" # grant permissions
  • Install the OpenStack client and configuration tools
yum install -y python-openstackclient openstack-utils
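
Before moving on to Keystone, a quick sanity check of the base services can save debugging later (a sketch; substitute your own passwords where prompted):

rabbitmqctl list_users                      # should include the openstack user
rabbitmqctl list_permissions                # openstack should have ".*" ".*" ".*"
mysql -u root -p -e "SELECT VERSION();"     # confirms MariaDB accepts connections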

3. OpenStack identity service (Keystone) configuration

  • Create the database and grant user privileges

    Connect to the database with mysql -u root -p and run:
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'172.16.201.%' IDENTIFIED BY 'pass';
  • Install packages
yum install openstack-keystone httpd mod_wsgi  memcached python-memcached -y
  • Configure Keystone

    Edit /etc/keystone/keystone.conf; the settings to add or change are shown below:
[DEFAULT]
...
admin_token = openstack
[database]
...
connection = mysql://keystone:pass@controller/keystone
[memcache]
...
servers = controller:11211
[token]
...
provider = uuid
driver = memcache
[revoke]
...
driver = sql

Alternatively, apply the same settings with openstack-config:

openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token openstack
openstack-config --set /etc/keystone/keystone.conf database connection mysql://keystone:pass@controller/keystone
openstack-config --set /etc/keystone/keystone.conf memcache servers controller:11211
openstack-config --set /etc/keystone/keystone.conf token provider uuid
openstack-config --set /etc/keystone/keystone.conf token driver memcache
openstack-config --set /etc/keystone/keystone.conf revoke driver sql
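
The examples above use the literal string openstack as admin_token. A common alternative, recommended by the upstream install guides, is to generate a random value and use it both here and for OS_TOKEN later:

openssl rand -hex 10   # use the output as the admin_token value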
  • Configure Apache
sed -i "s/#ServerName www.example.com:80/ServerName controller/" /etc/httpd/conf/httpd.conf

Create the Apache configuration file for the Keystone WSGI services:

cat > /etc/httpd/conf.d/wsgi-keystone.conf << OFF
Listen 5000
Listen 35357
<VirtualHost *:5000>
WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-public
WSGIScriptAlias / /usr/bin/keystone-wsgi-public
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>

<VirtualHost *:35357>
WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}
WSGIProcessGroup keystone-admin
WSGIScriptAlias / /usr/bin/keystone-wsgi-admin
WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
<IfVersion >= 2.4>
ErrorLogFormat "%{cu}t %M"
</IfVersion>
ErrorLog /var/log/httpd/keystone-error.log
CustomLog /var/log/httpd/keystone-access.log combined
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
OFF
  • Start the services
systemctl enable memcached.service
systemctl start memcached.service
systemctl enable httpd.service
systemctl start httpd.service
  • Initialize the database
su -s /bin/sh -c "keystone-manage db_sync" keystone
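
To confirm the sync populated the schema, you can list the tables it created (a sketch; it assumes the grant above matches the controller's own address):

mysql -h controller -u keystone -ppass keystone -e "SHOW TABLES;"   # should list the keystone schema tables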
  • Create the service and endpoints

    Set temporary environment variables:
export OS_TOKEN=openstack
export OS_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3

Create the identity service and the basic projects, users, and roles:

openstack service create --name keystone --description "OpenStack Identity" identity
openstack endpoint create --region RegionOne identity public http://controller:5000/v2.0
openstack endpoint create --region RegionOne identity internal http://controller:5000/v2.0
openstack endpoint create --region RegionOne identity admin http://controller:35357/v2.0
openstack project create --domain default --description "Admin Project" admin
openstack user create admin --domain default --password redhat
openstack role create admin
openstack role add --project admin --user admin admin
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create demo --domain default --password demo
openstack role create user
openstack role add --project demo --user demo user

Create credential scripts for convenient use later:

cat > /root/admin-openrc.sh << OFF
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=admin
export OS_TENANT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=redhat
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
OFF

cat > /root/demo-openrc.sh << OFF
export OS_PROJECT_DOMAIN_ID=default
export OS_USER_DOMAIN_ID=default
export OS_PROJECT_NAME=demo
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
OFF
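
To verify Keystone end to end, unset the bootstrap variables and request a token with the admin credentials (a sketch):

unset OS_TOKEN OS_URL
source /root/admin-openrc.sh
openstack token issue   # should return a token id and expiry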

4. OpenStack image service (Glance) configuration

  • Create the database and grant user privileges
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'172.16.201.%' IDENTIFIED BY 'pass';
  • Create the service and endpoints

    In Keystone, create the glance user and image service, and create the related endpoints:
openstack user create glance --domain default --password pass
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image service" image
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
  • Install packages
yum install openstack-glance python-glance python-glanceclient -y
  • Configure glance-api

    Edit /etc/glance/glance-api.conf:
[database]
...
connection = mysql://glance:pass@controller/glance
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = pass
[paste_deploy]
...
flavor = keystone
[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[DEFAULT]
...
notification_driver = noop
verbose = True

Or apply the same settings with openstack-config:

openstack-config --set /etc/glance/glance-api.conf database  connection mysql://glance:pass@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password pass
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/
openstack-config --set /etc/glance/glance-api.conf DEFAULT notification_driver noop
openstack-config --set /etc/glance/glance-api.conf DEFAULT verbose True
  • Configure glance-registry

Edit /etc/glance/glance-registry.conf:

[database]
...
connection = mysql://glance:pass@controller/glance
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = glance
password = pass
[paste_deploy]
...
flavor = keystone
[DEFAULT]
...
notification_driver = noop
verbose = True

Or with openstack-config:

openstack-config --set /etc/glance/glance-registry.conf database connection mysql://glance:pass@controller/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password pass
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-registry.conf DEFAULT notification_driver noop
openstack-config --set /etc/glance/glance-registry.conf DEFAULT verbose True
  • Initialize the database
su -s /bin/sh -c "glance-manage db_sync" glance
  • Start the services
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
  • Verify

    Add the Glance API version to the credential scripts:
cd
echo "export OS_IMAGE_API_VERSION=2" | tee -a admin-openrc.sh demo-openrc.sh
source admin-openrc.sh

Download a test image:

 wget http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Upload the image:

glance image-create --name "cirros"   --file /root/cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare   --visibility public --progress

List the images:

openstack image list

5. OpenStack compute service (Nova) installation on the controller node

  • Create the database

CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'172.16.201.%' IDENTIFIED BY 'pass';
  • Create the service and endpoints
source admin-openrc.sh
openstack user create nova --domain default --password pass
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2/%\(tenant_id\)s
  • Install packages
yum install openstack-nova-api openstack-nova-cert  openstack-nova-conductor openstack-nova-console  openstack-nova-novncproxy openstack-nova-scheduler  python-novaclient -y
  • Configure Nova

    Edit /etc/nova/nova.conf:
[database]
...
connection = mysql://nova:pass@controller/nova
[DEFAULT]
...
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 172.16.201.9
verbose = True
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
enabled_apis = osapi_compute,metadata
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
enabled = True
novncproxy_base_url = http://172.16.201.9:6080/vnc_auto.html
[glance]
host = controller
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_rabbit]
...
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = pass
[keystone_authtoken]
...
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = pass

Or run the following commands:

openstack-config --set /etc/nova/nova.conf database connection mysql://nova:pass@controller/nova
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password pass
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password pass
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.16.201.9
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf glance host controller
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
  • Sync the database
su -s /bin/sh -c "nova-manage db sync" nova
  • Start the services
systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

6. OpenStack networking service (Neutron) installation

  • Create the database and grant user privileges

CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'172.16.201.%' IDENTIFIED BY 'pass';
  • Create the service and endpoints
openstack user create neutron --domain default --password pass
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
  • Install packages
yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge python-neutronclient iptables ipset
  • Configure the Neutron configuration file

    Edit /etc/neutron/neutron.conf and change the following:
[DEFAULT]
core_plugin = ml2
service_plugins =
rpc_backend = rabbit
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = pass
[database]
connection = mysql://neutron:pass@controller/neutron
[nova]
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = nova
password = pass
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = pass

Or apply the same settings with openstack-config:

openstack-config --set /etc/neutron/neutron.conf database connection mysql://neutron:pass@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password pass
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password pass
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes True
openstack-config --set /etc/neutron/neutron.conf DEFAULT nova_url http://controller:8774/v2
openstack-config --set /etc/neutron/neutron.conf nova auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf nova auth_plugin password
openstack-config --set /etc/neutron/neutron.conf nova project_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova user_domain_id default
openstack-config --set /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set /etc/neutron/neutron.conf nova project_name service
openstack-config --set /etc/neutron/neutron.conf nova username nova
openstack-config --set /etc/neutron/neutron.conf nova password pass
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
  • Configure the Modular Layer 2 (ML2) plug-in

    Edit /etc/neutron/plugins/ml2/ml2_conf.ini and change the following:
[ml2]
type_drivers = flat
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = public
[securitygroup]
enable_ipset = True

Or with openstack-config:

openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks public
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset True
  • Linux bridge agent

    Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini:
[linux_bridge]
physical_interface_mappings = public:em2
[vxlan]
enable_vxlan = False
[agent]
prevent_arp_spoofing = True
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Or with openstack-config:

openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings public:em2  # note: em2 is the physical NIC being bridged
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
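
The physical_interface_mappings value must reference a real NIC on this host; em2 is specific to this environment. A quick way to confirm the interface name before applying the mapping (a sketch):

ip -o link show   # pick the interface attached to the 172.16.202.0/24 provider network and substitute it for em2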
  • DHCP agent

    Edit /etc/neutron/dhcp_agent.ini:
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
verbose = True

Or with openstack-config:

openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver neutron.agent.linux.interface.BridgeInterfaceDriver
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT verbose True
  • metadata agent

    Edit /etc/neutron/metadata_agent.ini:
[DEFAULT]
auth_url = http://controller:35357
auth_region = RegionOne
auth_uri = http://controller:5000
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = neutron
password = pass
nova_metadata_ip = controller
metadata_proxy_shared_secret = neutron
verbose = True

Or with openstack-config:

openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_uri http://controller:5000
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_url http://controller:35357
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_region RegionOne
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT auth_plugin password
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT user_domain_id default
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT project_name service
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT username neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT password pass
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_ip controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret neutron
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT verbose True
  • Configure Nova to use Neutron

    Add the following to the Nova configuration file (/etc/nova/nova.conf):
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
region_name = RegionOne
project_name = service
username = neutron
password = pass
service_metadata_proxy = True
metadata_proxy_shared_secret = neutron

Or with openstack-config:

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password pass
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy True
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret neutron
  • Initialize the database
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
  • Start the services
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
  • Restart the Nova API service
systemctl restart openstack-nova-api.service
  • Verify the configuration:
[root@controller ~]# nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller | internal | enabled | up    | 2016-09-06T01:06:30.000000 | -               |
| 3  | nova-scheduler   | controller | internal | enabled | up    | 2016-09-06T01:06:30.000000 | -               |
| 4  | nova-consoleauth | controller | internal | enabled | up    | 2016-09-06T01:06:30.000000 | -               |
| 5  | nova-cert        | controller | internal | enabled | up    | 2016-09-06T01:06:31.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
[root@controller ~]# neutron agent-list
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
| 7e2983e4-4e1d-4f57-aa0a-b94e957a5aee | Linux bridge agent | controller | :-)   | True           | neutron-linuxbridge-agent |
| df95805f-2601-45a9-84e5-d19bbbb6d184 | Metadata agent     | controller | :-)   | True           | neutron-metadata-agent    |
| e97e5690-0190-4e79-a8cd-fa9162cb4305 | DHCP agent         | controller | :-)   | True           | neutron-dhcp-agent        |
+--------------------------------------+--------------------+------------+-------+----------------+---------------------------+
  • Create the provider network

    This is a flat network; it is created by the administrator, and tenants cannot create networks themselves.
neutron net-create public --shared --provider:physical_network public --provider:network_type flat
Created a new network:
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | True |
| id | 6d3ef00d-3e39-47ee-a7f5-7c2dda763eb4 |
| mtu | 0 |
| name | public |
| port_security_enabled | True |
| provider:network_type | flat |
| provider:physical_network | public |
| provider:segmentation_id | |
| router:external | False |
| shared | True |
| status | ACTIVE |
| subnets | |
| tenant_id | fb050cf19ec847fa9b54455f3d3144c5 |
+---------------------------+--------------------------------------+

Create a subnet:

 neutron subnet-create public 172.16.202.0/24 --name public  --allocation-pool start=172.16.202.100,end=172.16.202.250 --dns-nameserver 233.5.5.5 --gateway 172.16.202.1
Created a new subnet:
+-------------------+------------------------------------------------------+
| Field | Value |
+-------------------+------------------------------------------------------+
| allocation_pools | {"start": "172.16.202.100", "end": "172.16.202.250"} |
| cidr | 172.16.202.0/24 |
| dns_nameservers | 233.5.5.5 |
| enable_dhcp | True |
| gateway_ip | 172.16.202.1 |
| host_routes | |
| id | 732bdb5e-ca1b-45b2-a68b-b4a1f509dec7 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | public |
| network_id | 6d3ef00d-3e39-47ee-a7f5-7c2dda763eb4 |
| subnetpool_id | |
| tenant_id | fb050cf19ec847fa9b54455f3d3144c5 |
+-------------------+------------------------------------------------------+
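
A quick check that the network and subnet exist as expected (a sketch):

neutron net-list
neutron subnet-list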

7. OpenStack Horizon (dashboard) installation

yum install -y openstack-dashboard

Edit /etc/openstack-dashboard/local_settings:

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', ]
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',
    }
}
TIME_ZONE = "Asia/Shanghai"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_ipv6': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': True,
}

Restart the services:

systemctl restart httpd.service memcached.service

Finally, open http://172.16.201.9/dashboard in a browser.

Log in with the account admin and password redhat.
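
If the login page does not come up, a quick reachability check from the controller can help narrow things down (a sketch, not part of the original steps):

curl -I http://controller/dashboard/   # expect HTTP 200 or a redirect to the login page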

8. Compute node: Nova compute service installation

  • Install the compute service
yum install -y openstack-nova-compute sysfsutils openstack-utils
  • Configure
openstack-config --set /etc/nova/nova.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/nova/nova.conf oslo_messaging_rabbit rabbit_password pass
openstack-config --set /etc/nova/nova.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password pass
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.16.201.8
openstack-config --set /etc/nova/nova.conf DEFAULT network_api_class nova.network.neutronv2.api.API
openstack-config --set /etc/nova/nova.conf DEFAULT security_group_api neutron
openstack-config --set /etc/nova/nova.conf DEFAULT linuxnet_interface_driver nova.network.linux_net.NeutronLinuxBridgeInterfaceDriver
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf vnc enabled True
openstack-config --set /etc/nova/nova.conf vnc vncserver_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc vncserver_proxyclient_address '$my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://172.16.201.9:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance host controller
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf DEFAULT verbose True
openstack-config --set /etc/nova/nova.conf libvirt virt_type kvm
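
virt_type kvm assumes the compute node exposes hardware virtualization. A quick check, with the usual fallback if it does not (a sketch):

egrep -c '(vmx|svm)' /proc/cpuinfo                                   # 0 means no hardware virtualization support
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu   # only if the count above is 0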

9. Compute node: Neutron networking service installation

yum install openstack-neutron openstack-neutron-linuxbridge iptables ipset
  • Configure
openstack-config --set /etc/neutron/neutron.conf DEFAULT rpc_backend rabbit
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_host controller
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_userid openstack
openstack-config --set /etc/neutron/neutron.conf oslo_messaging_rabbit rabbit_password pass
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:35357
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_plugin password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_id default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password pass
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set /etc/neutron/neutron.conf DEFAULT verbose True
  • Configure the Linux bridge agent
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings public:em2
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan False
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini agent prevent_arp_spoofing True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group True
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
  • Configure Nova to use Neutron
openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:35357
openstack-config --set /etc/nova/nova.conf neutron auth_plugin password
openstack-config --set /etc/nova/nova.conf neutron project_domain_id default
openstack-config --set /etc/nova/nova.conf neutron user_domain_id default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password pass
  • Start the services
systemctl enable libvirtd.service neutron-linuxbridge-agent.service openstack-nova-compute.service
systemctl start libvirtd.service neutron-linuxbridge-agent.service openstack-nova-compute.service
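
Back on the controller, a quick check that the compute node registered correctly (a sketch):

source /root/admin-openrc.sh
nova service-list      # should now also list nova-compute on compute01
neutron agent-list     # should now also list a Linux bridge agent on compute01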

At this point, the basic OpenStack environment is complete and instances can be created from the dashboard. The next section describes how to set up Cinder block storage.

10. Block storage service (Cinder) installation and deployment

Perform the following steps on the controller node:

  • Create the database and grant user privileges
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'172.16.201.%' IDENTIFIED BY 'pass';

  • Create the user, services, and endpoints

openstack user create --password-prompt cinder
openstack role add --project service --user cinder admin
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack endpoint create --region RegionOne cinder public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne cinder internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne cinder admin http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne cinderv2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne cinderv2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne cinderv2 admin http://controller:8776/v2/%\(tenant_id\)s

  • Install packages

yum install openstack-cinder python-cinderclient python-oslo-db
  • Edit the configuration file

    Edit /etc/cinder/cinder.conf:
[DEFAULT]
auth_strategy = keystone
verbose = True
my_ip = 172.16.201.9
rpc_backend = rabbit
glance_host = controller
[database]
connection = mysql://cinder:pass@controller/cinder
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = pass
[oslo_concurrency]
lock_path = /var/lock/cinder
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = pass

  • Sync the database

su -s /bin/sh -c "cinder-manage db sync" cinder
  • Start the services
 systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

Perform the following steps on the storage node:

Since machines are limited, the compute node also serves as the storage node.

Configure LVM and create the backing storage:

  • Install packages
yum install qemu lvm2
  • Start the services
 systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

  • Create the storage volumes

pvcreate /dev/sdb1  # create the physical volume
vgcreate cinder-volumes /dev/sdb1 # create the volume group
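
To confirm that the physical volume and volume group were created (a sketch):

pvs   # should list /dev/sdb1
vgs   # should list cinder-volumes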
  • Configure the LVM device filter

    Edit /etc/lvm/lvm.conf:
In the devices section, replace the default filter
filter = [ "a/.*/" ]
with:
filter = [ "a/sdb1/", "a/sdb/", "r/.*/"]

Cinder storage node configuration

  • Install packages
yum install openstack-cinder targetcli python-oslo-db python-oslo-log MySQL-python
  • Configure

    Edit /etc/cinder/cinder.conf:
[DEFAULT]
my_ip = 172.16.201.8
glance_host = controller
auth_strategy = keystone
rpc_backend = rabbit
enabled_backends = lvm
[database]
connection = mysql://cinder:pass@controller/cinder
[oslo_concurrency]
lock_path = /var/lock/cinder
[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = openstack
rabbit_password = pass
[lvm]
...
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
  • Start the services
 systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service
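
From the controller, confirm that the volume service registered (a sketch):

source /root/admin-openrc.sh
cinder service-list   # cinder-scheduler on controller and cinder-volume on compute01@lvm should show state up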
  • Issues:
    1. Volumes cannot be attached

      After finishing the configuration, volumes could be created but could not be attached to instances. The log /var/log/cinder/volume.log showed that /var/lock/cinder had not been created:
2016-09-19 20:28:13.238 5561 ERROR oslo_messaging.rpc.dispatcher     os.makedirs(path)
2016-09-19 20:28:13.238 5561 ERROR oslo_messaging.rpc.dispatcher File "/usr/lib64/python2.7/os.py", line 157, in makedirs
2016-09-19 20:28:13.238 5561 ERROR oslo_messaging.rpc.dispatcher mkdir(name, mode)
2016-09-19 20:28:13.238 5561 ERROR oslo_messaging.rpc.dispatcher OSError: [Errno 13] Permission denied: '/var/lock/cinder'

Create the directory and set its ownership:

mkdir /var/lock/cinder
chown -R cinder.cinder /var/lock/cinder
Note that anything created under /var/lock is temporary and is removed after a reboot.
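
Because /var/lock points at a tmpfs on CentOS 7, one way to recreate the directory automatically at boot (not part of the original steps; the file name is arbitrary) is a systemd-tmpfiles entry:

echo 'd /var/lock/cinder 0755 cinder cinder -' > /etc/tmpfiles.d/cinder-lock.conf
systemd-tmpfiles --create /etc/tmpfiles.d/cinder-lock.conf   # apply it immediately without rebooting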