OpenStack Deployment Notes (Network-Revised Edition)

Date: 2021-11-24 20:03:02

OpenStack environment deployment (references: http://www.cnblogs.com/kevingrace/p/5707003.html and https://docs.openstack.org/mitaka/zh_CN)

Note: before editing any service's configuration file, make a backup copy first, so that mistakes can be reverted instead of hacked around!

1. Operating system: CentOS 7

2. Node count: four (for now)

1) Controller node: controller1 — IP: 192.168.2.201, public IP: 124.65.181.122

2) Compute node: nova1 — IP: 192.168.2.202, public IP: 124.65.181.122

3) Block storage node: cinder — IP: 192.168.2.222

4) Shared file service node: manila — IP: 192.168.2.223

3. Hostname resolution; disable iptables and SELinux (all nodes)

Hostname resolution: vi /etc/hosts

192.168.2.201 controller1

192.168.2.202 nova1

192.168.2.222 cinder1

192.168.2.223 manila1

Note: alternatively, edit the hosts file on controller1 and push it to the other nodes one by one: scp /etc/hosts <IP>:/etc/hosts
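The push step above can be scripted. A dry-run sketch that only prints the scp commands for the three other nodes from this deployment (swap `echo` for direct execution once SSH connectivity is confirmed):

```shell
# generate_push_cmds prints one scp command per node (dry run).
generate_push_cmds() {
    for node in 192.168.2.202 192.168.2.222 192.168.2.223; do
        # prints: scp /etc/hosts <node>:/etc/hosts
        echo "scp /etc/hosts ${node}:/etc/hosts"
    done
}
generate_push_cmds
```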

Disable SELinux

Permanently: vi /etc/selinux/config

SELINUX=disabled

Temporarily: setenforce 0

Disable the firewall

Permanently: systemctl disable firewalld.service

Temporarily: systemctl stop firewalld.service

4. Configure the Network Time Protocol (NTP)

Controller node:

yum install chrony

Edit: vi /etc/chrony.conf

allow 192.168.2.0/24 # subnet allowed to synchronize time with this server

systemctl enable chronyd.service # start at boot

systemctl start chronyd.service

timedatectl set-timezone Asia/Shanghai # set the timezone

timedatectl status # check

Other nodes:

yum install chrony

Edit: vi /etc/chrony.conf

server controller1 iburst # hostname/IP of the time server

systemctl enable chronyd.service # start at boot

systemctl start chronyd.service

timedatectl set-timezone Asia/Shanghai # set the timezone

chronyc sources

Verify time synchronization

Run the same command on every node: chronyc sources

5. Upgrade packages and the system (all nodes)

yum install centos-release-openstack-mitaka

Upgrade packages: yum upgrade # if a new kernel is installed, reboot to use it

Client: yum install python-openstackclient

Security policy: yum install openstack-selinux

6. Database — MariaDB (controller node)

Install packages: yum install mariadb mariadb-server MySQL-python

Copy the sample config: cp /usr/share/mariadb/my-medium.cnf /etc/my.cnf # or cp /usr/share/mysql/my-medium.cnf /etc/my.cnf

Edit: vi /etc/my.cnf

[mysqld]

default-storage-engine = innodb

innodb_file_per_table

collation-server = utf8_general_ci

init-connect = 'SET NAMES utf8'

character-set-server = utf8

Enable at boot: systemctl enable mariadb.service

Symlink created: ln -s /usr/lib/systemd/system/mariadb.service /etc/systemd/system/multi-user.target.wants/mariadb.service

Initialize the database: mysql_install_db --datadir="/var/lib/mysql" --user="mysql"

Start the database: systemctl start mariadb.service

Set the root password and secure the installation: mysql_secure_installation

Now log in to the database and create a database for each core service, then grant the matching privileges:

CREATE DATABASE keystone; # identity service

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone123';

GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone123';

CREATE DATABASE glance; # image service

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance123';

GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance123';

CREATE DATABASE nova; # compute service

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova123';

GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova123';

CREATE DATABASE nova_api; # compute API database, referenced later by [api_database]

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova123';

GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova123';

CREATE DATABASE neutron; # network service

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron123';

GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron123';

CREATE DATABASE cinder; # block storage service

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY 'cinder123';

GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY 'cinder123';

# Passwords follow the <service>123 convention used by the connection strings later in this guide.

Refresh privileges: flush privileges;

Check: show databases;
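The CREATE/GRANT statements above all follow one pattern, so they can be generated with a loop instead of typed by hand. A sketch that only emits the SQL, assuming the `<service>123` password convention used in these notes (replace with real secrets in production); pipe its output into `mysql -u root -p`:

```shell
# emit_grants prints the CREATE/GRANT SQL for every service database.
emit_grants() {
    for svc in keystone glance nova nova_api neutron cinder; do
        # nova_api reuses the nova credentials
        user=${svc%_api}
        echo "CREATE DATABASE IF NOT EXISTS ${svc};"
        echo "GRANT ALL PRIVILEGES ON ${svc}.* TO '${user}'@'localhost' IDENTIFIED BY '${user}123';"
        echo "GRANT ALL PRIVILEGES ON ${svc}.* TO '${user}'@'%' IDENTIFIED BY '${user}123';"
    done
}
emit_grants
```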

7. Message queue — RabbitMQ (controller node)

Install the package: yum install rabbitmq-server

RabbitMQ listens on port 5672

systemctl enable rabbitmq-server.service

Symlink created:

ln -s  /usr/lib/systemd/system/rabbitmq-server.service /etc/systemd/system/multi-user.target.wants/rabbitmq-server.service

Start: systemctl start rabbitmq-server.service

# If startup fails, run the binary directly from /usr/sbin with ./rabbitmq-server to see the detailed error

# A common cause: the machine's hostname must match its entry in /etc/hosts, otherwise rabbitmq-server cannot resolve itself

Note: to verify it started, check the listening ports: netstat -anpt

Add an openstack user and password: rabbitmqctl add_user openstack openstack123 # openstack123 is a password of your choosing

Give the openstack user its permissions: rabbitmqctl set_permissions openstack ".*" ".*" ".*" # allow configure, write, and read access

List the available plugins: rabbitmq-plugins list

Enable a plugin: rabbitmq-plugins enable rabbitmq_management # rabbitmq_management provides the web management UI

Restart the service: systemctl restart rabbitmq-server.service

Check the port: lsof -i:15672

Test: browse to http://192.168.2.201:15672 — the default username and password are both guest

8. Identity service — Keystone (ports 5000 and 35357) # run on the controller node

1) Install packages: yum install openstack-keystone httpd mod_wsgi memcached python-memcached

Note: memcached caches tokens for the identity service

2) First generate a random value: openssl rand -hex 10

3) Back up the keystone config so mistakes are easy to trace: cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak

Edit vi /etc/keystone/keystone.conf:

[DEFAULT]

admin_token = b6f89e3f5d766bb71bf8 # the random value generated above

token_format = UUID

[database]

connection = mysql+pymysql://keystone:keystone123@controller1/keystone

[memcache]

servers = controller1:11211

[token]

provider = uuid

driver =  keystone.token.persistence.backends.sql.Token

Note: by default Keystone stores tokens in the SQL database, with a default lifetime of one day (24h). Every command (request) issued by every OpenStack component must be validated with a token, and every access creates a new one, so the token table grows very quickly. Over time the stale rows pile up — in an enterprise private cloud they can reach tens or hundreds of thousands — and all those dead tokens slow down SQL queries against the table and degrade performance. Either clean the token table periodically with a scheduled script, or store tokens in memcached and let its expiry mechanism drop unused entries automatically. (This deployment uses the second approach.)
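For reference, the first approach mentioned above (a periodic cleanup script) does not need to be hand-written SQL: keystone ships a `keystone-manage token_flush` command that deletes expired rows from the token table. A sketch of an hourly cron entry, written to /tmp here as a dry run (the real file would go under /etc/cron.d/):

```shell
# Write an /etc/cron.d-style entry (minute hour dom month dow user command)
# that flushes expired keystone tokens once an hour.
cat <<'EOF' > /tmp/keystone-token-flush.cron
0 * * * * keystone /bin/sh -c '/usr/bin/keystone-manage token_flush' >/dev/null 2>&1
EOF
cat /tmp/keystone-token-flush.cron
```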

4) Create the database tables with the sync command: su -s /bin/sh -c "keystone-manage db_sync" keystone

Check the tables: mysql -h 192.168.2.201 -u keystone -pkeystone123 # the inline password logs you straight into the keystone database

# echo -n redhat | openssl md5 — generates an MD5 password hash

#update users set  passwd='e2798af12a7a0f4f70b4d69efbc25f4d' where userid = '1';

5) Start Apache and memcached

Start memcached:

systemctl enable memcached

Note: if this prints "Created symlink from /etc/systemd/system/multi-user.target.wants/memcached.service to /usr/lib/systemd/system/memcached.service.", the enable symlink was created; then run the command again.

systemctl start memcached # start memcached

### /usr/bin/memcached -d -uroot # fall back to this if port 11211 does not open

To verify, check that the default port 11211 is listening

6) Configure httpd: edit /etc/httpd/conf/httpd.conf

ServerName controller1:80

Create /etc/httpd/conf.d/wsgi-keystone.conf with the following content:

Listen 5000

Listen 35357

<VirtualHost *:5000>

WSGIDaemonProcess keystone-public processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-public

WSGIScriptAlias / /usr/bin/keystone-wsgi-public

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

ErrorLogFormat "%{cu}t %M"

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>

Require all granted

</Directory>

</VirtualHost>

<VirtualHost *:35357>

WSGIDaemonProcess keystone-admin processes=5 threads=1 user=keystone group=keystone display-name=%{GROUP}

WSGIProcessGroup keystone-admin

WSGIScriptAlias / /usr/bin/keystone-wsgi-admin

WSGIApplicationGroup %{GLOBAL}

WSGIPassAuthorization On

ErrorLogFormat "%{cu}t %M"

ErrorLog /var/log/httpd/keystone-error.log

CustomLog /var/log/httpd/keystone-access.log combined

<Directory /usr/bin>

Require all granted

</Directory>

</VirtualHost>

Start httpd:

systemctl enable httpd

systemctl start httpd

Check: netstat -lntup | grep httpd # or list every open port with netstat -anpt

7) Create the Keystone users

Temporarily export the admin_token environment variables so users can be created:

Set the auth token: export OS_TOKEN=b6f89e3f5d766bb71bf8 # must be the random value written into /etc/keystone/keystone.conf

Set the endpoint URL: export OS_URL=http://controller1:35357/v3

Set the identity API version: export OS_IDENTITY_API_VERSION=3

8) Create the service entity for the identity service: openstack service create --name keystone --description "OpenStack Identity" identity

(The output shows entity ID e6aa9c8d2e504978a77d09d09d8213d4, name keystone, type identity — informational only)

# If this errors, re-run keystone-manage db_sync and retry

9) Create the identity service API endpoints (public, internal, admin):

openstack endpoint create --region RegionOne identity public http://controller1:5000/v3

openstack endpoint create --region RegionOne identity internal http://controller1:5000/v3

openstack endpoint create --region RegionOne identity admin http://controller1:5000/v3
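The three commands above follow one template, and the same public/internal/admin triple repeats later for glance, nova, neutron, and cinder. A dry-run sketch (the function name is my own) that prints the commands for any service type and URL; remove the `echo` to run them for real:

```shell
# make_endpoints prints the three endpoint-create commands for a service.
make_endpoints() {
    service_type=$1
    url=$2
    for variant in public internal admin; do
        echo "openstack endpoint create --region RegionOne ${service_type} ${variant} ${url}"
    done
}
make_endpoints identity http://controller1:5000/v3
```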

View the endpoint list:

10) Create the 'default' domain: openstack domain create --description "Default Domain" default

View the domain list:

11) Create the admin project, admin user, and admin role; then add the ``admin`` role to the admin project and user

Project: openstack project create --domain default --description "Admin Project" admin

User: openstack user create --domain default --password-prompt admin # you will be prompted for a password; here it is set to admin123

Role: openstack role create admin

Add: openstack role add --project admin --user admin admin # --project admin names the project, --user admin names the user

Note: a rough sketch of the OpenStack logical model ======================================================

1. Everything below lives inside a domain; the domain is the top-level container

2. admin is the project for administrative tasks; demo is the project for routine tasks; service is the project holding each service's dedicated user

3. The service project contains one service entity per module

4. Each module registers three endpoint variants: public, internal, and admin

5. Apart from the service project's dedicated users, each project generally maps to one user and one role

6. Each module's user is named after the OpenStack component (keystone, glance, nova, and so on)

7. Each module's user generally maps to one role

8. The basic hierarchy in short: domain → project → user → role

Other useful commands:

View domains: openstack domain list

View API endpoints: openstack endpoint list

View projects: openstack project list

View users: openstack user list

View roles: openstack role list

Show a config file without comments or blank lines: grep -v "^#" <config-file> | grep -v "^$"

( Some common problems: http://www.cnblogs.com/kevingrace/p/5811167.html )

Troubleshooting: if listing resources produces one of the following

1) [root@controller1 ~]# openstack project list

Could not find requested endpoint in Service Catalog. — or

__init__() got an unexpected keyword argument 'token' — or

The resource could not be found. (HTTP 404)

Redo the token authentication (first run: unset OS_TOKEN OS_URL)

12) Create the service project: openstack project create --domain default --description "Service Project" service

13) Create the demo project: openstack project create --domain default --description "Demo Project" demo

View the project list:

Create the demo user: openstack user create --domain default --password-prompt demo # you will be prompted for a password; here it is set to demo123

Create the user role: openstack role create user

Add: openstack role add --project demo --user demo user

View the user list:

View the role list:

14) Verify by obtaining a token (keystone is configured correctly only if this succeeds). First run: unset OS_TOKEN OS_URL

As the admin user, request an auth token: openstack --os-auth-url http://controller1:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue

15) Create environment-variable scripts:

admin-openrc:

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=admin

export OS_USERNAME=admin

export OS_PASSWORD=admin123

export OS_AUTH_URL=http://controller1:35357/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

demo-openrc:

export OS_PROJECT_DOMAIN_NAME=default

export OS_USER_DOMAIN_NAME=default

export OS_PROJECT_NAME=demo

export OS_USERNAME=demo

export OS_PASSWORD=demo123

export OS_AUTH_URL=http://controller1:5000/v3

export OS_IDENTITY_API_VERSION=3

export OS_IMAGE_API_VERSION=2

Test switching to the admin environment: . admin-openrc

Test switching to the demo environment: . demo-openrc
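The admin variable block above can be written to its file in one step with a heredoc; a sketch using the admin-openrc file name that the rest of this guide sources (the password is the placeholder chosen in these notes):

```shell
# Write the admin credentials file later sourced with ". admin-openrc".
cat <<'EOF' > admin-openrc
export OS_PROJECT_DOMAIN_NAME=default
export OS_USER_DOMAIN_NAME=default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin123
export OS_AUTH_URL=http://controller1:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
grep OS_USERNAME admin-openrc
```

The demo-openrc file is written the same way with the demo values.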

 

Image service — Glance (ports: API 9292; registry 9191)

1) Install packages: yum install openstack-glance python-glance python-glanceclient

2) Edit /etc/glance/glance-api.conf # back the file up before editing so you can recover from mistakes

[database]

connection = mysql+pymysql://glance:glance123@controller1/glance

[glance_store]

stores = file,http

default_store = file

filesystem_store_datadir = /var/lib/glance/images/

[keystone_authtoken]

auth_uri = http://controller1:5000

auth_url = http://controller1:35357

memcached_servers = controller1:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = glance123

[paste_deploy]

flavor = keystone

3) Edit /etc/glance/glance-registry.conf # back the file up before editing so you can recover from mistakes

[database]

connection = mysql+pymysql://glance:glance123@controller1/glance

[glance_store]

[keystone_authtoken]

auth_uri = http://controller1:5000

auth_url = http://controller1:35357

memcached_servers = controller1:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = glance

password = glance123

[paste_deploy]

flavor = keystone

Create the tables and initialize the database: su -s /bin/sh -c "glance-manage db_sync" glance # informational output can be ignored

Test: log in to the database and list the tables: mysql -h controller1 -uglance -pglance123

4) Switch environment variables: . admin-openrc

Create the glance user: openstack user create --domain default --password-prompt glance # here the glance password is set to glance123

View the user list:

Add the admin role to the glance user and service project: openstack role add --project service --user glance admin

Enable at boot: systemctl enable openstack-glance-api openstack-glance-registry

Start: systemctl start openstack-glance-api openstack-glance-registry

Confirm the services are listening on their ports: netstat -lnutp | grep 9191

5) Create the glance service entity: openstack service create --name glance --description "OpenStack Image service" image

View the entity list:

Create the image service API endpoints:

openstack endpoint create --region RegionOne image public http://controller1:9292

openstack endpoint create --region RegionOne image internal http://controller1:9292

openstack endpoint create --region RegionOne image admin http://controller1:9292

View the endpoint list:

6) Test

Download a source image: wget -q http://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img

Note: if the wget command is missing, install it: yum install wget -y

Upload: glance image-create --name "cirros" --file cirros-0.3.4-x86_64-disk.img --disk-format qcow2 --container-format bare --visibility public --progress

View the image list:

##### If the upload fails with a 500 error, run: su -s /bin/sh -c "glance-manage db_sync" glance

Compute service — Nova

Packages to install on the controller node:

yum install -y openstack-nova-api openstack-nova-cert openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler python-novaclient

Note: see the accompanying OpenStack technical reference document for what each package does.

On the controller node, edit /etc/nova/nova.conf (lines flagged below also apply when the controller doubles as a compute node)

[DEFAULT] # enable only the compute and metadata APIs

my_ip=192.168.2.201 # controller node IP

enabled_apis=osapi_compute,metadata

auth_strategy=keystone

allow_resize_to_same_host=true

firewall_driver=nova.virt.firewall.NoopFirewallDriver

network_api_class=nova.network.neutronv2.api.API

use_neutron=true

rpc_backend=rabbit

[api_database] # database connection

connection=mysql+pymysql://nova:nova123@controller1/nova_api

[database]

connection=mysql+pymysql://nova:nova123@controller1/nova

[glance] # location of the image service API

...

api_servers= http://controller1:9292

[keystone_authtoken] # identity service access

...

auth_uri=http://controller1:5000

auth_url = http://controller1:35357

memcached_servers = controller1:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = nova123

[libvirt]

...

virt_type=kvm # add this line if the controller also acts as a compute node

[neutron] # network settings

...

url=http://controller1:9696

auth_url = http://controller1:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron123

service_metadata_proxy = True

metadata_proxy_shared_secret = neutron

[oslo_messaging_rabbit] # message queue access

...

rabbit_host=controller1

rabbit_userid=openstack

rabbit_password=openstack123 # the password defined for the openstack rabbitmq user

[vnc] # VNC proxy settings

...

keymap=en-us # add if the controller also acts as a compute node

vncserver_listen=$my_ip

vncserver_proxyclient_address=$my_ip

novncproxy_base_url=http://124.65.181.122:6080/vnc_auto.html # add if the controller also acts as a compute node

Sync the compute databases:

su -s /bin/sh -c "nova-manage api_db sync" nova

su -s /bin/sh -c "nova-manage db sync" nova

Create the nova user: openstack user create --domain default --password-prompt nova # Note: here the password is set to nova123

View the user list:

Add the admin role to the nova user: openstack role add --project service --user nova admin

Start the nova-related services:

systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Create the nova service entity: openstack service create --name nova --description "OpenStack Compute" compute

View the entity list:

Create the compute service API endpoints:

openstack endpoint create --region RegionOne compute public http://controller1:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute internal http://controller1:8774/v2.1/%\(tenant_id\)s

openstack endpoint create --region RegionOne compute admin http://controller1:8774/v2.1/%\(tenant_id\)s

View the endpoint list:

Check:

Packages to install on the compute node: yum install -y openstack-nova-compute sysfsutils

On the compute node, edit /etc/nova/nova.conf

[DEFAULT]

my_ip=192.168.2.202 # compute node 1 IP

enabled_apis=osapi_compute,metadata

auth_strategy=keystone

firewall_driver=nova.virt.firewall.NoopFirewallDriver

network_api_class=nova.network.neutronv2.api.API

use_neutron=true

rpc_backend=rabbit

[api_database]

connection=mysql+pymysql://nova:nova123@controller1/nova_api

[database]

connection=mysql+pymysql://nova:nova123@controller1/nova

[glance]

api_servers= http://controller1:9292

[keystone_authtoken]

auth_uri=http://controller1:5000

auth_url = http://controller1:35357

memcached_servers = controller1:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = nova

password = nova123 # the nova service password defined earlier

[libvirt]

virt_type=qemu

[neutron]

url=http://controller1:9696

auth_url = http://controller1:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = neutron

password = neutron123 # the neutron service password defined earlier

[oslo_concurrency]

lock_path=/var/lib/nova/tmp

[oslo_messaging_rabbit]

rabbit_host=controller1

rabbit_userid=openstack

rabbit_password=openstack123

[vnc]

keymap=en-us

vncserver_listen=0.0.0.0 # listen on all addresses

vncserver_proxyclient_address=$my_ip

novncproxy_base_url=http://192.168.2.201:6080/vnc_auto.html # controller node IP

Start the services:

systemctl enable libvirtd openstack-nova-compute

systemctl start libvirtd openstack-nova-compute

Test that glance works: (resolved; details below)

Test that keystone works:

Network service — Neutron

Controller node install: yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

Compute node install: yum install -y openstack-neutron openstack-neutron-linuxbridge ebtables ipset

1) On the controller node, edit the following configuration files

1. Edit /etc/neutron/neutron.conf

[DEFAULT]

auth_strategy = keystone

core_plugin = ml2

service_plugins = router

allow_overlapping_ips = True

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

rpc_backend = rabbit

[database]

connection = mysql+pymysql://neutron:neutron123@controller1/neutron

[keystone_authtoken]

auth_uri = http://controller1:5000

auth_url = http://controller1:35357

memcached_servers = controller1:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = neutron

password = neutron123

[nova] # notify nova of network topology changes

auth_url = http://controller1:35357

auth_type = password

project_domain_name = default

user_domain_name = default

region_name = RegionOne

project_name = service

username = nova

password = nova123

[oslo_concurrency]

lock_path = /var/lib/neutron/tmp

[oslo_messaging_rabbit]

rabbit_host = controller1

rabbit_userid = openstack

rabbit_password = openstack123

2. Edit /etc/neutron/plugins/ml2/ml2_conf.ini

[ml2]

type_drivers = flat,vlan,vxlan

tenant_network_types = vxlan

mechanism_drivers = linuxbridge,l2population

extension_drivers = port_security # enable port security

[ml2_type_flat] # flat provider network configuration

flat_networks = provider

[ml2_type_vxlan]

vni_ranges = 1:1000

[securitygroup]

enable_ipset = true

3. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[linux_bridge]

physical_interface_mappings = provider:enp5s0 # physical NIC name

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

enable_security_group = true

[vxlan]

enable_vxlan = true

local_ip = 192.168.2.201

l2_population = true

4. Edit /etc/neutron/dhcp_agent.ini

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq

enable_isolated_metadata = true

5. Edit /etc/neutron/l3_agent.ini, adding:

[DEFAULT]

interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver

external_network_bridge =

6. Edit /etc/neutron/metadata_agent.ini

[DEFAULT]

nova_metadata_ip = controller1

metadata_proxy_shared_secret = neutron123

1) Create the plugin symlink: ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

2) Create the neutron user: openstack user create --domain default --password-prompt neutron # here the password is set to neutron123

View the user list:

3) Add the admin role to the neutron user: openstack role add --project service --user neutron admin

4) Populate the database: su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

5) Create the neutron service entity: openstack service create --name neutron --description "OpenStack Network" network

View the entity list:

6) Create the network service API endpoints:

openstack endpoint create --region RegionOne network public http://controller1:9696

openstack endpoint create --region RegionOne network internal http://controller1:9696

openstack endpoint create --region RegionOne network admin http://controller1:9696

View the endpoint list:

Restart the nova services and check (nova.conf now carries neutron-related settings, so the compute API must be restarted):

systemctl restart openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

Start the neutron services

Enable at boot: systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Start: systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Compute node configuration:

1. Edit /etc/neutron/neutron.conf # the file can be copied from controller1 to the compute node

[DEFAULT]

state_path = /var/lib/neutron

auth_strategy = keystone

core_plugin = ml2

service_plugins = router

notify_nova_on_port_status_changes = true

notify_nova_on_port_data_changes = true

nova_url = http://controller1:8774/v2.1

rpc_backend = rabbit

[database]

connection = mysql+pymysql://neutron:neutron123@controller1/neutron

[keystone_authtoken]

auth_uri = http://controller1:5000

auth_url = http://controller1:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

project_name = service

username = neutron

password = neutron123

admin_tenant_name = %SERVICE_TENANT_NAME%

admin_user = %SERVICE_USER%

admin_password = %SERVICE_PASSWORD%

[nova]

auth_url = http://controller1:35357

auth_plugin = password

project_domain_id = default

user_domain_id = default

region_name = RegionOne

project_name = service

username = nova

password = nova123

[oslo_concurrency]

lock_path = $state_path/lock

[oslo_messaging_rabbit]

rabbit_host = controller1

rabbit_port = 5672

rabbit_userid = openstack

rabbit_password = openstack123

2. Edit /etc/neutron/plugins/ml2/linuxbridge_agent.ini

[agent]

prevent_arp_spoofing = true

[linux_bridge]

physical_interface_mappings = provider:em1 # physical NIC name on the compute node; the linuxbridge agent uses physical_interface_mappings (bridge_mappings is an Open vSwitch option)

[securitygroup]

firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

enable_security_group = true

[vxlan]

enable_vxlan = false

7) Verify from the network:

Check that the neutron-server process started correctly:

Troubleshooting: if the controller node shows one of the following

1) [root@controller1 ~]# neutron agent-list

404-{u'error': {u'message': u'The resource could not be found.', u'code': 404, u'title': u'Not Found'}}

Neutron server returns request_ids: ['req-649eb926-7200-4a3d-ad91-b212ee5ef767']

Run: unset OS_TOKEN OS_URL # reset the token variables

2) [root@controller1 ~]# neutron agent-list

Unable to establish connection to http://controller1:9696/v2.0/agents.json

Restart the services: systemctl restart neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

Creating a virtual machine

1) Create the bridged network

Decide which project to create VMs in; here we choose admin: . admin-openrc (for demo, switch accordingly)

Run: neutron net-create flat --shared --provider:physical_network provider --provider:network_type flat # 'provider' matches the physical network name configured as provider:<NIC> in linuxbridge_agent.ini

Create the subnet: neutron subnet-create flat 192.168.2.0/24 --name flat-subnet --allocation-pool start=192.168.2.100,end=192.168.2.200 --dns-nameserver 192.168.2.1 --gateway 192.168.2.1

Note: use the host's internal gateway here; the DNS server and gateway can both be set to the host's internal IP, and 192.168.2.100-192.168.2.200 is the range of addresses handed out to VMs

View the subnet:

Note: how to delete a network you created

1) Check for routers: neutron router-list

2) Clear the router gateway: neutron router-gateway-clear <router-name> (from the router list, enter the router to clear)

3) Delete the router interface: neutron router-interface-delete <router-name> <interface> (the interface is the name given when it was created)

4) Delete the router: neutron router-delete <router-name>

5) Delete the subnet: neutron subnet-delete <subnet-name> (the subnet created alongside that router)

6) Delete the network: neutron net-delete <network-name>

Note: list networks: neutron net-list

List subnets: neutron subnet-list

List routers: neutron router-list

If there are no routers, just delete the subnet directly.
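The teardown order above (gateway → interface → router → subnet → network) can be scripted. A dry-run sketch that only prints the commands — the router/subnet/network names passed in below are hypothetical; confirm the real ones with the *-list commands first, then remove the `echo`s:

```shell
# teardown_cmds prints the neutron deletion commands in the safe order.
teardown_cmds() {
    router=$1; subnet=$2; net=$3
    echo "neutron router-gateway-clear ${router}"
    echo "neutron router-interface-delete ${router} ${subnet}"
    echo "neutron router-delete ${router}"
    echo "neutron subnet-delete ${subnet}"
    echo "neutron net-delete ${net}"
}
teardown_cmds demo-router flat-subnet flat
```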

Creating the virtual machine

1) Create a key pair

[root@controller1 ~]# . demo-openrc # this creates VMs under the demo account; to use the admin account, switch accordingly

[root@controller1 ~]# ssh-keygen -q -N ""

2) Add the public key to nova

[root@controller1 ~]#  nova keypair-add --pub-key /root/.ssh/id_rsa.pub mykey

3) Create security group rules

nova secgroup-add-rule default icmp -1 -1 0.0.0.0/0 # allow ping

nova secgroup-add-rule default tcp 22 22 0.0.0.0/0 # allow ssh connections
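The two rules above share one command shape, so they can be kept as data and replayed. A dry-run sketch that prints the nova commands from a protocol/port/CIDR table; remove the `echo` to apply them:

```shell
# print_secgroup_rules reads "proto from-port to-port cidr" rows and prints
# the matching nova secgroup-add-rule commands for the default group.
print_secgroup_rules() {
    while read -r proto from to cidr; do
        echo "nova secgroup-add-rule default ${proto} ${from} ${to} ${cidr}"
    done <<'EOF'
icmp -1 -1 0.0.0.0/0
tcp 22 22 0.0.0.0/0
EOF
}
print_secgroup_rules
```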

4) Create the virtual machine

List the available flavors:

List images:

List networks:

Create the VM: nova boot --flavor m1.tiny --image cirros --nic net-id=f3a7aa1e-9799-47cd-a1d4-fb1e4d191f2d --security-group default --key-name mykey hello-instance

Note: --flavor m1.tiny # the chosen flavor (instance type)

--image cirros # the image name, as uploaded earlier

--key-name mykey # the key pair name, as defined earlier

hello-instance # the instance name, freely chosen

List the instances:

Get the instance's console URL (open it in a browser to reach the login screen):

Log in via noVNC in a browser (Chrome):

Note: the cloud image's username is cirros and its default password is cubswin:) (shown on the console)

To delete an instance from the controller node: nova delete <ID> (the ID comes from the instance list)

You can also ssh from the controller node into the instance: ssh cirros@<IP> (the instance list shows the IP). If ssh fails, tighten the generated key file's permissions to 700. Log in with the default user first, then switch accounts with su once inside.

Other CentOS images: http://cloud.centos.org/centos/

Image used here: http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2

Reference on remote connectivity: https://sanwen8.cn/p/171lmWW.html

Prefer ssh from the controller node. CentOS 6.x images generally default to the "centos-user" user; CentOS 7.x images default to "centos". Because a public key was injected at creation time, no password is needed to log in; once inside, set one with: sudo passwd <username>

After logging in over ssh and setting a password, you can also log in as root in noVNC with that password

Install the dashboard and log in to the web UI (controller node):

1) Install the package: yum install openstack-dashboard -y

2) Edit /etc/openstack-dashboard/local_settings

OPENSTACK_HOST = "192.168.2.201" # or "controller1"

ALLOWED_HOSTS = ['*', ] # allow all hosts to reach the dashboard

Add this line: SESSION_ENGINE = 'django.contrib.sessions.backends.file' # session storage backend

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.2.201:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST # enable the v3 identity API

OPENSTACK_API_VERSIONS = { # configure the API versions

"identity": 3,

"volume": 2,

"compute": 2,

}

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user" # default role for users created via the dashboard

OPENSTACK_NEUTRON_NETWORK = { # this deployment uses the provider (flat) network, so layer-3 services are disabled

'enable_router': False,

'enable_quotas': False,

'enable_distributed_router': False,

'enable_ha_router': False,

'enable_lb': False,

'enable_firewall': False,

'enable_vpn': False,

'enable_fip_topology_check': False,

}

TIME_ZONE = "Asia/Shanghai" # set the timezone

3) Restart the web server and session storage: systemctl restart httpd.service memcached.service

4) Log in at: http://192.168.2.201/dashboard

The dashboard shows the projects, users, and so on created earlier

View the instances

Add and remove security rules:

To give the created VMs outbound network access, configure a proxy as follows:

1) Install the package: yum install squid # on the controller node

2) Edit /etc/squid/squid.conf as follows # back the file up before editing

Change http_access deny all to http_access allow all # let all clients use the proxy

Change http_port 3128 to http_port 192.168.2.201:3128 # bind squid to the host's (controller's) IP and port
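The two edits above can be applied with sed. A sketch that works on a scratch copy in /tmp (seeded with just the two relevant lines), so the substitutions can be inspected before touching /etc/squid/squid.conf:

```shell
# Seed a scratch file with the two default lines, then apply both edits.
printf 'http_access deny all\nhttp_port 3128\n' > /tmp/squid.conf.test
sed -i -e 's/^http_access deny all/http_access allow all/' \
       -e 's/^http_port 3128/http_port 192.168.2.201:3128/' /tmp/squid.conf.test
cat /tmp/squid.conf.test
```

Point the same sed command at the real file (after backing it up) to apply the change for real.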

3) Test before starting; the commands are as follows:

Start the service:

Check that port 3128 is listening # netstat -nltp lists all listening TCP ports

4) Configure the squid proxy inside the VM (cloud instance)

Edit /etc/profile and append at the end: export http_proxy=http://192.168.2.201:3128

Reload the file: source /etc/profile

5) Test outbound access from the VM:

Fetch a page: curl http://www.baidu.com

Use yum online as normal: yum list

Installing block storage (Cinder)

Create the cinder user: [root@controller1 ~]# openstack user create --domain default --password-prompt cinder # here the password is set to cinder123

View the user list

Add the admin role to the cinder user: [root@controller1 ~]# openstack role add --project service --user cinder admin

Create the service entities (block storage requires two):

openstack service create --name cinder --description "OpenStack Block Storage" volume

openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

View the entity list:

Create the block storage service API endpoints:

volume entity:

openstack endpoint create --region RegionOne volume public http://controller1:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volume internal http://controller1:8776/v1/%\(tenant_id\)s

openstack endpoint create --region RegionOne volume admin http://controller1:8776/v1/%\(tenant_id\)s

volumev2 entity:

openstack endpoint create --region RegionOne volumev2 public http://controller1:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 internal http://controller1:8776/v2/%\(tenant_id\)s

openstack endpoint create --region RegionOne volumev2 admin http://controller1:8776/v2/%\(tenant_id\)s

View the API endpoint list:

Install the package: yum install openstack-cinder

Edit /etc/cinder/cinder.conf:

[DEFAULT]

...

my_ip = 192.168.2.201

auth_strategy = keystone

rpc_backend = rabbit

[database]

...

connection = mysql+pymysql://cinder:cinder123@controller1/cinder

[oslo_messaging_rabbit]

...

rabbit_host = controller1

rabbit_userid = openstack

rabbit_password = openstack123

[keystone_authtoken]

...

auth_uri = http://controller1:5000

auth_url = http://controller1:35357

memcached_servers = controller1:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = cinder

password = cinder123

[oslo_concurrency]

...

lock_path = /var/lib/cinder/tmp

Initialize the block storage database: su -s /bin/sh -c "cinder-manage db sync" cinder

Configure the compute node to use block storage (/etc/nova/nova.conf):

[cinder]

...

os_region_name=RegionOne

Restart the compute API service: systemctl restart openstack-nova-api.service

Start the block storage services and enable them at boot:

systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service

systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

On the storage node:

Make sure the package is installed: [root@cinder1 ~]# yum install lvm2

Start the service: [root@cinder1 ~]# service lvm2-lvmetad start

In the devices section of /etc/lvm/lvm.conf, add a filter that accepts the /dev/sda and /dev/sdb devices and rejects everything else:

devices {

...

filter = [ "a/sda/","a/sdb/","r/.*/"]
