OpenStack Train Deployment

Date: 2024-10-13 09:03:21

1. Environment Preparation

1.1 Server preparation

Hostname              OS         NICs
ct (controller node)  CentOS 7   virtual: 172.16.100.254   NAT: 192.168.100.254
c2 (compute node 1)   CentOS 7   virtual: 172.16.100.252   NAT: 192.168.100.252

The virtual machines must have CPU virtualization enabled.

1.2 Disable the firewall and SELinux (on both hosts)


 systemctl stop firewalld        # stop the firewall
 systemctl disable firewalld     # keep it disabled after reboot
   
 setenforce 0                    # switch SELinux to permissive mode for the current session
 vi /etc/selinux/config
 # This file controls the state of SELinux on the system.
 # SELINUX= can take one of these three values:
 #     enforcing - SELinux security policy is enforced.
 #     permissive - SELinux prints warnings instead of enforcing.
 #     disabled - No SELinux policy is loaded.
 SELINUX=disabled    # set to disabled so SELinux stays off after a reboot
 # SELINUXTYPE= can take one of three values:
 #     targeted - Targeted processes are protected,
 #     minimum - Modification of targeted policy. Only selected processes are protected.
 #     mls - Multi Level Security protection.
 SELINUXTYPE=targeted
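
A quick way to confirm both changes took effect (standard CentOS 7 commands, added here as a suggested check rather than part of the original steps):

 getenforce                        # should report Permissive now, and Disabled after a reboot
 systemctl is-enabled firewalld    # should report disabled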

1.3 Change the hostnames

Controller node:

hostnamectl set-hostname ct
su

Compute node 1:

hostnamectl set-hostname c2
su

1.4 Configure a local yum repository for installing basic tools (same steps on both hosts)

[root@c2 /]rm -rf /etc/yum.repos.d/*              # remove the existing repo files first
[root@c2 /]
[root@c2 /]vi /etc/yum.repos.d/local.repo				# create a new local repo file
	[centos]
	name=centos
	baseurl=file:///mnt
	gpgcheck=0
	enabled=1
:wq

[root@c2 /]lsblk																# check whether the installation ISO is attached
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  300M  0 part /boot
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0 17.7G  0 part /
sr0     11:0    1 1024M  0 rom  

// The ISO is not attached yet (sr0 shows only 1024M), so attach it manually:
In VMware choose VM -> Settings -> CD/DVD (IDE) -> tick both "Device status" boxes -> Connection: "Use ISO image file" -> browse to the ISO stored on the host machine -> OK

[root@c2 /]lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   20G  0 disk 
├─sda1   8:1    0  300M  0 part /boot
├─sda2   8:2    0    2G  0 part [SWAP]
└─sda3   8:3    0 17.7G  0 part /
sr0     11:0    1  4.3G  0 rom  

[root@c2 /]mount /dev/sr0 /mnt/
mount: /dev/sr0 写保护,将以只读方式挂载

[root@c2 /]df -hT
文件系统       类型      容量  已用  可用 已用% 挂载点
/dev/sda3      xfs        18G  1.2G   17G    7% /
devtmpfs       devtmpfs  1.9G     0  1.9G    0% /dev
tmpfs          tmpfs     1.9G     0  1.9G    0% /dev/shm
tmpfs          tmpfs     1.9G   12M  1.9G    1% /run
tmpfs          tmpfs     1.9G     0  1.9G    0% /sys/fs/cgroup
/dev/sda1      xfs       297M  120M  177M   41% /boot
tmpfs          tmpfs     378M     0  378M    0% /run/user/0
/dev/sr0       iso9660   4.3G  4.3G     0  100% /mnt         // mounted successfully
[root@c2 /]

[root@c2 /]yum clean all      // clear the yum cache
已加载插件:fastestmirror
正在清理软件源: centos
[root@c2 /]
[root@c2 /]yum repolist     

//  rebuild the yum cache
已加载插件:fastestmirror
Determining fastest mirrors
centos                                                                | 3.6 kB  00:00:00     
(1/2): centos/group_gz                                                | 166 kB  00:00:00     
(2/2): centos/primary_db                                              | 3.1 MB  00:00:00     
源标识                                      源名称                                      状态
centos                                      centos                                      4,021
repolist: 4,021   // a non-zero package count means the cache was built successfully
[root@c2 /]

1.5 Install wget, used to configure the Aliyun repository (same steps on both hosts)

[root@c2 /]yum install -y wget
…………………………
…………………………
  正在安装    : wget-1.14-18.el7.x86_64                                                  1/1 
  验证中      : wget-1.14-18.el7.x86_64                                                  1/1 

已安装:
  wget.x86_64 0:1.14-18.el7
  
[root@c2 /] cd /etc/yum.repos.d/
[root@c2 yum.repos.d]wget http://mirrors.aliyun.com/repo/Centos-7.repo

1.6 Install the required base packages and make sure they are up to date (same steps on both hosts)

[root@c2 /]yum -y install net-tools bash-completion vim gcc gcc-c++ make pcre  pcre-devel expat-devel cmake  bzip2 lrzsz --nogpgcheck
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * base: 
 * extras: 
 * updates: 
软件包 net-tools-2.0-0.25.20131004git.el7.x86_64 已安装并且是最新版本
软件包 1:bash-completion-2.1-8. 已安装并且是最新版本
软件包 2:vim-enhanced-7.4.629-8.el7_9.x86_64 已安装并且是最新版本
软件包 gcc-4.8.5-44.el7.x86_64 已安装并且是最新版本
软件包 gcc-c++-4.8.5-44.el7.x86_64 已安装并且是最新版本
软件包 1:make-3.82-24.el7.x86_64 已安装并且是最新版本
软件包 pcre-8.32-17.el7.x86_64 已安装并且是最新版本
软件包 pcre-devel-8.32-17.el7.x86_64 已安装并且是最新版本
软件包 expat-devel-2.1.0-14.el7_9.x86_64 已安装并且是最新版本
软件包 cmake-2.8.12.2-2.el7.x86_64 已安装并且是最新版本
软件包 bzip2-1.0.6-13.el7.x86_64 已安装并且是最新版本
软件包 lrzsz-0.12.20-36.el7.x86_64 已安装并且是最新版本
无须任何处理

[root@c2 /]yum -y install centos-release-openstack-train python-openstackclient openstack-selinux openstack-utils --nogpgcheck
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * base: 
 * centos-ceph-nautilus: 
 * centos-nfs-ganesha28: 
 * centos-openstack-train: 
 * centos-qemu-ev: 
 * extras: 
 * updates: 
软件包 centos-release-openstack-train-1-1. 已安装并且是最新版本
软件包 python2-openstackclient-4.0.2-1. 已安装并且是最新版本
软件包 openstack-selinux-0.8.26-1. 已安装并且是最新版本
软件包 openstack-utils-2017.1-1. 已安装并且是最新版本
无须任何处理
[root@c2 /]

Package notes

net-tools: provides the ifconfig command line tool
bash-completion: shell tab-completion helper
vim: text editor
gcc / gcc-c++: compilers
make: build tool
pcre / pcre-devel: Perl-compatible regular expression library
expat-devel: development files for Expat, a stream-oriented XML parser
cmake: cross-platform build tool, mainly used on top of make to generate portable makefiles
lrzsz: provides the rz/sz commands for uploading and downloading files through the terminal

The second command installs the OpenStack Train release repository along with the OpenStack client, openstack-selinux, and openstack-utils packages.

1.7 Configure host name resolution (same steps on both servers)

[root@c2 /]echo "172.16.100.252 c2" >> /etc/hosts
[root@c2 /]echo "172.16.100.254 ct" >> /etc/hosts
[root@c2 /]cat /etc/hosts
127.0.0.1   localhost  localhost4 localhost4.localdomain4
::1         localhost  localhost6 localhost6.localdomain6
172.16.100.252 c2
172.16.100.254 ct
[root@c2 /]
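
Optionally verify that both names resolve before moving on (a simple sanity check, not part of the original steps):

[root@c2 /]ping -c 2 ct      // both hostnames should resolve and answer
[root@c2 /]ping -c 2 c2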

1.8 Set up passwordless SSH between the nodes (on all nodes)

[root@c2 /]ssh-keygen -t rsa               // generate an RSA key pair
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 		// press Enter to accept the default key location
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):    // press Enter for an empty passphrase
Enter same passphrase again:    // press Enter to confirm
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:fO0ZiFxs4V0QVchxQmXCUB6fQFKhuJSs1H0mmHudFm8 root@c2
The key's randomart image is:
+---[RSA 2048]----+
|          ..*#@==|
|        oo*ooo=B.|
|       . O=+.+...|
|      .oo+oo= +  |
|       .+ E |
|         ....o.  |
|            o    |
|                 |
|                 |
+----[SHA256]-----+
[root@c2 /]
[root@c2 /]ssh-copy-id ct      // copy the public key to the ct controller node
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ct (172.16.100.252)' can't be established.
ECDSA key fingerprint is SHA256:ghI++HlCm85UJ8SlEZgTONJlpZTiWWfzekzsP7Uk13I.
ECDSA key fingerprint is MD5:42:c0:7f:24:9a:e3:0c:39:ce:11:30:e7:75:bd:c3:99.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ct's password:   // enter ct's root password

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ct'"
and check to make sure that only the key(s) you wanted were added.

[root@c2 /]

Error 1:

[root@c2 .ssh]ssh-copy-id ct
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
ERROR: @       WARNING: POSSIBLE DNS SPOOFING DETECTED!          @
ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
ERROR: The ECDSA host key for ct has changed,
ERROR: and the key for the corresponding IP address 172.16.100.254
ERROR: is unknown. This could either mean that
ERROR: DNS SPOOFING is happening or the IP address for the host
ERROR: and its host key have changed at the same time.
ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
ERROR: @    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
ERROR: @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
ERROR: IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
ERROR: Someone could be eavesdropping on you right now (man-in-the-middle attack)!
ERROR: It is also possible that a host key has just been changed.
ERROR: The fingerprint for the ECDSA key sent by the remote host is
ERROR: SHA256:i9DVGnRV1H8RZIHtt3d42oyJX2WY4G1fpocsqlZ+4CA.
ERROR: Please contact your system administrator.
ERROR: Add correct host key in /root/.ssh/known_hosts to get rid of this message.
ERROR: Offending ECDSA key in /root/.ssh/known_hosts:1
ERROR: ECDSA host key for ct has changed and you have requested strict checking.
ERROR: Host key verification failed.

Run the following to remove the stale host key and retry:

[root@c2 .ssh]ssh-keygen -R 172.16.100.254
# Host 172.16.100.254 found: line 3
/root/.ssh/known_hosts updated.
Original contents retained as /root/.ssh/known_hosts.old
[root@c2 .ssh]ssh-copy-id ct
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'ct (172.16.100.254)' can't be established.
ECDSA key fingerprint is SHA256:i9DVGnRV1H8RZIHtt3d42oyJX2WY4G1fpocsqlZ+4CA.
ECDSA key fingerprint is MD5:aa:cc:3e:46:5c:83:3c:03:d6:1a:d0:14:00:2d:72:44.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@ct's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'ct'"
and check to make sure that only the key(s) you wanted were added.
[root@c2 .ssh]ssh ct
Last login: Thu Jun  2 03:27:20 2022 from 192.168.100.1
-bash-4.2#

Error 2:

After logging in, the shell prompt shows -bash-4.2# instead of the normal prompt.

Fix:

This happens because root's shell startup files are missing on the ct side:

1. .bash_profile  2. .bashrc

[root@ct ~]cp /etc/skel/.bashrc /root/ 
[root@ct ~]
[root@ct ~]
[root@ct ~]cp /etc/skel/.bash_profile  /root/

Reconnect and the prompt is displayed normally again.

1.9 Configure DNS and time synchronization

The controller node acts as the time server, and the other nodes synchronize their clocks against it.

[root@ct ~]echo "nameserver 114.114.114.114" >> /etc/resolv.conf  // set 114.114.114.114 as the DNS server
																						(a widely used public DNS resolver, the first public DNS service opened in China)
[root@ct ~]yum install -y chrony   // install chrony for time synchronization

[root@ct ~] vi /etc/chrony.conf 
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# server 0.centos.pool.ntp.org iburst    //  comment out
# server 1.centos.pool.ntp.org iburst    //  comment out
# server 2.centos.pool.ntp.org iburst    //  comment out
# server 3.centos.pool.ntp.org iburst    //  comment out

server ntp.aliyun.com iburst     // use the Aliyun NTP server as the time source
allow 172.16.100.0/24            // allow hosts on 172.16.100.0/24 to sync time from this node

[root@ct ~] systemctl enable chronyd
[root@ct ~] systemctl restart  chronyd
[root@ct ~] chronyc sources   // check the time sync sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    17    17   +288us[+1475us] +/-   30ms
[root@ct ~]

If the clock is wrong after the machine has been powered off and back on, set it manually with date -s "HH:MM:SS".

Time synchronization on the compute nodes (same configuration on every compute node)

[root@c2 /] yum install -y chrony
[root@c2 /] vi /etc/chrony.conf 

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# server 0.centos.pool.ntp.org iburst    // comment out
# server 1.centos.pool.ntp.org iburst    // comment out
# server 2.centos.pool.ntp.org iburst    // comment out
# server 3.centos.pool.ntp.org iburst    // comment out

server ct iburst    // sync time from the controller node


// start the service and check the sync status
[root@c2 /] systemctl enable chronyd
[root@c2 /] systemctl restart  chronyd
[root@c2 /] chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* ct                            3   6    17    24  +6991ns[ +148us] +/-   31ms
[root@c2 /]

To guard against drift if synchronization ever fails, add a cron job on every node that logs the sync status every minute (create the job on both machines):

[root@ct ~] crontab -e
*/1 * * * * /usr/bin/chronyc sources >> /var/log/
// after saving and quitting you will see
no crontab for root - using an empty one
crontab: installing new crontab

[root@ct ~] crontab -l  // list the scheduled jobs
*/1 * * * * /usr/bin/chronyc sources >> /var/log/
[root@ct ~]

Syncing time over the Internet

Command: ntpdate -u <ntp-server>

If the ntpdate command is missing, install it with: yum -y install ntp

Explanation:

ntpdate synchronizes the system clock over the network; -u uses an unprivileged port so it can get through firewalls (see man ntpdate). The server used above is an NTP server in Shanghai, which the author verified works. If it still fails, that NTP server may have been retired; look up another public NTP server.

After it succeeds, check the current time:

date -R

Similar public NTP servers from other regions (for example in the US, at Fudan University, or Microsoft's time server time.windows.com) can be found online.

Changing the time zone

The time zone here should be China Standard Time; note that the abbreviation CST is ambiguous, so confirm that the offset shown is +0800.

Network time sync does not adjust the time zone; set the time zone with:

ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
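
On CentOS 7 the same thing can be done through systemd with timedatectl; a small sketch, assuming the Asia/Shanghai zone used above:

timedatectl set-timezone Asia/Shanghai    # set the time zone via systemd
timedatectl                               # verify that "Time zone: Asia/Shanghai" and the +0800 offset are shown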

2. Install OpenStack Train (install on both machines and make sure it is the latest version)

[root@ct ~] yum install -y centos-release-openstack-train
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * base: 
 * centos-ceph-nautilus: 
 * centos-nfs-ganesha28: 
 * centos-openstack-train: 
 * centos-qemu-ev: 
 * extras: 
 * updates: 
base                                                                  | 3.6 kB  00:00:00     
centos-ceph-nautilus                                                  | 3.0 kB  00:00:00     
centos-nfs-ganesha28                                                  | 3.0 kB  00:00:00     
centos-openstack-train                                                | 3.0 kB  00:00:00     
centos-qemu-ev                                                        | 3.0 kB  00:00:00     
file:///mnt/repodata/: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/"
正在尝试其它镜像。
extras                                                                | 2.9 kB  00:00:00     
updates                                                               | 2.9 kB  00:00:00     
软件包 centos-release-openstack-train-1-1. 已安装并且是最新版本
无须任何处理
您在 /var/spool/mail/root 中有新邮件
[root@ct ~]yum upgrade -y  // upgrade the packages on all nodes (run on every node)

  //  install the OpenStack client appropriate for this release (this package name applies to CentOS 7)
  [root@ct ~]yum install python-openstackclient -y 

Possible problem

yum reports: failure: repodata/ from flink-on-cdh: [Errno 256] No more mirrors to try

Run:

yum clean all
yum repolist

and yum can install packages again.

3. Controller node (and some compute node) configuration

3.1 Install and configure the MariaDB database

Most OpenStack services use a SQL database to store information. The database typically runs on the controller node; MariaDB is used here, as is standard on CentOS-based distributions.

[root@ct ~] yum -y install mariadb mariadb-server python2-PyMySQL
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * base: 
 * centos-ceph-nautilus: 
 * centos-nfs-ganesha28: 
 * centos-openstack-train: 
 * centos-qemu-ev: 
 * extras: 
 * updates: 
file:///mnt/repodata/: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/"
正在尝试其它镜像。
软件包 3:mariadb-10.3.20-3.el7.0.0.rdo1.x86_64 已安装并且是最新版本
软件包 3:mariadb-server-10.3.20-3.el7.0.0.rdo1.x86_64 已安装并且是最新版本
软件包 python2-PyMySQL-0.9.2-2. 已安装并且是最新版本
无须任何处理
[root@ct ~] 

Package notes

mariadb: a MySQL fork, fully compatible with MySQL; mariadb-server: the database server; python2-PyMySQL: the module the OpenStack control plane uses to connect to MySQL. Without it the services cannot reach the database; it only needs to be installed on the controller node.

3.2 Install the remote memory access (RDMA verbs) library

[root@ct ~] yum install -y libibverbs
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * base: 
 * centos-ceph-nautilus: 
 * centos-nfs-ganesha28: 
 * centos-openstack-train: 
 * centos-qemu-ev: 
 * extras: 
 * updates: 
file:///mnt/repodata/: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/"
正在尝试其它镜像。
软件包 libibverbs-22.4-6.el7_9.x86_64 已安装并且是最新版本
无须任何处理
[root@ct ~]

3.3 Add a MySQL drop-in configuration file

[root@ct /] vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.100.254		# IP of the controller node's first NIC

default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

 //  start the database and enable it at boot
 [root@ct /]systemctl restart mariadb
您在 /var/spool/mail/root 中有新邮件
[root@ct /]systemctl enable mariadb
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/.
Created symlink from /etc/systemd/system/ to /usr/lib/systemd/system/.
Created symlink from /etc/systemd/system// to /usr/lib/systemd/system/.
[root@ct /]
[root@ct /]

3.4 Set the database password to 123456 (the default admin user is root)

[root@ct /] mysql_secure_installation 

NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):   // press Enter to skip (no password set yet)
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] y  // y to set a new password
New password:      // enter the new password
Re-enter new password:       // confirm the new password
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n]       // press Enter
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.
 
Disallow root login remotely? [Y/n]   // press Enter
 ... Success!

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n]     // press Enter
 - Dropping test database...
 ... Success!
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n]    // press Enter
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!
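
A quick login test to confirm the new root password works (just a sanity check, assuming the password 123456 set above):

[root@ct /] mysql -uroot -p123456 -e "show databases;"    // should list the default databases without errors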

3.5 Install the RabbitMQ message queue

OpenStack uses a message queue to coordinate operations and state information between services. The message queue service typically runs on the controller node; RabbitMQ is the broker supported by most distributions.

[root@ct /] yum install -y rabbitmq-server 
已加载插件:fastestmirror
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: 
 * centos-ceph-nautilus: 
 * centos-nfs-ganesha28: 
 * centos-openstack-train: 
 * centos-qemu-ev: 
 * extras: 
 * updates: mirrors.
file:///mnt/repodata/: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/"
正在尝试其它镜像。
软件包 rabbitmq-server-3.6.16-1. 已安装并且是最新版本
无须任何处理
[root@ct /]

//    start the message queue and enable it at boot
[root@ct /] systemctl restart rabbitmq-server
[root@ct /] systemctl enable rabbitmq-server
Created symlink from /etc/systemd/system// to /usr/lib/systemd/system/.
[root@ct /] 

//   create the openstack user and grant it configure, write, and read permissions
[root@ct /] rabbitmqctl add_user openstack 123456
Creating user "openstack"
[root@ct /] rabbitmqctl set_permissions openstack '.*' '.*' '.*'
Setting permissions for user "openstack" in vhost "/"
[root@ct /]
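
To double-check the new user and its permissions, the standard rabbitmqctl listing commands can be used (output varies slightly between versions):

[root@ct /] rabbitmqctl list_users               // openstack should appear alongside guest
[root@ct /] rabbitmqctl list_permissions -p /    // openstack should show ".*  .*  .*" for configure/write/read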

3.6 Install and configure Etcd

OpenStack services may use Etcd, a distributed and reliable key-value store, for distributed key locking, configuration storage, service liveness tracking, and other scenarios. The etcd service runs on the controller node.

[root@ct ~] yum install -y etcd
已加载插件:fastestmirror
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: 
 * centos-ceph-nautilus: 
 * centos-nfs-ganesha28: 
 * centos-openstack-train: 
 * centos-qemu-ev: 
 * extras: 
 * updates: mirrors.
file:///mnt/repodata/: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/"
正在尝试其它镜像。
软件包 etcd-3.3.11-2..x86_64 已安装并且是最新版本
无须任何处理
[root@ct ~] 
[root@ct ~] vi /etc/etcd/etcd.conf     // edit the configuration file
   // Set ETCD_INITIAL_CLUSTER, ETCD_INITIAL_ADVERTISE_PEER_URLS, ETCD_ADVERTISE_CLIENT_URLS, and ETCD_LISTEN_CLIENT_URLS to the controller node's management IP address so other nodes can reach etcd over the management network:
#[Member]
#ETCD_CORS=""
ETCD_DATA_DIR="/var/lib/etcd/"				# leave the data directory at its default
#ETCD_WAL_DIR=""
ETCD_LISTEN_PEER_URLS="http://192.168.100.254:2380"		# change to the management network IP
ETCD_LISTEN_CLIENT_URLS="http://192.168.100.254:2379"		# change to the management network IP
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
ETCD_NAME="ct"									# change to ct
#ETCD_SNAPSHOT_COUNT="100000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_QUOTA_BACKEND_BYTES="0"
#ETCD_MAX_REQUEST_BYTES="1572864"
#ETCD_GRPC_KEEPALIVE_MIN_TIME="5s"
#ETCD_GRPC_KEEPALIVE_INTERVAL="2h0m0s"
#ETCD_GRPC_KEEPALIVE_TIMEOUT="20s"
#
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.100.254:2380" # change to the management network IP
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.100.254:2379"		# change to the management network IP
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""
#ETCD_DISCOVERY_SRV=""
ETCD_INITIAL_CLUSTER="ct=http://192.168.100.254:2380" # change to the management network IP
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"		# set as shown
ETCD_INITIAL_CLUSTER_STATE="new"					# uncomment this line
#ETCD_STRICT_RECONFIG_CHECK="true"
#ETCD_ENABLE_V2="true"

Error on startup (from the etcd logs):

couldn't find local name "ct" in the initial cluster configuration


//  make the names in these two lines of the configuration file match:
ETCD_NAME="ct"	
ETCD_INITIAL_CLUSTER="ct=http://192.168.100.254:2380"
[root@ct log] systemctl restart etcd          // starts without errors now
[root@ct log] systemctl enable etcd
Created symlink from /etc/systemd/system// to /usr/lib/systemd/system/.
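
As a quick sanity check that etcd is answering on the advertised client URL (etcdctl defaults to the v2 API on etcd 3.3; adjust the endpoint if your management IP differs):

[root@ct ~] etcdctl --endpoints=http://192.168.100.254:2379 cluster-health   // should end with "cluster is healthy"
[root@ct ~] etcdctl --endpoints=http://192.168.100.254:2379 member list      // the single "ct" member should be listed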

3.7 Install the OpenStack services

3.7.1 Install the Keystone service (controller node)

Before installing the Keystone service, create its database in MariaDB and grant the appropriate privileges.

Log in to the database as root and create the keystone database:

 mysql -u root -p123456

Error:

[root@ct /] mysql -uroot -p123456
mysql: unknown variable 'bind-address=192.168.100.254'

Cause: the [mysqld] section header at the top of the drop-in config file was missing the trailing d (it was written as [mysql]).

Create the keystone database and grant the appropriate privileges:

MariaDB [(none)]> create database keystone;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'%' identified by '123456';
Query OK, 0 rows affected (0.000 sec)

Install and configure the Keystone components

[root@ct /]yum install openstack-keystone httpd mod_wsgi -y
已加载插件:fastestmirror
Repository base is listed more than once in the configuration
Repository updates is listed more than once in the configuration
Repository extras is listed more than once in the configuration
Repository centosplus is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: 
 * centos-ceph-nautilus: 
 * centos-nfs-ganesha28: 
 * centos-openstack-train: 
 * centos-qemu-ev: 
 * extras: 
 * updates: 
file:///mnt/repodata/: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/"
正在尝试其它镜像。
软件包 1:openstack-keystone-16.0.2-1. 已安装并且是最新版本
软件包 httpd-2.4.6-97..5.x86_64 已安装并且是最新版本
软件包 mod_wsgi-3.4-18.el7.x86_64 已安装并且是最新版本
无须任何处理
[root@ct /]
[root@ct /] vi /etc/keystone/keystone.conf 

In the [database] section, configure database access:
[database]
# ...
connection = mysql+pymysql://keystone:123456@ct/keystone

In the [token] section, configure the Fernet token provider:
[token]
# ...
provider = fernet

Populate the Identity service database

[root@ct /] su -s /bin/sh -c "keystone-manage db_sync" keystone

If this fails with an internal server error [HTTP 500]:

the connection string in the configuration file is wrong; connection = mysql+pymysql://keystone:123456@ct/keystone must match the actual host and database names.

Initialize the Fernet key repositories

[root@ct /] keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@ct /] keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service

[root@ct /] keystone-manage bootstrap --bootstrap-password 123456 \
> --bootstrap-admin-url http://ct:5000/v3/ \
> --bootstrap-internal-url http://ct:5000/v3/ \
> --bootstrap-public-url http://ct:5000/v3/ \
> --bootstrap-region-id RegionOne

3.7.2 Configure the Apache httpd service

Edit /etc/httpd/conf/httpd.conf and set the ServerName option to the controller node:

[root@ct /] vi /etc/httpd/conf/httpd.conf 
ServerName ct

Link the keystone WSGI configuration file into Apache's conf.d directory:

ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

In /etc/httpd/conf/httpd.conf, change the access for the root <Directory /> block to granted:

[root@ct /] vi /etc/httpd/conf/httpd.conf 
<Directory />
    AllowOverride none
    Require all granted
</Directory>

Restart httpd and enable it at boot

[root@ct /] systemctl restart httpd
[root@ct /] systemctl enable httpd
Created symlink from /etc/systemd/system// to /usr/lib/systemd/system/.
[root@ct /]

Configure the admin account by setting the appropriate environment variables

[root@ct ~] cat >> ~/.bashrc << EOF
> export OS_USERNAME=admin    // login user for the CLI/dashboard
> export OS_PASSWORD=123456     // login password
> export OS_PROJECT_NAME=admin
> export OS_USER_DOMAIN_NAME=Default
> export OS_PROJECT_DOMAIN_NAME=Default
> export OS_AUTH_URL=http://ct:5000/v3
> export OS_IDENTITY_API_VERSION=3
> export OS_IMAGE_API_VERSION=2
> EOF

[root@ct ~] source ~/.bashrc

[root@ct ~] openstack user list
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| 09a752a6b18d48f2ae1f472599e94c5a | admin |
+----------------------------------+-------+

Create OpenStack domains, projects, users, and roles

Create a project in the specified domain with a description; the project name is service (domains can be listed with openstack domain list):

[root@ct ~] openstack project create --domain default --description "Service Project" service
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Service Project                  |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 70a6fc6eb82f48dfa3d66d5389dcea3e |
| is_domain   | False                            |
| name        | service                          |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+

Create a role (list existing roles with openstack role list):

[root@ct ~] openstack role create user
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | None                             |
| domain_id   | None                             |
| id          | d0e42f995f29415c87705af0e7754ddc |
| name        | user                             |
| options     | {}                               |
+-------------+----------------------------------+

List the OpenStack roles

[root@ct ~] openstack role list
+----------------------------------+--------+
| ID                               | Name   |
+----------------------------------+--------+
| 06feb1be0b964169bffa214af2435c56 | reader |
| 34755268537741189a3f9242010e1e68 | member |
| d0e42f995f29415c87705af0e7754ddc | user   |
| d9523d7587d54236ab48b6e9f1e7462f | admin  |
+----------------------------------+--------+

admin: administrator; member: tenant member; user: ordinary user

Verify the Identity service by requesting a token (the credentials come from the environment variables exported above):

[root@ct ~] openstack token issue
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field      | Value                                                                                                                                                                                   |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| expires    | 2021-02-18T09:27:10+0000                                                                                                                                                                |
| id         | gAAAAABgLiTedvBu5lIbeJPJ-gUoWoIJx_NpRWcaTFjWf-oN_x5q6AhkYN0WUBvlLKR8nO9RJRJmczdvOlD9h7Kl-Cp-d3Fvd3knzrhY8nEKKW2TA16JTd6KmN9UeczQtQL9nLJn5wnum8AQ6OLp_mfYukFMC7tlBKDfYa8Eugxoj164BwTfeTg |
| project_id | 667d2c1d9fca46a690b830e6864580c9                                                                                                                                                        |
| user_id    | 09a752a6b18d48f2ae1f472599e94c5a                                                                                                                                                        |
+------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

3.7.3 Install the Glance service

Before installing the Glance service, create its database and grant the appropriate privileges.

MariaDB [(none)]> create database glance;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by '123456';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by '123456';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> flush privileges;

Create the Glance user in OpenStack. Before creating the user, load the admin environment variables (set up while configuring Keystone):

openstack user create --domain default --password 123456 glance		 
openstack role add --project service --user glance admin					 
openstack service create --name glance --description "OpenStack Image" image

Create the Image service API endpoints

openstack endpoint create --region RegionOne image public http://ct:9292
openstack endpoint create --region RegionOne image internal http://ct:9292
openstack endpoint create --region RegionOne image admin http://ct:9292
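
Optionally list the endpoints just created and confirm all three interfaces point at port 9292 (a quick check, not part of the original steps):

openstack endpoint list --service image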

Install the openstack-glance package

yum -y install openstack-glance 

Glance has two configuration files: /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf

cp -a /etc/glance/glance-api.conf{,.bak}          # back up the original file
cp -a /etc/glance/glance-registry.conf{,.bak}     # back up the original file
grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf       # regenerate the config without blank lines and comments
grep -Ev '^$|#' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf  # regenerate the config without blank lines and comments

Add the configuration to glance-api.conf:

openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:123456@ct/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://ct:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://ct:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password 123456
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/


[root@ct glance] cat glance-api.conf
[DEFAULT]
[cinder]
[cors]
[database]
connection = mysql+pymysql://glance:123456@ct/glance
[file]
[]
[]
[]
[]
[.vmware_datastore.store]

[glance_store]
stores = file,http					# storage backends; file = local files, http = fetch images from another store via API
default_store = file					# default store
filesystem_store_datadir = /var/lib/glance/images/	# local directory where image files are kept

[image_format]
[keystone_authtoken]
www_authenticate_uri = http://ct:5000			# URI of the Keystone service used for authentication
auth_url = http://ct:5000
memcached_servers = ct:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service				# the glance user has admin rights on the service project
username = glance
password = 123456

[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]

[paste_deploy]
flavor = keystone					# use keystone as the authentication provider

[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]

Modify the glance-registry.conf configuration:

openstack-config --set /etc/glance/glance-registry.conf database connection  mysql+pymysql://glance:123456@ct/glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken www_authenticate_uri   http://ct:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_url  http://ct:5000
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken memcached_servers  ct:11211
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken auth_type  password
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_domain_name  Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken user_domain_name  Default
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken project_name  service
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken username  glance
openstack-config --set /etc/glance/glance-registry.conf keystone_authtoken password  123456
openstack-config --set /etc/glance/glance-registry.conf paste_deploy flavor  keystone

[root@ct glance]cat glance-registry.conf
[DEFAULT]
[database]
connection = mysql+pymysql://glance:123456@ct/glance

[keystone_authtoken]
www_authenticate_uri = http://ct:5000
auth_url = http://ct:5000
memcached_servers = ct:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 123456

[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_policy]

[paste_deploy]
flavor = keystone

[profiler]

Initialize the Glance database and create its tables

su -s /bin/sh -c "glance-manage db_sync" glance

Start the Glance services

systemctl enable openstack-glance-api openstack-glance-registry
systemctl restart openstack-glance-api openstack-glance-registry

Check the port to confirm the service started

netstat -natp | grep 9292

Give the glance service write access to the image store

-h: change the ownership of symbolic links themselves rather than the files they point to

chown -hR glance:glance /var/lib/glance/

Download a test image and upload it to Glance to verify that image uploads work

wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
openstack image create --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public cirros

Error: Error finding address for http://controller:9292/v2/schemas/image……

# cd /var/log/glance                  # go to the glance log directory
 
# cat /var/log/glance/api.log         # view the corresponding glance log (api.log as an example)

Check the log file for errors.

Error analysis:

The log shows that the problem lies in the glance configuration file.

Check the configuration parameters carefully, especially auth_url.

Fix:

1. Correct the error in the configuration file.

2. Restart the glance services so the change takes effect:

 systemctl restart openstack-glance-api openstack-glance-registry

Re-upload the image; it now succeeds.

List the uploaded images

openstack image list
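
Two optional checks that the upload really worked (standard commands; the image name cirros matches the one created above):

[root@ct ~] openstack image show cirros -c status    // status should be "active"
[root@ct ~] ls -lh /var/lib/glance/images/           // the uploaded image file should be stored here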

3.7.4 Install the Placement service

Create the database and database user

[root@ct ~]# mysql -uroot -p123456

MariaDB [(none)]> CREATE DATABASE placement;
Query OK, 1 row affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '123456';
Query OK, 0 rows affected (0.000 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.001 sec)

MariaDB [(none)]> exit

Create the Placement service user and API endpoints

Create the placement user:

[root@ct ~] openstack user create --domain default --password 123456 placement
 +---------------------+----------------------------------+
 | Field               | Value                            |
 +---------------------+----------------------------------+
 | domain_id           | default                          |
 | enabled             | True                             |
 | id                  | c46bb0ad1f794a0e9a32e8c4dd5e3002 |
 | name                | placement                        |
 | options             | {}                               |
 | password_expires_at | None                             |
 +---------------------+----------------------------------+

Grant the placement user the admin role on the service project

openstack role add --project service --user placement admin

Create the placement service entity, with service type placement

[root@controller ~]  openstack service create --name placement --description "Placement API" placement
 +-------------+----------------------------------+
 | Field       | Value                            |
 +-------------+----------------------------------+
 | description | Placement API                    |
 | enabled     | True                             |
 | id          | c40c9d63e11a427cac215bbbb630da97 |
 | name        | placement                        |
 | type        | placement                        |
 +-------------+----------------------------------+

Register the API endpoints for the placement service; the registration data is written to MySQL

[root@ct ~] openstack endpoint create --region RegionOne placement public http://ct:8778
 +--------------+----------------------------------+
 | Field        | Value                            |
 +--------------+----------------------------------+
 | enabled      | True                             |
 | id           | 3b498d3a66024a3395e9869556063db5 |
 | interface    | public                           |
 | region       | RegionOne                        |
 | region_id    | RegionOne                        |
| service_id   | c40c9d63e11a427cac215bbbb630da97 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+

[root@ct ~] openstack endpoint create --region RegionOne placement internal http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 668b2c9ea7294329b66d10281605be54 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c40c9d63e11a427cac215bbbb630da97 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+

[root@ct ~] openstack endpoint create --region RegionOne placement admin http://ct:8778
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | bce7894222d14766af11a1e930973b3b |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | c40c9d63e11a427cac215bbbb630da97 |
| service_name | placement                        |
| service_type | placement                        |
| url          | http://ct:8778                   |
+--------------+----------------------------------+

Install the Placement package

[root@ct ~] yum -y install openstack-placement-api

Edit the Placement configuration file

[root@ct ~] cp /etc/placement/placement.conf{,.bak}
[root@ct ~] grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
[root@ct ~] openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:123456@ct/placement
[root@ct ~] openstack-config --set /etc/placement/placement.conf api auth_strategy keystone
[root@ct ~] openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_url  http://ct:5000/v3
[root@ct ~] openstack-config --set /etc/placement/placement.conf keystone_authtoken memcached_servers ct:11211
[root@ct ~] openstack-config --set /etc/placement/placement.conf keystone_authtoken auth_type password
[root@ct ~] openstack-config --set /etc/placement/placement.conf keystone_authtoken project_domain_name Default
[root@ct ~] openstack-config --set /etc/placement/placement.conf keystone_authtoken user_domain_name Default
[root@ct ~] openstack-config --set /etc/placement/placement.conf keystone_authtoken project_name service
[root@ct ~] openstack-config --set /etc/placement/placement.conf keystone_authtoken username placement
[root@ct ~] openstack-config --set /etc/placement/placement.conf keystone_authtoken password 123456

[root@ct ~] cd /etc/placement/
[root@ct placement] cat placement.conf
[DEFAULT]
[api]
auth_strategy = keystone
[cors]
[keystone_authtoken]
auth_url = http://ct:5000/v3
memcached_servers = ct:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 123456
[oslo_policy]
[placement]
[placement_database]
connection = mysql+pymysql://placement:123456@ct/placement
[profiler]

Populate the placement database

[root@ct ~]su -s /bin/sh -c "placement-manage db sync" placement

Edit the Apache configuration for Placement (this virtual host file, 00-placement-api.conf, is created automatically when the placement package is installed):

[root@ct ~] cd /etc/httpd/conf.d/
[root@ct conf.d] vi 00-placement-api.conf  
Append the following at the end of the file:
<Directory /usr/bin>
<IfVersion >= 2.4>
       Require all granted
</IfVersion>       
<IfVersion < 2.4>
       Order allow,deny
       Allow from all
</IfVersion>       
</Directory>

Restart Apache

systemctl restart httpd

Test access with curl

[root@ct ~] curl ct:8778
{"versions": [{"status": "CURRENT", "min_version": "1.0", "max_version": "1.36", "id": "v1.0", "links": [{"href": "", "rel": "self"}]}]}

Check the listening port (netstat or lsof)

[root@ct ~] netstat -natp | grep 8778
tcp        0      0 192.168.100.11:52780    192.168.100.11:8778     TIME_WAIT   -                   
tcp6       0      0 :::8778                 :::*                    LISTEN      24925/httpd  

Check the Placement status

[root@ct ~] placement-status upgrade check
 +----------------------------------+
 | Upgrade Check Results            |
 +----------------------------------+
 | Check: Missing Root Provider IDs |
 | Result: Success                  |
 | Details: None                    |
 +----------------------------------+
 | Check: Incomplete Consumers      |
 | Result: Success                  |
 | Details: None                    |
 +----------------------------------+

3.7.5 Install the Nova service

Controller node

Create the Nova databases and grant privileges

[root@ct ~] mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
MariaDB [(none)]> flush privileges;
MariaDB [(none)]> exit

Source the admin credentials to run admin-only CLI commands

[root@ct ~] source ~/.bashrc

Create the nova user

[root@ct ~] openstack user create --domain default --password-prompt nova
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 2f1bcde50731400fa59f5139385009a6 |
| name                | nova                             |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+

Add the admin role to the nova user

[root@ct ~] openstack role add --project service --user nova admin

Create the nova service entity

[root@ct ~] openstack service create --name nova --description "OpenStack Compute" compute
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Compute                |
| enabled     | True                             |
| id          | 499856c62dc8466d9bacb65a25cf5692 |
| name        | nova                             |
| type        | compute                          |
+-------------+----------------------------------+

Create the Compute API service endpoints

[root@ct ~]  openstack endpoint create --region RegionOne compute public http://ct:8774/v2.1 
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 45b23ead375645b78151d82fbc6a8576 |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 499856c62dc8466d9bacb65a25cf5692 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+
[root@ct ~] openstack endpoint create --region RegionOne compute internal  http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | b165c19111e94aea92894bb4ad21fd39 |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 499856c62dc8466d9bacb65a25cf5692 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+
[root@ct ~] openstack endpoint create --region RegionOne compute admin  http://ct:8774/v2.1
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 1e218b6534aa45278dea9e0be7e486dc |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 499856c62dc8466d9bacb65a25cf5692 |
| service_name | nova                             |
| service_type | compute                          |
| url          | http://ct:8774/v2.1              |
+--------------+----------------------------------+
您在 /var/spool/mail/root 中有新邮件
[root@ct ~]

Install and configure the components

Install the packages

yum install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler -y

Edit /etc/nova/nova.conf and complete the following steps

[root@ct ~] cp -a /etc/nova/nova.conf{,.bak}
[root@ct ~] grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

Modify the configuration:
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.16.100.254 			# set to ct's internal (VM network) IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:123456@ct
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:123456@ct/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:123456@ct/nova
openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:123456@ct/placement
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password 123456
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen ' $my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address ' $my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password 123456


#view the result
[root@ct ~] cat /etc/nova/nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata		# API types to enable
my_ip = 172.16.100.254				# this node's local IP
use_neutron = true					# obtain IP addresses through neutron
firewall_driver = nova.virt.firewall.NoopFirewallDriver
transport_url = rabbit://openstack:123456@ct	# rabbitmq connection string

[api]
auth_strategy = keystone				# authenticate through keystone

[api_database]
connection = mysql+pymysql://nova:123456@ct/nova_api

[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]

[database]
connection = mysql+pymysql://nova:123456@ct/nova

[devices]
[ephemeral_storage_encryption]
[filter_scheduler]

[glance]
api_servers = http://ct:9292

[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]

[keystone_authtoken]				# keystone authentication settings
auth_url = http://ct:5000/v3				# URL used for authentication
memcached_servers = ct:11211			# memcache address:port
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[libvirt]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]

[oslo_concurrency]					# lock path
lock_path = /var/lib/nova/tmp			# the lock serializes the steps of instance creation: each step must finish before the next one starts, so operations cannot run in parallel

[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]

[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://ct:5000/v3
username = placement
password = 123456

[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]						# if this section is misconfigured, the instance console cannot be reached
enabled = true		
server_listen =  $my_ip				# address vnc listens on
server_proxyclient_address =  $my_ip			# the proxy client address is this host's management network address

[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

[placement_database]
connection = mysql+pymysql://placement:123456@ct/placement

Populate the nova_api database

[root@ct ~] su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database

Nova internally partitions resources into cells and assigns compute nodes to them; OpenStack uses cells to group compute nodes logically.

[root@ct ~] su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
#create the cell1 cell
[root@ct ~] su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
#populate the nova database; check the logs under /var/log/nova/ to confirm it succeeded
[root@ct ~] su -s /bin/sh -c "nova-manage db sync" nova
#verify that cell0 and cell1 registered correctly
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

Start the Nova services

[root@ct ~] systemctl enable openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy
[root@ct ~] systemctl restart openstack-nova-api openstack-nova-scheduler openstack-nova-conductor openstack-nova-novncproxy

Check the Nova service ports and test them with curl

[root@ct ~] netstat -tnlup|egrep '8774|8775'
[root@ct ~] curl http://ct:8774

Compute node

Install and configure the components

[root@c2 ~]yum install openstack-nova-compute -y
已加载插件:fastestmirror
Loading mirror speeds from cached hostfile
 * base: 
 * centos-ceph-nautilus: 
 * centos-nfs-ganesha28: 
 * centos-openstack-train: 
 * centos-qemu-ev: 
 * extras: 
 * updates: 
file:///mnt/repodata/: [Errno 14] curl#37 - "Couldn't open file /mnt/repodata/"
正在尝试其它镜像。
软件包 1:openstack-nova-compute-20.6.0-1. 已安装并且是最新版本
无须任何处理
[root@c2 ~]

Edit /etc/nova/nova.conf and complete the following steps

cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:123456@ct
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.16.100.252 				# set to this node's internal (VM network) IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password 123456
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address ' $my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://172.16.100.254:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://ct:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://ct:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password 123456
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu


#the resulting configuration file:
[root@c2 ~] cd /etc/nova/
[root@c2 nova] cat nova.conf

[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:123456@ct
my_ip = 172.16.100.252
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[api_database]
[barbican]
[cache]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]

[glance]
api_servers = http://ct:9292

[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]

[keystone_authtoken]
auth_url = http://ct:5000/v3
memcached_servers = ct:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 123456

[libvirt]
virt_type = qemu

[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[pci]

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://ct:5000/v3
username = placement
password = 123456

[powervm]
[privsep]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address =  $my_ip
novncproxy_base_url = http://172.16.100.254:6080/vnc_auto.html			# note: this URL must be added by hand, otherwise the instance consoles cannot be reached from the dashboard after deployment

[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]

Start the services and enable them at boot

 systemctl enable libvirtd.service openstack-nova-compute.service
 systemctl restart libvirtd.service openstack-nova-compute.service

【Operations on the controller node】

Check whether the compute node has registered itself with the controller (registration happens over the message queue); run this on the controller (ct) node:

[root@ct ~] openstack compute service list --service nova-compute

Scan the current OpenStack deployment for available compute nodes; discovered nodes are added to a cell, and instances can then be created in that cell. In effect this is how OpenStack groups compute nodes internally, assigning them to different cells.

[root@ct ~] vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300			#scan every 300 seconds

[root@ct ~] systemctl restart openstack-nova-api.service
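If you do not want to wait for the periodic scan, the cell mapping can also be refreshed by hand with the standard nova-manage tool (a small optional sketch of my own, run on the controller):

[root@ct ~] su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova     # registers any newly started nova-compute services into cell1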

Verify the compute node services

#Check that the nova services are all healthy and that the compute service registered successfully
[root@ct ~] openstack compute service list


#Check that each component's API is reachable
[root@ct ~] openstack catalog list


#Check that images can be retrieved
[root@ct ~] openstack image list


#Check that the cell API and the placement API are working; if either one is broken, instances cannot be created later
[root@ct ~] nova-status upgrade check

#Check whether the ct node supports hardware acceleration for virtual machines
[root@ct ~] egrep -c '(vmx|svm)' /proc/cpuinfo
0

If this command returns one or more, your compute node supports hardware acceleration and usually no extra configuration is needed. If it returns 0, your compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM, by editing the [libvirt] section of /etc/nova/nova.conf on the node(s) running nova-compute.

(Recommendation: leave hardware virtualization disabled in VMware and set qemu in the configuration file. This depends on your own machine; mine only works with qemu.)

[libvirt]
# ...
virt_type = qemu
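If you prefer not to edit the file by hand, the same change can be applied with openstack-config, mirroring the commands used elsewhere in this guide, and then the compute service restarted (a sketch; adjust the node as appropriate):

openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu      # force QEMU emulation instead of KVM
systemctl restart openstack-nova-compute.service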

2.7.6、Install the neutron service

Create the neutron database and grant privileges on it

mysql -u root -p123456
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';
flush privileges;
exit
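Optionally, confirm that the grants took effect by logging in as the new neutron user (a quick sanity check of my own, not part of the original steps):

mysql -u neutron -p123456 -e "SHOW GRANTS;"      # should list ALL PRIVILEGES on neutron.*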

Create the neutron user, which keystone will use for authentication

openstack user create --domain default --password 123456 neutron

Add the neutron user to the service project with the admin role

 openstack role add --project service --user neutron admin

Create the network service entity, with service type network

openstack service create --name neutron --description "OpenStack Networking" network

Register the API endpoints for the neutron service, i.e. associate the service with its ports by adding endpoints

 openstack endpoint create --region RegionOne network public http://ct:9696
 openstack endpoint create --region RegionOne network internal http://ct:9696	
 openstack endpoint create --region RegionOne network admin http://ct:9696
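To confirm that the three endpoints were registered, they can be listed back (an optional check; the --service filter is part of the standard openstack CLI):

openstack endpoint list --service network      # should show public, internal and admin endpoints on http://ct:9696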

Install the provider network (bridged) packages

 yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables conntrack-tools

Edit the main configuration file

cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:123456@ct/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set  /etc/neutron/neutron.conf DEFAULT allow_overlapping_ips true
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:123456@ct
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://ct:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://ct:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password 123456
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp
openstack-config --set  /etc/neutron/neutron.conf nova auth_url http://ct:5000
openstack-config --set  /etc/neutron/neutron.conf nova auth_type password
openstack-config --set  /etc/neutron/neutron.conf nova project_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova user_domain_name default
openstack-config --set  /etc/neutron/neutron.conf nova region_name RegionOne
openstack-config --set  /etc/neutron/neutron.conf nova project_name service
openstack-config --set  /etc/neutron/neutron.conf nova username nova
openstack-config --set  /etc/neutron/neutron.conf nova password 123456

[root@ct neutron] cat neutron.conf
[DEFAULT]
core_plugin = ml2						#enable the layer-2 network plugin
service_plugins = router					#enable the layer-3 (router) plugin
allow_overlapping_ips = true
transport_url = rabbit://openstack:123456@ct		#rabbitmq connection
auth_strategy = keystone					#authentication method: keystone
notify_nova_on_port_status_changes = true			#notify the compute nodes when a port's status changes
notify_nova_on_port_data_changes = true			#notify the compute nodes when a port's data changes
[cors]
[database]						#database connection
connection = mysql+pymysql://neutron:123456@ct/neutron
[keystone_authtoken]					#keystone authentication settings
www_authenticate_uri = http://ct:5000
auth_url = http://ct:5000
memcached_servers = ct:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
[oslo_concurrency]						#lock path
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]
[nova]							#neutron needs to send data back to nova
auth_url = http://ct:5000					#authenticate nova against keystone
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova						#nova's username and password are used to validate nova's token against keystone
password = 123456

Edit the ML2 plugin configuration file ml2_conf.ini

cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers  flat,vlan,vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types vxlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers  linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers  port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks  provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges 1:1000
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset  true

[DEFAULT]

[ml2]
type_drivers = flat,vlan,vxlan				#type drivers: flat (bridged) and vlan; lets layer 2 support bridging and vlan-based sub-network partitioning
tenant_network_types = vxlan				#tenant network type (vxlan)
mechanism_drivers = linuxbridge,l2population		#enable the Linuxbridge and l2population mechanisms (l2population simplifies the communication topology and reduces broadcast traffic)
extension_drivers = port_security			#enable the port security extension driver, which implements access control with iptables; note that extended security groups impose port restrictions that can prevent some services from starting

[ml2_type_flat]
flat_networks = provider				#the public virtual network is a flat network

[ml2_type_vxlan]
vni_ranges = 1:1000				#VNI range recognised for private VXLAN networks

[securitygroup]
enable_ipset = true					#enable ipset to make security groups more convenient
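A quick way to double-check that the --set commands above really landed in ml2_conf.ini is to read a couple of keys back (an optional sketch using the same openstack-config tool):

openstack-config --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers         # expect flat,vlan,vxlan
openstack-config --get /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_vxlan vni_ranges  # expect 1:1000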

Edit the linux bridge network provider configuration file linuxbridge_agent.ini

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:ens37   ## NIC name
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 172.16.100.254  ## controller node IP
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Adjust kernel parameters

echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf
modprobe br_netfilter	#load the kernel module that provides these parameters
sysctl -p
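To make the br_netfilter module load automatically after a reboot and to confirm the values are active, something like the following can be used (my own addition; the modules-load.d file name is an arbitrary choice):

echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should print 1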

Configure the Linuxbridge interface driver and the external network bridge

cp -a /etc/neutron/l3_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini
openstack-config --set /etc/neutron/l3_agent.ini DEFAULT interface_driver linuxbridge

cat l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge

Edit the dhcp_agent configuration file

 cp -a /etc/neutron/dhcp_agent.ini{,.bak}
 grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini 
 openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
 openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
 openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true
 
[root@ct neutron] cat dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge	#default interface driver: linux bridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq	#DHCP driver
enable_isolated_metadata = true			#enable isolated metadata

Configure the metadata agent (shared settings for the bridged and self-service networks)

 cp -a /etc/neutron/metadata_agent.ini{,.bak}
 grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
 openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host ct
 openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET
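METADATA_SECRET here is just a literal shared string; whatever value you use must match metadata_proxy_shared_secret in nova.conf (set below). If you would rather use a random value instead of the placeholder, one can be generated first (optional, my own suggestion):

openssl rand -hex 10      # print a random secret, then use it in both metadata_agent.ini and nova.conf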

Edit the nova configuration file so that nova can interact with neutron

openstack-config --set /etc/nova/nova.conf neutron url http://ct:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://ct:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password 123456
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET

Create a symbolic link for the ML2 plugin file

The networking service initialisation scripts expect /etc/neutron/plugin.ini to be a symbolic link pointing at the ML2 plugin configuration file

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

Initialise the database

 su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
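If the migration finishes without errors, the neutron database should now contain its tables; an optional sanity check of my own:

mysql -u neutron -p123456 -e 'USE neutron; SHOW TABLES;' | head      # a non-empty table list means the schema was created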

Restart the Compute API (nova-api) service

systemctl restart openstack-nova-api.service

Start the neutron services, enable them at boot, and check the listening port

systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service

netstat -anutp |grep 9696
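Besides netstat, a quick reachability test is to query the neutron API directly; it should answer with a small JSON document listing the API versions (optional check):

curl http://ct:9696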

Because the layer-3 (L3) network service was configured, the L3 agent also needs to be started

systemctl restart neutron-l3-agent.service
systemctl enable neutron-l3-agent.service

Compute node

ipset: an iptables extension that lets rules match against sets of addresses rather than a single IP

yum -y install openstack-neutron-linuxbridge ebtables ipset conntrack-tools

Edit the neutron main configuration file

cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:123456@ct
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://ct:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://ct:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers ct:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password 123456
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

[root@c1 neutron] cat neutron.conf

[DEFAULT]					#the neutron server and its agents also communicate through rabbitmq
transport_url = rabbit://openstack:123456@ct
auth_strategy = keystone				#authentication strategy: keystone
[cors]
[database]

[keystone_authtoken]				#keystone authentication settings
www_authenticate_uri = http://ct:5000
auth_url = http://ct:5000
memcached_servers = ct:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456

[oslo_concurrency]					#lock path (thread-management library)
lock_path = /var/lib/neutron/tmp

[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_middleware]
[oslo_policy]
[privsep]
[ssl]

Configure the Linux bridge agent

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings  provider:ens37
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan local_ip 172.16.100.252
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan l2_population true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group  true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver  neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

[root@c1 ml2] cat linuxbridge_agent.ini
[DEFAULT]
[linux_bridge]
physical_interface_mappings = provider:ens37

[vxlan]
enable_vxlan = true							#enable VXLAN networking
local_ip = 172.16.100.252			
l2_population = true						#L2 population improves the scalability of VXLAN networks

[securitygroup]
enable_security_group = true						#enable security groups
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver	#security group firewall driver

Adjust kernel parameters

echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf		#allow traffic from the virtual machines to pass out through the physical host
echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf
modprobe br_netfilter		#modprobe loads a module into the kernel (modprobe -r removes it)
sysctl -p

Edit the nova configuration file on the compute node

openstack-config --set /etc/nova/nova.conf neutron auth_url http://ct:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password 123456
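The original text does not show it here, but per the upstream Train install guide the compute node's services need to be (re)started after this step for the changes to take effect (my addition, using the standard RDO service names):

systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service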

Verify the service components [on the ct node]

openstack extension list --network
openstack network agent list
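If the agent list gets long, the output can be narrowed to a single host to confirm that the compute node's linuxbridge agent is alive (optional; replace c2 with your compute node's hostname):

openstack network agent list --host c2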

2.7.7、Dashboard service

yum -y install openstack-dashboard

Edit the dashboard's local_settings configuration file

[root@ct ~] cd /etc/openstack-dashboard/
[root@ct openstack-dashboard] ls
cinder_policy.json  keystone_policy.json  neutron_policy.json  nova_policy.json
glance_policy.json  local_settings        nova_policy.d

Now edit the file:

[root@ct openstack-dashboard] vim local_settings 

39 ALLOWED_HOSTS = ['*']
94 CACHES = {
95     'default': {
96         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
97         'LOCATION': 'ct:11211',
98     },
99 }
104 SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
118 OPENSTACK_HOST = "ct"
119 OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
120 OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True                 //add the following lines
121 OPENSTACK_API_VERSIONS = {
122     "identity": 3,
123     "image": 2,
124     "volume": 3,
125 }
126 OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
127 OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
135     'enable_fip_topology_check': False,   //change to False
//add the following 3 lines after line 136
137     'enable_lb': False,
138     'enable_firewall': False,
139     'enable_vpn': False,
158 TIME_ZONE = "Asia/Shanghai"   //change the time zone to Shanghai
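Before restarting Apache it can save time to make sure local_settings still parses as valid Python after the edits; a minimal syntax check of my own (it only compiles the file, it does not execute it):

python -c "compile(open('/etc/openstack-dashboard/local_settings').read(), 'local_settings', 'exec')"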


## Notes:
import os								//import a Python module
from django.utils.translation import ugettext_lazy as _
from openstack_dashboard.settings import HORIZON_CONFIG
DEBUG = False							//debugging disabled
ALLOWED_HOSTS = ['*']						//only the domains/IPs listed here may access the dashboard;
								['*'] means any host is allowed
LOCAL_PATH = '/tmp'
SECRET_KEY='f8ac039815265a99b64f'
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'		//session engine
CACHES = {							
    'default': {
         'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
         'LOCATION': 'ct:11211',	//memcache address and port
    }
}
//the settings above store session data in memcache; sessions can also be stored in other back ends
EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend'
OPENSTACK_HOST = "ct"	
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST	
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True			//enable multi-domain support in the dashboard
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
//OpenStack API version configuration
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"


OPENSTACK_NEUTRON_NETWORK = {	
    'enable_auto_allocated_network': False,
    'enable_distributed_router': False,
    'enable_fip_topology_check': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_ipv6': True,
    'enable_quotas': True,
    'enable_rbac_policy': True,
    'enable_router': True,
    'default_dns_nameservers': [],
    'supported_provider_types': ['*'],
    'segmentation_id_range': {},
    'extra_provider_types': {},
    'supported_vnic_types': ['*'],
    'physical_networks': [],
}
//defines which network types are available; ['*'] means all provider types are supported

TIME_ZONE = "Asia/Shanghai"

Regenerate the Apache configuration and restart the service (note: the dashboard re-copies its code files, so restarting Apache is relatively slow)

[root@ct ~] cd /usr/share/openstack-dashboard
[root@ct openstack-dashboard] python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
[root@ct openstack-dashboard] systemctl enable httpd.service
[root@ct openstack-dashboard] systemctl restart httpd.service

Open a browser and enter "http://172.16.100.254" (the controller/ct node IP) in the address bar to reach the Dashboard login page. Log in with: domain default, user admin, password 123456 (as defined in ~/.bashrc).

Log files: the openstack_dashboard logs under /var/log/httpd/

OpenStack dashboard warning: "... no policy rules for the 'identity' service"

[root@ct] cd /usr/share/openstack-dashboard
[root@ct openstack-dashboard] python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf
[root@ct openstack-dashboard] ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf
[root@ct openstack-dashboard] systemctl restart 
[root@ct openstack-dashboard] systemctl restart httpd

Connecting again then produced another error

[Tue Jun 07 19:05:13.658474 2022] [:error] [pid 15556] [client 172.16.100.1:55873] Daemon process called 'keystone-public' cannot be accessed by this WSGI application: /usr/bin/keystone-wsgi-public, referer: http://172.16.100.254/project/routers/

#The dashboard itself opens normally; the corresponding log message is:

[Wed Feb 26 20:30:04.788847 2020] [:error] [pid 29353] [client 192.168.31.1:64330] Daemon process called 'keystone-public' cannot be accessed by this WSGI application: /usr/bin/keystone-wsgi-public, referer: http://192.168.31.200/project/

Cause

#The dashboard path mapping is wrong. Solution:

#Regenerate the dashboard configuration (the official docs do not mention this; if it was already done during deployment it does not need to be repeated)

cd /usr/share/openstack-dashboard
python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf

#Logging in to the dashboard would otherwise show permission errors and a broken layout, so create a symbolic link for the policy files (if this was already done during deployment it does not need to be repeated)

ln -s /etc/openstack-dashboard /usr/share/openstack-dashboard/openstack_dashboard/conf

Add a web-root setting at the bottom of local_settings

#vim /etc/openstack-dashboard/local_settings

WEBROOT = '/dashboard/'

vim /etc/httpd/conf.d/openstack-dashboard.conf

#Comment out the original directives and add the following
#WSGIScriptAlias / /usr/share/openstack-dashboard/openstack_dashboard/
#Alias /static /usr/share/openstack-dashboard/static
WSGIScriptAlias /dashboard /usr/share/openstack-dashboard/openstack_dashboard/wsgi/
Alias /dashboard/static /usr/share/openstack-dashboard/static

Restart httpd and memcached

systemctl restart httpd
systemctl restart memcached