
Deploying a Multi-Node OpenStack Enterprise Private Cloud Platform

Basic environment

  • Three virtual machines, each with at least 4 GB of RAM and 2 vCPUs, with virtualization enabled

  • Set the hostnames

    hostnamectl set-hostname controller   # on the controller node

    hostnamectl set-hostname compute01    # on the compute01 node

    hostnamectl set-hostname block01      # on the block01 node

  • Disable NetworkManager

    systemctl stop NetworkManager

    systemctl disable NetworkManager

  • Disable the firewall

    systemctl stop firewalld

    systemctl disable firewalld

  • Disable SELinux

    sed -i "s/.*SELINUX=.*/SELINUX=disabled/g" /etc/selinux/config

  • Time synchronization with chrony (a peer-configuration sketch follows at the end of this list)

    yum -y install chrony

    systemctl start chronyd

    chronyc sources -v

  • Host mapping

    cat >> /etc/hosts << EOF

    172.16.10.10 controller

    172.16.10.11 compute01

    172.16.10.12 block01

    EOF
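
  • The commands above only install and start chronyd. For the three nodes to actually stay in sync, compute01 and block01 are typically pointed at the controller as their time source. A minimal sketch, assuming the controller serves time to 172.16.10.0/24 (adjust the subnet and any upstream servers to your environment):

    # On controller: allow the other nodes to sync from it
    echo "allow 172.16.10.0/24" >> /etc/chrony.conf
    systemctl enable chronyd && systemctl restart chronyd

    # On compute01 and block01: use the controller as the only NTP source
    sed -i 's/^server /#server /' /etc/chrony.conf
    echo "server controller iburst" >> /etc/chrony.conf
    systemctl enable chronyd && systemctl restart chronyd

    # Verify on each node that hostnames resolve and the sources are reachable
    ping -c 2 controller
    chronyc sources -v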

1. Install the yum repositories

  • Run on the controller and compute01 nodes

yum -y install centos-release-openstack-train
yum -y install python-openstackclient
yum -y install openstack-selinux
yum -y install openstack-utils

2. Deploy the base environment

  • Run on the controller node

2.1 Deploy the database

2.1.1 Install the database

yum -y install mariadb mariadb-server python2-PyMySQL

2.1.2 Edit the configuration file /etc/my.cnf.d/openstack.cnf

[root@controller ~]# vim /etc/my.cnf.d/openstack.cnf
[mysqld]
# Bind to the controller node's IP
bind-address = 172.16.10.10
# Default storage engine
default-storage-engine = innodb
# One tablespace file per table
innodb_file_per_table = on
# Maximum number of connections
max_connections = 4096
# Default character set
collation-server = utf8_general_ci
character-set-server = utf8

2.1.3 Start the MariaDB service

systemctl start mariadb
systemctl enable mariadb

2.1.4 Secure the MySQL installation

[root@controller ~]# mysql_secure_installation  ## secure MySQL and set the root login password to 123456
NOTE: RUNNING ALL PARTS OF THIS SCRIPT IS RECOMMENDED FOR ALL MariaDB
      SERVERS IN PRODUCTION USE!  PLEASE READ EACH STEP CAREFULLY!

In order to log into MariaDB to secure it, we'll need the current
password for the root user.  If you've just installed MariaDB, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none):   ## just press Enter
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MariaDB
root user without the proper authorisation.

Set root password? [Y/n] Y
New password:            ## set the password to 123456
Re-enter new password:   ## repeat the password 123456
Password updated successfully!
Reloading privilege tables..
 ... Success!


By default, a MariaDB installation has an anonymous user, allowing anyone
to log into MariaDB without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] Y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] n
 ... skipping.

By default, MariaDB comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] Y

 - Dropping test database...
   ... Success!
 - Removing privileges on test database...
   ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] Y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MariaDB
installation should now be secure.

Thanks for using MariaDB!

2.2 Deploy RabbitMQ

2.2.1 Install and start RabbitMQ

[root@controller ~]# yum -y install rabbitmq-server
[root@controller ~]# systemctl start rabbitmq-server
[root@controller ~]# systemctl enable rabbitmq-server

2.2.2 Create a user and grant permissions

[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS  ## create the user and password
Creating user "openstack"
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"  ## grant full permissions
Setting permissions for user "openstack" in vhost "/"
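
To confirm that the user and its permissions took effect, list them back:

[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions -p /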

2.3 Deploy Memcached

2.3.1 Install, configure, and start Memcached

# Install
[root@controller ~]# yum -y install memcached python-memcached
# Edit the configuration file
[root@controller ~]# sed -i "s/OPTIONS=\"-l 127.0.0.1,::1\"/OPTIONS=\"-l 127.0.0.1,::1,controller\"/g" /etc/sysconfig/memcached
# Start
[root@controller ~]# systemctl start memcached
[root@controller ~]# systemctl enable memcached
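
As a quick check, memcached should now be listening on port 11211 on both the loopback and the controller address:

[root@controller ~]# netstat -nlpt | grep 11211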

2.4 Deploy etcd

2.4.1 Install and configure etcd

# Install
[root@controller ~]# yum -y install etcd
# Configure
[root@controller ~]# mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf_bak
[root@controller ~]# vim /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://172.16.10.10:2380"
ETCD_LISTEN_CLIENT_URLS="http://172.16.10.10:2379"
ETCD_NAME="controller"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://172.16.10.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://172.16.10.10:2379"
ETCD_INITIAL_CLUSTER="controller=http://172.16.10.10:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
# Start
[root@controller ~]# systemctl start etcd
[root@controller ~]# systemctl enable etcd
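
A simple health check is to query etcd's version endpoint on the client URL configured above:

[root@controller ~]# curl http://172.16.10.10:2379/version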

3. Deploy the Keystone service

3.1 Create the database, user, and grants

mysql -uroot -p123456 -e "create database keystone;"
mysql -uroot -p123456 -e "grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'KEYSTONEDB_PASS';"
mysql -uroot -p123456 -e "grant all privileges on keystone.* to 'keystone'@'%' identified by 'KEYSTONEDB_PASS';"

3.2 Install openstack-keystone, httpd, and mod_wsgi

yum -y install openstack-keystone httpd mod_wsgi 

3.3 Edit the configuration file (openstack-config --set is used here; the effect is the same as editing with vim)

cp -a /etc/keystone/keystone.conf{,.bak}
grep -Ev "^$|#" /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
openstack-config --set /etc/keystone/keystone.conf database connection mysql+pymysql://keystone:KEYSTONEDB_PASS@controller/keystone
openstack-config --set /etc/keystone/keystone.conf token provider fernet

3.4 Initialize the database

su -s /bin/sh -c "keystone-manage db_sync" keystone

3.5 Initialize Keystone

  • Fernet keys are the secure message format used for API tokens. The commands below initialize the Fernet key repositories.

keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

3.6 Bootstrap the identity service

keystone-manage bootstrap --bootstrap-password ADMIN_PASS --bootstrap-admin-url http://controller:5000/v3/ --bootstrap-internal-url http://controller:5000/v3/ --bootstrap-public-url http://controller:5000/v3/ --bootstrap-region-id RegionOne

3.7 Configure httpd

echo "ServerName controller" >> /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
# Start
systemctl start httpd
systemctl enable httpd

3.8 Set environment variables

cat >> ~/.bashrc << EOF
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF
[root@controller ~]# source ~/.bashrc 

3.9 Create the service project and the user role

# Create the service project
[root@controller ~]# openstack project create --domain default --description "Service Project" service
# Create the user role
[root@controller ~]# openstack role create user
# List roles
[root@controller ~]# openstack role list
# Request a token to verify that authentication works
[root@controller ~]# openstack token issue

4. Deploy Glance

  • Install on the controller node

4.1 Create the Glance database, user, and grants

mysql -uroot -p123456 -e "create database glance;"
mysql -uroot -p123456 -e "grant all privileges on glance.* to 'glance'@'localhost' identified by 'GLANCE_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on glance.* to 'glance'@'%' identified by 'GLANCE_DBPASS';"

4.2 Create the user, role assignment, and service

source ~/.bashrc
openstack user create --domain default --password GLANCE_DBPASS glance
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image

4.3 Create the API endpoints

openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

4.4 Install the Glance packages

yum -y install openstack-glance

4.5 Edit the configuration files

cp -a /etc/glance/glance-api.conf{,.bak}
cp -a /etc/glance/glance-registry.conf{,.bak}
grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf
grep -Ev '^$|#' /etc/glance/glance-registry.conf.bak > /etc/glance/glance-registry.conf
openstack-config --set /etc/glance/glance-api.conf database connection mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken auth_type password
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken project_name service
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken username glance
openstack-config --set /etc/glance/glance-api.conf keystone_authtoken password GLANCE_DBPASS
openstack-config --set /etc/glance/glance-api.conf paste_deploy flavor keystone
openstack-config --set /etc/glance/glance-api.conf glance_store stores file,http
openstack-config --set /etc/glance/glance-api.conf glance_store default_store file
openstack-config --set /etc/glance/glance-api.conf glance_store filesystem_store_datadir /var/lib/glance/images/

4.6 Initialize the database

su -s /bin/sh -c "glance-manage db_sync" glance

4.7 Start the service

systemctl enable openstack-glance-api
systemctl start openstack-glance-api
# Check that the port is listening
netstat -nlpt | grep 9292

4.8 Verify
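
The upload below assumes cirros-0.5.2-x86_64-disk.img is already present in the current directory; if it is not, it can be fetched from the official CirrOS download site first:

wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img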

# Upload an image
openstack image create "cirros" --file cirros-0.5.2-x86_64-disk.img --disk-format qcow2 --container-format bare --public
# List images
openstack image list

5. Deploy Placement

5.1 Create the database, user, and grants

mysql -uroot -p123456 -e "create database placement;"
mysql -uroot -p123456 -e "grant all privileges on placement.* to 'placement'@'localhost' identified by 'PLACEMENT_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on placement.* to 'placement'@'%' identified by 'PLACEMENT_DBPASS';"

5.2 Create the user, role assignment, and service

# Create the placement user
openstack user create --domain default --password PLACEMENT_DBPASS placement
# Assign the admin role and create the service
openstack role add --project service --user placement admin
openstack service create --name placement --description "Placement API" placement

5.3 Create the endpoints

openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

5.4 Install the Placement package

yum -y install openstack-placement-api 

5.5 Edit the configuration file

cp /etc/placement/placement.conf{,.bak}
grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf
openstack-config --set /etc/placement/placement.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement

cat > /etc/placement/placement.conf << EOF
[DEFAULT]
[api]
auth_strategy = keystone
[cors]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = placement
password = PLACEMENT_DBPASS

[oslo_policy]
[placement]
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[profiler]
EOF

5.6 Initialize the database

su -s /bin/sh -c "placement-manage db sync" placement

5.7 Edit the 00-placement-api.conf configuration file

vim /etc/httpd/conf.d/00-placement-api.conf
Append the following at the end of the file:
<Directory /usr/bin>
    <IfVersion >= 2.4>
        Require all granted
    </IfVersion>
    <IfVersion < 2.4>
        Order allow,deny
        Allow from all
    </IfVersion>
</Directory>

5.8 Restart httpd

systemctl restart httpd

5.9 Check the status

placement-status upgrade check
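
Optionally, hit the Placement API endpoint directly; a healthy service answers with a small JSON document listing the supported API versions:

curl http://controller:8778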

6. Deploy Nova

  • On the controller node

6.1 Create the databases and grants

mysql -uroot -p123456 -e "create database nova_api;"
mysql -uroot -p123456 -e "create database nova;"
mysql -uroot -p123456 -e "create database nova_cell0;"
mysql -uroot -p123456 -e "grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on nova_api.* to 'nova'@'%' identified by 'NOVA_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on nova.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on nova.* to 'nova'@'%' identified by 'NOVA_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'NOVA_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'NOVA_DBPASS';"

6.2 Create the user, role assignment, and service

# Create the nova user
openstack user create --domain default --password NOVA_DBPASS nova
# Assign the admin role and create the service
openstack role add --project service --user nova admin
openstack service create --name nova --description "OpenStack Compute" compute

6.3 Create the endpoints

openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

6.4 Install Nova

yum -y install openstack-nova-api openstack-nova-conductor openstack-nova-novncproxy openstack-nova-scheduler

6.5 Edit the configuration file

cp -a /etc/nova/nova.conf{,.bak}
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.16.10.10  ## note: this is the controller's IP
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf api_database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
openstack-config --set /etc/nova/nova.conf database connection mysql+pymysql://nova:NOVA_DBPASS@controller/nova
openstack-config --set /etc/nova/nova.conf placement_database connection mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone 
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_DBPASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen ' $my_ip'
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address ' $my_ip'
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_DBPASS

6.6 Initialize the databases

su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova

6.7 Start the services

systemctl enable openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service

6.8 Configure Nova on the compute01 node

  • Run on the compute01 node

6.8.1 Install openstack-nova-compute

yum -y install openstack-nova-compute

6.8.2 Edit the configuration file

cp -a /etc/nova/nova.conf{,.bak}    
grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf
openstack-config --set /etc/nova/nova.conf DEFAULT enabled_apis osapi_compute,metadata
openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.16.10.11   ## set to compute01's IP address
openstack-config --set /etc/nova/nova.conf DEFAULT use_neutron true
openstack-config --set /etc/nova/nova.conf DEFAULT firewall_driver nova.virt.firewall.NoopFirewallDriver
openstack-config --set /etc/nova/nova.conf api auth_strategy keystone
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/nova/nova.conf keystone_authtoken auth_type password
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken user_domain_name Default
openstack-config --set /etc/nova/nova.conf keystone_authtoken project_name service
openstack-config --set /etc/nova/nova.conf keystone_authtoken username nova
openstack-config --set /etc/nova/nova.conf keystone_authtoken password NOVA_DBPASS
openstack-config --set /etc/nova/nova.conf vnc enabled true
openstack-config --set /etc/nova/nova.conf vnc server_listen 0.0.0.0
openstack-config --set /etc/nova/nova.conf vnc server_proxyclient_address ' $my_ip'
openstack-config --set /etc/nova/nova.conf vnc novncproxy_base_url http://controller:6080/vnc_auto.html
openstack-config --set /etc/nova/nova.conf glance api_servers http://controller:9292
openstack-config --set /etc/nova/nova.conf oslo_concurrency lock_path /var/lib/nova/tmp
openstack-config --set /etc/nova/nova.conf placement region_name RegionOne
openstack-config --set /etc/nova/nova.conf placement project_domain_name Default
openstack-config --set /etc/nova/nova.conf placement project_name service
openstack-config --set /etc/nova/nova.conf placement auth_type password
openstack-config --set /etc/nova/nova.conf placement user_domain_name Default
openstack-config --set /etc/nova/nova.conf placement auth_url http://controller:5000/v3
openstack-config --set /etc/nova/nova.conf placement username placement
openstack-config --set /etc/nova/nova.conf placement password PLACEMENT_DBPASS
openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu

# Check whether the host supports hardware acceleration for virtual machines
egrep -c '(vmx|svm)' /proc/cpuinfo
# If this returns 0, the compute node does not support hardware acceleration and libvirt must be configured to use QEMU instead of KVM; edit the [libvirt] section of /etc/nova/nova.conf as follows
vi /etc/nova/nova.conf
[libvirt]
virt_type = qemu

6.8.3 Start the services

systemctl start libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service

6.9 Add the compute node

  • Run on the controller node

# Restart the Nova services
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# Confirm the compute node is registered
openstack compute service list --service nova-compute
# Discover the compute node and add it to the cell database
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
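
After discovery, the Nova status tool gives a quick sanity check that the API database, cells, and Placement are wired up correctly:

nova-status upgrade check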

7. Deploy Neutron

  • On the controller node

7.1 Create the database and grants

mysql -uroot -p123456 -e "create database neutron;"
mysql -uroot -p123456 -e "grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'NEUTRON_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on neutron.* to 'neutron'@'%' identified by 'NEUTRON_DBPASS';"

7.2 Create the user, role assignment, and service

# Create the neutron user
openstack user create --domain default --password NEUTRON_DBPASS neutron
# Assign the admin role and create the service
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network

7.3 Create the endpoints

openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

7.4 Install Neutron

yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

7.5 Edit the configuration file

cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf database connection mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
openstack-config --set /etc/neutron/neutron.conf DEFAULT core_plugin ml2
openstack-config --set /etc/neutron/neutron.conf DEFAULT service_plugins router
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_status_changes true
openstack-config --set /etc/neutron/neutron.conf DEFAULT notify_nova_on_port_data_changes true
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_DBPASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

7.6 Add the following to /etc/neutron/neutron.conf

vim /etc/neutron/neutron.conf
Append the following at the end of the file:
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_DBPASS

7.7 Edit the ML2 plugin configuration file ml2_conf.ini

cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 type_drivers flat,vlan
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 tenant_network_types ""  ## left empty: only provider (flat) networks are used
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 mechanism_drivers linuxbridge,l2population
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2 extension_drivers port_security
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini ml2_type_flat flat_networks provider
openstack-config --set /etc/neutron/plugins/ml2/ml2_conf.ini securitygroup enable_ipset true

7.8 Edit the Linux bridge agent configuration file linuxbridge_agent.ini

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33  ## ens33 is the host's external NIC
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

mv /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini_bak
cat > /etc/neutron/l3_agent.ini << EOF
[DEFAULT]
interface_driver = linuxbridge
EOF

7.9 Adjust kernel parameters

echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf

modprobe br_netfilter
sysctl -p
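
modprobe only loads br_netfilter for the current boot. To keep the two bridge sysctls effective after a reboot, have the module loaded automatically at boot (the same applies on compute01 in section 7.17.4):

echo 'br_netfilter' > /etc/modules-load.d/br_netfilter.conf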

7.10 Configure dhcp_agent.ini

cp -a /etc/neutron/dhcp_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT interface_driver linuxbridge
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT dhcp_driver neutron.agent.linux.dhcp.Dnsmasq
openstack-config --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata true

7.11 Configure the metadata agent, used to communicate with Nova

cp -a /etc/neutron/metadata_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT nova_metadata_host controller
openstack-config --set /etc/neutron/metadata_agent.ini DEFAULT metadata_proxy_shared_secret METADATA_SECRET

7.12 Edit the Nova configuration file so Nova can interact with Neutron

openstack-config --set /etc/nova/nova.conf neutron url http://controller:9696
openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_DBPASS
openstack-config --set /etc/nova/nova.conf neutron service_metadata_proxy true
openstack-config --set /etc/nova/nova.conf neutron metadata_proxy_shared_secret METADATA_SECRET

7.13 Create a symbolic link

ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

7.14 Sync the database

su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

7.15 Restart the Nova service

systemctl restart openstack-nova-api.service

7.16 Start the Neutron services

systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
netstat -napt |grep 9696  ## check that the port is listening

7.17 Deploy Neutron on compute01

  • Run on compute01

7.17.1 Install the packages

yum -y install openstack-neutron-linuxbridge ebtables ipset

7.17.2 Edit the configuration file /etc/neutron/neutron.conf

cp -a /etc/neutron/neutron.conf{,.bak}
grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf
openstack-config --set /etc/neutron/neutron.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/neutron/neutron.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken auth_type password
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken project_name service
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken username neutron
openstack-config --set /etc/neutron/neutron.conf keystone_authtoken password NEUTRON_DBPASS
openstack-config --set /etc/neutron/neutron.conf oslo_concurrency lock_path /var/lib/neutron/tmp

7.17.3 Edit the configuration file linuxbridge_agent.ini

cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini linux_bridge physical_interface_mappings provider:ens33 ## change ens33 to the name of the external NIC
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini vxlan enable_vxlan false
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup enable_security_group true
openstack-config --set /etc/neutron/plugins/ml2/linuxbridge_agent.ini securitygroup firewall_driver neutron.agent.linux.iptables_firewall.IptablesFirewallDriver 

7.17.4 Adjust kernel parameters

echo 'net.bridge.bridge-nf-call-iptables=1' >> /etc/sysctl.conf
echo 'net.bridge.bridge-nf-call-ip6tables=1' >> /etc/sysctl.conf

modprobe br_netfilter
sysctl -p

7.17.5 Edit the configuration file /etc/nova/nova.conf

openstack-config --set /etc/nova/nova.conf neutron auth_url http://controller:5000
openstack-config --set /etc/nova/nova.conf neutron auth_type password
openstack-config --set /etc/nova/nova.conf neutron project_domain_name default
openstack-config --set /etc/nova/nova.conf neutron user_domain_name default
openstack-config --set /etc/nova/nova.conf neutron region_name RegionOne
openstack-config --set /etc/nova/nova.conf neutron project_name service
openstack-config --set /etc/nova/nova.conf neutron username neutron
openstack-config --set /etc/nova/nova.conf neutron password NEUTRON_DBPASS

7.17.6 Restart the Nova service

systemctl restart openstack-nova-compute.service 

7.17.7 Start the Neutron service

systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service

7.18 Verify the Neutron services

  • Run the following on the controller node to verify the Neutron services

openstack extension list --network
openstack network agent list

8. Deploy the Dashboard

  • Run on the compute01 node

8.1 Install openstack-dashboard and httpd

yum -y install openstack-dashboard httpd

8.2 Edit the configuration file

# Upload a prepared local_settings configuration file (key settings are sketched below)
cat local_settings > /etc/openstack-dashboard/local_settings
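
The prepared local_settings file is not shown in this post. For reference, these are the settings the Train installation guide typically adjusts in /etc/openstack-dashboard/local_settings; treat this as a sketch and adapt the host name, Keystone URL, time zone, and network options to your own environment:

OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
# Only provider (flat) networks are used in this deployment
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"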

8.3 Regenerate openstack-dashboard.conf and restart the Apache service (the dashboard recopies its code files, so the Apache restart can be slow)

cd /usr/share/openstack-dashboard
python manage.py make_web_conf --apache > /etc/httpd/conf.d/openstack-dashboard.conf

# Restart the httpd service
systemctl enable httpd.service
systemctl restart httpd.service

# Restart the memcached service on the controller node
systemctl restart memcached.service
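
The dashboard should now be reachable over HTTP on the compute01 node; log in with domain Default, user admin, and the ADMIN_PASS password set during the Keystone bootstrap. A quick reachability check (assuming make_web_conf above published the dashboard at the web root):

curl -s -o /dev/null -w "%{http_code}\n" http://compute01/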

9. Deploy Cinder

  • Run on the controller node

9.1 Create the database and grants

mysql -uroot -p123456 -e "create database cinder;"
mysql -uroot -p123456 -e "grant all privileges on cinder.* to 'cinder'@'localhost' identified by 'CINDER_DBPASS';"
mysql -uroot -p123456 -e "grant all privileges on cinder.* to 'cinder'@'%' identified by 'CINDER_DBPASS';"

9.2 Create the Cinder service credentials and endpoints

openstack user create --domain default --password CINDER_DBPASS cinder
openstack role add --project service --user cinder admin
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s
openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

9.3 Install the Cinder packages

yum -y install openstack-cinder

9.4 Edit the configuration file

cp /etc/cinder/cinder.conf{,.bak}
grep -Ev '#|^$' /etc/cinder/cinder.conf.bak>/etc/cinder/cinder.conf
openstack-config --set /etc/cinder/cinder.conf database connection mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
openstack-config --set /etc/cinder/cinder.conf DEFAULT transport_url rabbit://openstack:RABBIT_PASS@controller
openstack-config --set /etc/cinder/cinder.conf DEFAULT auth_strategy keystone
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken www_authenticate_uri http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_url http://controller:5000
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken memcached_servers controller:11211
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken auth_type password
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken user_domain_name default
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken project_name service
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken username cinder
openstack-config --set /etc/cinder/cinder.conf keystone_authtoken password CINDER_DBPASS
openstack-config --set /etc/cinder/cinder.conf DEFAULT my_ip 172.16.10.10 ## set to the controller's IP
openstack-config --set /etc/cinder/cinder.conf oslo_concurrency lock_path /var/lib/cinder/tmp

9.5 Sync the Cinder database

su -s /bin/sh -c "cinder-manage db sync" cinder

9.6 Configure Nova

openstack-config --set /etc/nova/nova.conf cinder os_region_name RegionOne

9.7 Restart the Nova service and start the Cinder services

systemctl restart openstack-nova-api.service
# Start the Cinder services
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

9.8 Deploy the Cinder volume service and create the volume group on block01

  • Run on the block01 node

9.8.1 Configure the yum repositories on block01, install the Cinder packages, and set up LVM

yum install centos-release-openstack-train -y
yum install python-openstackclient -y
yum install openstack-selinux -y
yum install openstack-cinder targetcli python-keystone -y
yum install lvm2 device-mapper-persistent-data -y

9.8.2 Start the services

systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service

9.8.3 Create the LVM physical volume and volume group

# Create the physical volume
pvcreate /dev/sdb
# Create the volume group
vgcreate cinder-volumes /dev/sdb
# Configure the LVM device filter so that only sdb is scanned
vim /etc/lvm/lvm.conf
 Add the following in the devices section (around line 142):
 filter = [ "a/sdb/", "r/.*/" ]

vim /etc/cinder/cinder.conf
Replace the entire contents of the file with the following:
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 172.16.10.12
enabled_backends = lvm
glance_api_servers = http://controller:9292
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = CINDER_DBPASS
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
target_protocol = iscsi
target_helper = lioadm
[nova]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

9.8.4 Start Cinder

systemctl enable openstack-cinder-volume.service target.service
systemctl start openstack-cinder-volume.service target.service

9.9 Verify on the controller node

  • Run on the controller node

openstack volume service list
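
If cinder-volume on block01 reports "up", exercise the full path by creating a small test volume (the name test-volume is arbitrary):

openstack volume create --size 1 test-volume
openstack volume list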

10. Create a network

# Create the network
openstack network create --share --external --provider-physical-network \
provider --provider-network-type flat vm-network
# Create a subnet; note that it must be in the same network segment as your machines
openstack subnet create --network vm-network --allocation-pool \
start=172.16.10.100,end=172.16.10.200 --dns-nameserver 172.16.10.2 \
--gateway 172.16.10.2 --subnet-range 172.16.10.0/24 vm-subnetwork

11. Create a flavor

openstack flavor create --id 0 --vcpus 1 --ram 512 --disk 1 m1.nano
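
With an image, a network, and a flavor in place, you can optionally boot a test instance to validate the whole deployment end to end. A sketch (the instance name test-vm is arbitrary; opening ICMP and SSH in the default security group just makes the VM easier to reach):

# Allow ping and SSH in the admin project's default security group
openstack security group rule create --proto icmp default
openstack security group rule create --proto tcp --dst-port 22 default
# Boot a test instance on the provider network
openstack server create --flavor m1.nano --image cirros --network vm-network test-vm
# The instance should reach ACTIVE and get an address from the 172.16.10.100-200 pool
openstack server list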