OpenStack Pike: manual distributed deployment of a stable cluster
Written on a whim, but these are the most complete deployment steps you will find online. Doing this from scratch would take you at least a week; following this guide should take 1-2 days. The pitfalls have already been cleared out. Look up the individual concepts yourself; if you follow the deployment steps exactly, they will work. Completely original content; I nearly coughed up blood writing it all down.
Related files: https://pan.baidu.com/s/1LhY74nyRtATVhkDS84OKhQ  extraction code: f56t
1. Configure static IPs (the NetworkManager service can be disabled).
2. Hostnames and /etc/hosts bindings:
vi /etc/hosts
192.168.1.11 controller   # control node
192.168.1.12 compute      # compute node
192.168.1.13 cinder       # storage node
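The bindings above can be appended idempotently, which matters because you will revisit these files more than once. A sketch of my own (the `add_host` helper is not part of the original steps), written against a temp file so it can be dry-run; on the real nodes HOSTS_FILE would be /etc/hosts:

```shell
# Idempotently add the three node entries to a hosts file.
# HOSTS_FILE points at a scratch file here for a dry run.
HOSTS_FILE=$(mktemp)

add_host() {  # add_host IP NAME - append only if NAME is not already bound
    grep -qE "[[:space:]]$2([[:space:]]|\$)" "$HOSTS_FILE" || \
        printf '%s %s\n' "$1" "$2" >> "$HOSTS_FILE"
}

add_host 192.168.1.11 controller
add_host 192.168.1.12 compute
add_host 192.168.1.13 cinder
add_host 192.168.1.11 controller   # second call is a no-op
```

Run it twice and the file still contains exactly one line per host.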
3. Disable firewalld and SELinux (set SELINUX=disabled in /etc/selinux/config and reboot, or setenforce 0 for the current session), then switch to plain iptables and flush all rules:
# systemctl stop firewalld
# systemctl disable firewalld
# yum install iptables-services -y
# systemctl restart iptables
# systemctl enable iptables
# iptables -F
# iptables -F -t nat
# iptables -F -t mangle
# iptables -F -t raw
4. Time synchronization:
1. yum install ntp
2. ntpdate <your NTP server>
5. Prepare the yum repositories on all nodes (on top of the default CentOS repos, add the following):
a. yum install yum-plugin-priorities -y
b. cd /home/openstack   # this directory must hold a few files from the Baidu netdisk share above: centos-release-openstack-pike-1-1.el7.x86_64.rpm, cirros-0.3.5-x86_64-disk.img, openstack-newton.tar
c. rpm -ivh centos-release-openstack-pike-1-1.el7.x86_64.rpm --nodeps --force
d. vi /etc/yum.repos.d/CentOS-OpenStack-pike.repo and replace
   baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/
   with
   baseurl=https://mirror.tuna.tsinghua.edu.cn/cc/7/cloud/x86_64/openstack-pike/
   Only the first two baseurl lines need replacing; the rest are unused.
e. yum repolist
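Step d can be scripted so only the first two baseurl lines change. This is my own sketch, not in the original steps, shown against a scratch file (the section names in the heredoc are placeholders; your real repo file will differ):

```shell
# Replace only the first two baseurl= lines, as step d instructs.
repo=$(mktemp)   # stand-in for /etc/yum.repos.d/CentOS-OpenStack-pike.repo
cat > "$repo" <<'EOF'
[centos-openstack-pike]
baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/
[centos-openstack-pike-test]
baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/
[centos-openstack-pike-debuginfo]
baseurl=http://mirror.centos.org/centos/7/cloud/$basearch/openstack-pike/
EOF

# awk rewrites a baseurl line only while the counter c is below 2.
awk -v new='baseurl=https://mirror.tuna.tsinghua.edu.cn/cc/7/cloud/x86_64/openstack-pike/' \
    'c < 2 && /^baseurl=/ { $0 = new; c++ } { print }' "$repo" > "$repo.new"
mv "$repo.new" "$repo"
```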
6. Install the basic OpenStack tools on all nodes:
a. yum install python-openstackclient openstack-selinux openstack-utils -y
7. Install the base packages on the compute node. From here on, every command runs on the control node unless a step explicitly says compute or cinder; keep that in mind.
yum install qemu-kvm libvirt bridge-utils -y
ln -sv /usr/libexec/qemu-kvm /usr/bin/
8. Install the supporting services on the control node:
yum install mariadb mariadb-server python2-PyMySQL -y
vim /etc/my.cnf.d/openstack.cnf   # new config file
[mysqld]
bind-address = <management-network IP of the control node>
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
Start the service: systemctl restart mariadb; systemctl enable mariadb
Initialize the root password: mysql_secure_installation   # pick something you will remember; I am using 123456 for now
9. Deploy RabbitMQ on the control node:
yum install erlang socat rabbitmq-server -y
systemctl restart rabbitmq-server
systemctl enable rabbitmq-server
netstat -ntlup | grep 5672
10. Create a RabbitMQ user for OpenStack:
rabbitmqctl list_users   # list current users
rabbitmqctl add_user openstack 123456   # user openstack, password 123456
rabbitmqctl set_user_tags openstack administrator   # make openstack an administrator
rabbitmqctl set_permissions openstack ".*" ".*" ".*"   # grant configure/write/read permissions
rabbitmq-plugins enable rabbitmq_management   # enable the rabbitmq_management plugin
netstat -ntlup | grep 15672   # confirm it is listening
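Re-running the guide should not fail on an already-existing user, so I guard the rabbitmqctl calls. The `ensure_rabbit_user` wrapper is my own sketch, not from the original steps; RABBITMQCTL is parameterized only so the logic can be exercised without a broker:

```shell
# Create the openstack user (password 123456) only if it does not exist yet.
RABBITMQCTL=${RABBITMQCTL:-rabbitmqctl}

ensure_rabbit_user() {
    # If list_users already shows "openstack", do nothing.
    $RABBITMQCTL list_users | grep -q '^openstack' && return 0
    $RABBITMQCTL add_user openstack 123456
    $RABBITMQCTL set_user_tags openstack administrator
    $RABBITMQCTL set_permissions openstack ".*" ".*" ".*"
}
```

On the controller this is just `ensure_rabbit_user`; calling it a second time is a no-op.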
11. Install memcached:
yum install memcached python-memcached -y
vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 192.168.122.11,::1"   # change this IP to the control node's internal IP
systemctl restart memcached
systemctl enable memcached
netstat -ntlup | grep :11211
12. Install the identity service, Keystone (it also works as an authorization backend for other projects later on).
a. Create the database and a dedicated keystone user:
mysql -p123456
create database keystone;
grant all on keystone.* to 'keystone'@'localhost' identified by '123456';
grant all on keystone.* to 'keystone'@'%' identified by '123456';
flush privileges;
mysql -h controller -u keystone -p123456 -e 'show databases'   # verify; any output means OK
yum install openstack-keystone httpd mod_wsgi -y   # keystone runs under httpd, which needs mod_wsgi to serve Python applications
cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak   # back up before editing
vim /etc/keystone/keystone.conf
line 405: transport_url = rabbit://openstack:123456@controller:5672   # connect to RabbitMQ
line 661: connection = mysql+pymysql://keystone:123456@controller/keystone   # connect to MariaDB
line 2774: uncomment provider = fernet
grep -n '^[a-Z]' /etc/keystone/keystone.conf   # list the changes you just made; double-check them carefully
b. Initialize the Keystone database:
mysql -h controller -u keystone -p123456 -e 'use keystone;show tables;'   # no output is correct at this point
su -s /bin/sh -c "keystone-manage db_sync" keystone   # import keystone's schema; this takes a while, do not interrupt it
# su -s gives the keystone user a usable shell, since its login shell is not /bin/bash
# su -c "..." keystone runs the command as the keystone user
mysql -h controller -u keystone -p123456 -e 'use keystone;show tables;' | wc -l   # roughly 39 tables is correct
c. Initialize Keystone's credential keys:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
# the directories credential-keys and fernet-keys appearing under /etc/keystone/ means initialization succeeded
d. Bootstrap the OpenStack admin account's API endpoints:
keystone-manage bootstrap --bootstrap-password 123456 \
--bootstrap-admin-url http://controller:35357/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
e. vi /etc/httpd/conf/httpd.conf and change line 95 to ServerName controller:80
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl restart httpd
systemctl enable httpd
netstat -ntlup | grep http   # confirm ports 5000, 80 and 35357 are up
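All the "edit line N of this .conf" steps in this guide (keystone here, then glance, nova, neutron) boil down to setting key = value inside an ini section, and the line numbers drift between package builds. A small helper avoids relying on them; `set_ini` is my own sketch, not part of the original steps, demonstrated on a scratch file shaped like keystone.conf:

```shell
# set_ini FILE SECTION KEY VALUE
# Sets (or uncomments and replaces) KEY inside [SECTION]. If the key is
# absent it is appended at the end of the section. The section header
# itself must already exist in the file.
set_ini() {
    awk -v s="[$2]" -v k="$3" -v v="$4" '
        $0 == s        { print; insec = 1; next }
        /^\[/ && insec { if (!done) { print k " = " v; done = 1 }
                         insec = 0 }
        insec && $0 ~ "^#?" k "[ =]" {
                         if (!done) { print k " = " v; done = 1 }
                         next }
                       { print }
        END            { if (insec && !done) print k " = " v }
    ' "$1" > "$1.tmp" && mv "$1.tmp" "$1"
}

# Demo on a scratch file with commented defaults:
conf=$(mktemp)
printf '[DEFAULT]\n#transport_url = <none>\n[database]\n#connection = sqlite://\n' > "$conf"
set_ini "$conf" DEFAULT transport_url 'rabbit://openstack:123456@controller:5672'
set_ini "$conf" database connection 'mysql+pymysql://keystone:123456@controller/keystone'
```

The same calls work for the later glance/nova/neutron edits; only the file, section and key change.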
13. Create the domain, projects, users, and roles.
a. vim admin-openstack.sh   # temporary environment script for the admin user
export OS_USERNAME=admin
export OS_PASSWORD=123456
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Run: source admin-openstack.sh
b. List the projects:
openstack project list
Output like this means everything is working:
+----------------------------------+-------+
| ID                               | Name  |
+----------------------------------+-------+
| e07563a130a24dad9ed862cb06857f26 | admin |
+----------------------------------+-------+
c. Create the service project: openstack project create --domain default --description "Service Project" service
d. Create the demo project: openstack project create --domain default --description "Demo Project" demo
e. Create the demo user:
openstack user create --domain default --password 123456 demo
openstack user list
f. Create the user role:
openstack role list
openstack role create user
openstack role list
openstack role add --project demo --user demo user   # add the demo user to the user role
g. Verify the previous steps:
unset OS_AUTH_URL OS_PASSWORD
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue
# when prompted for a password, enter 123456; any token output means it works
# verify with the demo user as well:
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
# when prompted for a password, enter 123456; any token output means it works
# finally, open http://<control node internal IP>:35357 in a browser; if it responds, everything so far is configured correctly
14. Create an environment script for the demo user:
vim demo-openstack.sh
export OS_USERNAME=demo
export OS_PASSWORD=123456   # must match the password the demo user was created with in step 13
export OS_PROJECT_NAME=demo
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Run:
source demo-openstack.sh
openstack token issue
15. The image service, Glance. This one matters: the OS images you choose when creating an instance come from here.
a. Create the database:
mysql -p123456
create database glance;
grant all on glance.* to 'glance'@'localhost' identified by '123456';
grant all on glance.* to 'glance'@'%' identified by '123456';
flush privileges;
mysql -h controller -u glance -p123456 -e 'show databases'
b. Set up permissions and endpoints:
source admin-openstack.sh
openstack user create --domain default --password 123456 glance
openstack user list
openstack role add --project service --user glance admin   # give the glance user the admin role in the service project
openstack service create --name glance --description "OpenStack Image" image   # create the glance service
openstack service list
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
openstack endpoint list
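Every service in this guide needs the same public/internal/admin endpoint triplet (glance here, later nova, placement, neutron), so a tiny loop saves typing and typos. The `create_endpoints` helper is my own sketch; the openstack client is parameterized as $OSC only so the loop can be exercised without a running cloud:

```shell
# create_endpoints SERVICE_TYPE URL - register the three standard endpoints.
OSC=${OSC:-openstack}

create_endpoints() {
    for iface in public internal admin; do
        $OSC endpoint create --region RegionOne "$1" "$iface" "$2"
    done
}

# On the controller: create_endpoints image http://controller:9292
```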
c. Install Glance:
yum install openstack-glance -y
cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
cp /etc/glance/glance-registry.conf /etc/glance/glance-registry.conf.bak
e. Edit the configuration files:
vim /etc/glance/glance-api.conf
line 1823: connection = mysql+pymysql://glance:123456@controller/glance
line 1943: stores = file,http   # uncomment
line 1975: default_store = file   # uncomment
line 2294: filesystem_store_datadir = /var/lib/glance/images   # uncomment
line 3283: under [keystone_authtoken], add this block:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
line 4235: flavor = keystone   # uncomment
Save and exit, then run grep -Ev '#|^$' /etc/glance/glance-api.conf to review your changes; compare them carefully.
vim /etc/glance/glance-registry.conf
line 1141: connection = mysql+pymysql://glance:123456@controller/glance
line 1234: under [keystone_authtoken], add:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = 123456
around line 2160: flavor = keystone   # uncomment
grep -Ev '#|^$' /etc/glance/glance-registry.conf   # review your changes
f. Initialize and populate the glance database:
su -s /bin/sh -c "glance-manage db_sync" glance
mysql -h controller -u glance -p123456 -e 'use glance; show tables'   # roughly 15 tables
g. Start the services:
systemctl restart openstack-glance-api
systemctl enable openstack-glance-api
systemctl restart openstack-glance-registry
systemctl enable openstack-glance-registry
netstat -ntlup | grep -E '9191|9292'   # ports 9191 and 9292 listening means this stage is healthy
h. Upload an image (ISO or img file) to Glance:
source admin-openstack.sh
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public
# this uploads a system image to glance; --public makes it available to all projects
openstack image list   # verify the upload succeeded
16. The compute service, Nova. Deploy on the control node first.
a. Create the databases:
mysql -p123456
create database nova_api;
create database nova;
create database nova_cell0;
grant all on nova_api.* to 'nova'@'localhost' identified by '123456';
grant all on nova_api.* to 'nova'@'%' identified by '123456';
grant all on nova.* to 'nova'@'localhost' identified by '123456';
grant all on nova.* to 'nova'@'%' identified by '123456';
grant all on nova_cell0.* to 'nova'@'localhost' identified by '123456';
grant all on nova_cell0.* to 'nova'@'%' identified by '123456';
flush privileges;
quit
mysql -h controller -u nova -p123456 -e 'show databases'   # verify
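The nine grant statements above are repetitive, so a loop can generate the SQL. A sketch of my own (it uses `if not exists` so a rerun is harmless, a small tweak over the original statements); it only prints the SQL, which you would pipe into mysql:

```shell
# Emit CREATE/GRANT statements for the three nova databases.
gen_nova_sql() {
    for db in nova_api nova nova_cell0; do
        printf 'create database if not exists %s;\n' "$db"
        for host in localhost '%'; do
            printf "grant all on %s.* to 'nova'@'%s' identified by '123456';\n" "$db" "$host"
        done
    done
    printf 'flush privileges;\n'
}

gen_nova_sql   # on the controller: gen_nova_sql | mysql -p123456
```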
b. Set up permissions:
openstack user create --domain default --password 123456 nova
openstack user list
openstack role add --project service --user nova admin   # give the nova user the admin role in the service project
openstack service create --name nova --description "OpenStack Compute" compute   # create the nova service
openstack service list
c. Register nova's API endpoints; configure these carefully:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
openstack endpoint list
d. Create the placement user, used for resource tracking:
openstack user create --domain default --password 123456 placement
openstack user list
openstack role add --project service --user placement admin   # give the placement user the admin role in the service project
openstack service create --name placement --description "Placement API" placement   # create the placement service
openstack service list
e. Register the placement API endpoints:
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
openstack endpoint list
f. Install the nova packages on the control node:
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
cp /etc/httpd/conf.d/00-nova-placement-api.conf /etc/httpd/conf.d/00-nova-placement-api.conf.bak
g. Edit the configuration:
vim /etc/nova/nova.conf
line 2753: enabled_apis=osapi_compute,metadata   # uncomment
line 3479: connection=mysql+pymysql://nova:123456@controller/nova_api
line 4453: connection=mysql+pymysql://nova:123456@controller/nova
line 3130: transport_url=rabbit://openstack:123456@controller
line 3193: auth_strategy=keystone   # uncomment
line 5771: under [keystone_authtoken], add:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
line 1817: use_neutron=true   # uncomment
line 2479: firewall_driver=nova.virt.libvirt.firewall.IptablesFirewallDriver   # uncomment
around line 9897, find [vnc] and set below it:
enabled=true   # uncomment
vncserver_listen=<control node internal IP>
vncserver_proxyclient_address=<control node internal IP>
line 5067: api_servers=http://controller:9292
line 7489: lock_path=/var/lib/nova/tmp
around line 8304, find [placement] and add below it:
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = 123456
Save and exit, then run grep -Ev '^#|^$' /etc/nova/nova.conf to review.
h. Edit 00-nova-placement-api.conf:
vi /etc/httpd/conf.d/00-nova-placement-api.conf
Add the following block just above the </VirtualHost> on line 16:
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
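Splicing that block in by hand is error-prone, so a sed insert can do it mechanically. A sketch of my own (GNU sed assumed), shown against a scratch stand-in for the real file rather than the file itself:

```shell
conf=$(mktemp)   # stand-in for /etc/httpd/conf.d/00-nova-placement-api.conf
printf '<VirtualHost *:8778>\n  WSGIProcessGroup nova-placement-api\n</VirtualHost>\n' > "$conf"

# GNU sed: insert the <Directory> ACL block before the closing tag.
sed -i '/<\/VirtualHost>/i\
<Directory /usr/bin>\
  <IfVersion >= 2.4>\
    Require all granted\
  </IfVersion>\
  <IfVersion < 2.4>\
    Order allow,deny\
    Allow from all\
  </IfVersion>\
</Directory>' "$conf"
```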
systemctl restart httpd   # restart httpd to pick up the change
i. Import the data:
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova   # this prints some warnings; ignore them, silencing them is more trouble than it is worth
nova-manage cell_v2 list_cells   # verify
mysql -h controller -u nova -p123456 -e 'use nova;show tables;' | wc -l   # around 111 tables is correct
mysql -h controller -u nova -p123456 -e 'use nova_api;show tables;' | wc -l   # around 33 tables is correct
mysql -h controller -u nova -p123456 -e 'use nova_cell0;show tables;' | wc -l   # around 111 tables is correct
j. Start the services:
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
openstack catalog list
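After `systemctl start`, the API ports take a few seconds to come up, and the same wait applies to every service started in this guide. A small polling helper (my own sketch, not from the original steps) retries a check until it passes:

```shell
# wait_for TRIES DELAY CMD... - rerun CMD until it succeeds, at most TRIES times.
wait_for() {
    tries=$1; delay=$2; shift 2
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge "$tries" ] && return 1
        sleep "$delay"
    done
}

# On the controller, for example, wait for the nova API port:
#   wait_for 30 1 sh -c 'netstat -ntlup | grep -q :8774'
```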
17. Deploy on the compute node. Unless stated otherwise, the following commands run on compute.
vi /etc/yum.repos.d/CentOS-Base.repo and append this section at the end:
[Virt]
name=CentOS-$releasever - Base
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os&infra=$infra
baseurl=http://mirrors.sohu.com/centos/7.5.1804/virt/x86_64/kvm-common/
#baseurl=http://mirror.centos.org/centos/$releasever/os/$basearch/
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Run:
yum install openstack-nova-compute sysfsutils -y
cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
Copy /etc/nova/nova.conf from the control node over /etc/nova/nova.conf on the compute node, then vi /etc/nova/nova.conf on the compute node and change a few things:
a. The [vnc] section differs slightly; vncserver_proxyclient_address takes the compute node's management-network IP:
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = <compute node management IP>
novncproxy_base_url = http://192.168.122.11:6080/vnc_auto.html
b. In the [libvirt] section, find virt_type and set:
virt_type=qemu
kvm cannot be used here because this cloud is itself built inside KVM guests, so cat /proc/cpuinfo | egrep 'vmx|svm' finds nothing. In production on physical servers it should be virt_type=kvm; take note of this.
c. Start the services:
systemctl start libvirtd.service openstack-nova-compute.service
systemctl enable libvirtd.service openstack-nova-compute.service
d. On the controller node:
openstack compute service list
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova   # register the new compute node in the nova database
nova-status upgrade check   # verify the APIs are healthy
18. Install the networking service, Neutron, on the control node.
a. Create the database and set up permissions:
mysql -p123456
create database neutron;
grant all on neutron.* to 'neutron'@'localhost' identified by '123456';
grant all on neutron.* to 'neutron'@'%' identified by '123456';
flush privileges;
quit
mysql -h controller -u neutron -p123456 -e 'show databases'
source admin-openstack.sh
openstack user create --domain default --password 123456 neutron
openstack user list
openstack role add --project service --user neutron admin   # give the neutron user the admin role in the service project
openstack service create --name neutron --description "OpenStack Networking" network   # create the neutron service
openstack service list
b. Register the neutron API endpoints:
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
openstack endpoint list
c. Install neutron on the control node:
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
vi /etc/neutron/neutron.conf
line 27: auth_strategy = keystone   # uncomment
line 30: core_plugin = ml2
line 33: service_plugins = router
line 85: allow_overlapping_ips = true   # uncomment
line 98: notify_nova_on_port_status_changes = true   # uncomment
line 102: notify_nova_on_port_data_changes = true   # uncomment
line 553: transport_url = rabbit://openstack:123456@controller
line 560: rpc_backend = rabbit   # uncomment
line 710: connection = mysql+pymysql://neutron:123456@controller/neutron
line 794: leave [keystone_authtoken] itself unchanged and add below it:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
line 1022: leave [nova] itself unchanged and add below it:
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 123456
around line 1141: lock_path = /var/lib/neutron/tmp
Save and exit, then grep -Ev '#|^$' /etc/neutron/neutron.conf
vi /etc/neutron/plugins/ml2/ml2_conf.ini
line 132: type_drivers = flat,vlan,vxlan
line 137: tenant_network_types = vxlan
line 141: mechanism_drivers = linuxbridge,l2population
line 146: extension_drivers = port_security
line 182: flat_networks = provider
line 235: vni_ranges = 1:1000
line 259: enable_ipset = true
Save and exit, then grep -Ev '#|^$' /etc/neutron/plugins/ml2/ml2_conf.ini
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
line 142: physical_interface_mappings = provider:eth1   # use eth1 or eth0, whichever NIC reaches the external network
line 175: enable_vxlan = true
line 196: local_ip = 192.168.122.11   # the control node's internal IP
line 220: l2_population = true
line 155: firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
line 160: enable_security_group = true
Save and exit, then grep -Ev '#|^$' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
vi /etc/neutron/l3_agent.ini
line 16: interface_driver = linuxbridge
Save and exit.
vi /etc/neutron/dhcp_agent.ini
line 16: interface_driver = linuxbridge
line 37: dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq   # uncomment
line 46: enable_isolated_metadata = true
Save and exit.
vi /etc/neutron/metadata_agent.ini
line 23: nova_metadata_host = controller
line 35: metadata_proxy_shared_secret = metadata_daniel
Save and exit.
vi /etc/nova/nova.conf and add below the [neutron] section header:
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
service_metadata_proxy = true
metadata_proxy_shared_secret = metadata_daniel
Save and exit.
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron   # this step takes a while
systemctl restart openstack-nova-api.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-l3-agent.service
19. Deploy neutron on the compute node:
yum install openstack-neutron-linuxbridge ebtables ipset -y
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
vi /etc/neutron/neutron.conf
line 27: auth_strategy = keystone
line 553: transport_url = rabbit://openstack:123456@controller
line 794: under [keystone_authtoken], add:
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 123456
around line 1135: lock_path = /var/lib/neutron/tmp
Save and exit, then grep -Ev '#|^$' /etc/neutron/neutron.conf
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
line 142: physical_interface_mappings = provider:eth0   # eth0 or eth1, whichever NIC reaches the external network
line 175: enable_vxlan = true
line 196: local_ip = 192.168.122.12   # this machine's management-network IP (pay attention here)
line 220: l2_population = true
line 155: firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
line 160: enable_security_group = true
Save and exit, then grep -Ev '#|^$' /etc/neutron/plugins/ml2/linuxbridge_agent.ini
vi /etc/nova/nova.conf and add below the [neutron] section header:
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 123456
Save and exit, then grep -Ev '#|^$' /etc/nova/nova.conf
systemctl restart openstack-nova-compute.service
systemctl start neutron-linuxbridge-agent.service
systemctl enable neutron-linuxbridge-agent.service
Verify on the controller node:
openstack network agent list
20. Install the dashboard, Horizon, on the control node:
yum install openstack-dashboard -y
cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.bak
vi /etc/openstack-dashboard/local_settings
line 38: ALLOWED_HOSTS = ['*',]   # allow everything for easy testing; in production restrict this to specific IPs
line 64: OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
    "compute": 2,
}
line 75: OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True   # multi-domain support, like Alibaba Cloud's North China 1 / North China 2 regions
line 97: OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'   # name of the default domain
line 153: add SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
line 154: CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',   # hand sessions to memcached on the controller
    },
}
line 161: with the block above in place, comment out this one:
#CACHES = {
#    'default': {
#        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
#    },
#}
line 183: OPENSTACK_HOST = "controller"
line 184: OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST   # use the v3 endpoint
line 185: OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"   # default role
line 313: OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': True,
    'enable_quotas': True,
    'enable_ipv6': True,
    'enable_distributed_router': True,
    'enable_ha_router': True,
    'enable_fip_topology_check': True,
}   # enable everything; we are using the self-service (second) network type
line 453: TIME_ZONE = "Asia/Shanghai"
Save and exit.
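The simple single-line local_settings edits can also be scripted with sed. A sketch of mine, shown against a scratch file whose starting values are assumptions (check your actual defaults before applying anything like this to the real file):

```shell
f=$(mktemp)   # stand-in for /etc/openstack-dashboard/local_settings
cat > "$f" <<'EOF'
ALLOWED_HOSTS = ['horizon.example.com', 'localhost']
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = False
TIME_ZONE = "UTC"
EOF

# Rewrite each setting line in place.
sed -i \
    -e "s/^ALLOWED_HOSTS.*/ALLOWED_HOSTS = ['*',]/" \
    -e 's/^OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT.*/OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True/' \
    -e 's/^TIME_ZONE.*/TIME_ZONE = "Asia\/Shanghai"/' \
    "$f"
```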
vi /etc/httpd/conf.d/openstack-dashboard.conf
line 4: add WSGIApplicationGroup %{GLOBAL}   # without this line the dashboard will not load
Save and exit.
systemctl restart httpd memcached
Browse to http://<control node IP>/dashboard/auth/login/?next=/dashboard/
Domain: default   Account: admin   Password: 123456
That completes a basic clustered OpenStack deployment. A follow-up post will cover deploying the Cinder distributed storage service.