OpenStack Cluster Deployment - Nova Compute Nodes

Initialization

On all compute nodes, disable the firewall and SELinux, configure the hosts file, and install the OpenStack client packages (a sketch of the firewall/SELinux/hosts steps follows the package commands below).

yum install centos-release-openstack-rocky -y
yum upgrade -y
yum install python-openstackclient openstack-utils openstack-selinux -y
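
The firewall, SELinux, and hosts steps mentioned above are not shown in the listing; the following is a minimal sketch, assuming CentOS 7. All addresses except compute01's 172.30.200.41 (used later in this guide) are placeholders and must be adapted to your environment.

# Disable the firewall and SELinux (sketch; adjust to your security policy)
[root@compute01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@compute01 ~]# setenforce 0
[root@compute01 ~]# sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# Example hosts entries; the controller addresses below are placeholders
[root@compute01 ~]# cat >> /etc/hosts << EOF
172.30.200.30 controller    # assumed haproxy VIP behind the "controller" name
172.30.200.31 controller01
172.30.200.32 controller02
172.30.200.41 compute01
EOF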

Deployment

Install nova-compute

# Install the nova-compute service on all compute nodes; compute01 is used as the example
[root@compute01 ~]# yum install openstack-nova-compute -y

Configure nova.conf

# Run on all compute nodes; compute01 is used as the example;
# Note the "my_ip" parameter; adjust it per node;
# Note the ownership of the nova.conf file: root:nova
[root@compute01 ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@compute01 ~]# egrep -v "^$|^#" /etc/nova/nova.conf
[DEFAULT]
my_ip=172.30.200.41
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
enabled_apis=osapi_compute,metadata
# When haproxy sits in front of rabbitmq, services may hit connection timeouts and keep reconnecting; check the logs of each service and of rabbitmq;
# transport_url=rabbit://openstack:rabbitmq_pass@controller:5673
# rabbitmq has its own clustering mechanism, and the official documentation recommends connecting to the rabbitmq cluster directly; with that approach services occasionally fail on startup for unknown reasons; if you do not see this, it is strongly recommended to connect to the rabbitmq cluster directly rather than through the haproxy front end
transport_url=rabbit://openstack:rabbitmq_pass@controller01:5672,controller02:5672
[api]
auth_strategy=keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[crypto]
[database]
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers=http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000
memcached_servers = controller01:11211,controller02:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = 123456
[libvirt]
# Check whether the host supports hardware acceleration with "egrep -c '(vmx|svm)' /proc/cpuinfo"; a return value of 1 or greater means supported, 0 means not supported;
# Use the "kvm" type if hardware acceleration is supported, otherwise use "qemu";
# Virtual machines generally do not support hardware acceleration
virt_type=qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name=RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 123456
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled=true
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=$my_ip
# Because clients without a hosts entry cannot resolve the name "controller", use a concrete IP address here instead
novncproxy_base_url=http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
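
If you prefer not to edit the file by hand, the key non-default options can also be applied with openstack-config (from the openstack-utils package installed earlier); a partial sketch using values from the listing above, followed by the ownership check mentioned in the notes:

[root@compute01 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT my_ip 172.30.200.41
[root@compute01 ~]# openstack-config --set /etc/nova/nova.conf DEFAULT transport_url rabbit://openstack:rabbitmq_pass@controller01:5672,controller02:5672
[root@compute01 ~]# openstack-config --set /etc/nova/nova.conf libvirt virt_type qemu

# Ensure the ownership is root:nova as noted above
[root@compute01 ~]# chown root:nova /etc/nova/nova.conf
[root@compute01 ~]# ls -l /etc/nova/nova.conf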

Start the services

# Run on all compute nodes;
# Enable at boot
[root@compute01 ~]# systemctl enable libvirtd.service openstack-nova-compute.service

# Start
[root@compute01 ~]# systemctl restart libvirtd.service
[root@compute01 ~]# systemctl restart openstack-nova-compute.service

# Check status
[root@compute01 ~]# systemctl status libvirtd.service
[root@compute01 ~]# systemctl status openstack-nova-compute.service
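
If openstack-nova-compute does not stay active, the compute log usually shows why (for example, AMQP connection failures against rabbitmq); a quick check:

# Look for recent errors in the nova-compute log
[root@compute01 ~]# grep -i error /var/log/nova/nova-compute.log | tail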

Add compute nodes to the cell database

# Run on any controller node
[root@controller01 ~]# . admin-openrc

# Confirm the database contains the compute hosts
[root@controller01 ~]# openstack compute service list --service nova-compute

Manually discover compute nodes

# Manually discover compute node hosts, i.e. add them to the cell database
[root@controller01 ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
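
To confirm the hosts were actually mapped into a cell, the cell host list can be checked afterwards; a minimal verification:

# List the hosts registered in each cell
[root@controller01 ~]# su -s /bin/sh -c "nova-manage cell_v2 list_hosts" nova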

Automatically discover compute nodes

# Run on all controller nodes;
# To avoid manually running "nova-manage cell_v2 discover_hosts" every time a new compute node joins, the controller nodes can be set to discover hosts periodically;
# This involves the [scheduler] section of the controller nodes' nova.conf;
# The setting below uses a 5-minute discovery interval; adjust for your environment
[root@controller01 ~]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval=300

# Restart the nova scheduler service, which runs the periodic discovery task, so the setting takes effect
[root@controller01 ~]# systemctl restart openstack-nova-scheduler.service
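
Once the restart is done (or the discovery interval has elapsed), the newly added compute nodes should show up as hypervisors; a quick check from any controller node:

# The compute nodes should be listed here once discovered
[root@controller01 ~]# openstack hypervisor list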