OpenStack Cluster Deployment - Cinder Storage Node

Deploying the Cinder storage node

Install Cinder

The storage nodes are the Ceph nodes; cinder-volume is usually installed on the nodes that run the Ceph mon daemons.

# Install the cinder service on all storage nodes; compute01 is used as the example
[root@compute01 ~]# yum install -y openstack-cinder targetcli python-keystone
# Run on all storage nodes; compute01 is used as the example.
# Note: adjust the "my_ip" parameter for each node.
# Note the ownership of cinder.conf: root:cinder
[root@compute01 ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@compute01 ~]# egrep -v "^$|^#" /etc/cinder/cinder.conf
[DEFAULT]
state_path = /var/lib/cinder
my_ip = <storage node IP>
glance_api_servers = http://controller:9292
auth_strategy = keystone
enabled_backends = ceph
transport_url=rabbit://openstack:123456@controller01:5672,controller02:5672
[backend]
[backend_defaults]
[barbican]
[brcd_fabric_example]
[cisco_fabric_example]
[coordination]
[cors]
[database]
connection = mysql+pymysql://cinder:123456@controller01/cinder
[fc-zone-manager]
[healthcheck]
[key_manager]
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller01:11211,controller02:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = cinder
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = $state_path/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[profiler]
[sample_remote_file_source]
[service_user]
[ssl]
[vault]
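
Note that enabled_backends = ceph refers to a [ceph] backend section that is not shown in the file above; it is normally added once the Ceph pools and the client.cinder keyring from the following steps are in place. A minimal sketch of that section, assuming the pool name (volumes), the Ceph user (client.cinder) and the libvirt secret UUID generated at the end of this page:

[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
volume_backend_name = ceph
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
rbd_flatten_volume_from_snapshot = false
rbd_max_clone_depth = 5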

Enable the services at boot

## Run on all storage nodes
# Enable at boot
[root@compute01 ~]# systemctl enable openstack-cinder-volume.service target.service
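
The services are only enabled here; they are normally started (or restarted) once the Ceph backend section and the client.cinder keyring from the steps below are in place, for example:

# start the services and check their status after the Ceph backend is configured
[root@compute01 ~]# systemctl restart openstack-cinder-volume.service target.service
[root@compute01 ~]# systemctl status openstack-cinder-volume.service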

Preparation for the Ceph backend

Create the pools

# Ceph stores data in pools. A pool is a logical grouping of PGs; the objects in a PG are mapped to different OSDs, so a pool is spread across the whole cluster.
# Different kinds of data could be stored in a single pool, but that makes it hard to separate and manage data per client, so a dedicated pool is usually created for each client.
# Create three pools: volumes, images and vms.
# With 90 OSDs and 2 replicas, the PG counts come from the official formula (see the worked calculation after this block).
[root@computer01 ceph]# ceph osd pool create volumes 2048
pool 'volumes' created
[root@computer01 ceph]# ceph osd pool create vms 1024
pool 'vms' created
[root@computer01 ceph]# ceph osd pool create images 256
pool 'images' created



## Newly created pools must be initialized before use. Initialize them with the rbd tool:

rbd pool init volumes
rbd pool init images
rbd pool init vms
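
The PG counts above follow the usual Ceph sizing rule of thumb (stated here as an assumption, since the original only refers to the official formula): total PGs ≈ (number of OSDs × 100) / replica count, split across pools by expected data share and rounded to a power of two.

# Total PGs ≈ (90 OSDs x 100) / 2 replicas = 4500
# Split by expected data share and rounded to powers of two:
#   volumes = 2048, vms = 1024, images = 256   (2048 + 1024 + 256 = 3328, within the 4500 budget)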

Install the Ceph clients

# The nodes running the glance-api service need python-rbd;
# here glance-api runs on the 3 controller nodes; controller01 is used as the example
[root@controller01 ~]# yum install python-rbd -y

# The nodes running cinder-volume and nova-compute need ceph-common; cinder-backup nodes need it as well
[root@compute01 ~]# yum install ceph-common -y

Set up authorization

Create the users

# Ceph enables cephx authentication by default (see ceph.conf), so new users must be created and authorized for the nova/cinder and glance clients;
# on a Ceph admin node, create the client.glance and client.cinder users for the nodes running glance-api and cinder-volume respectively, and grant them permissions;
# permissions are granted per pool; the pool names match the pools created above
[root@computer01 ~]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[root@computer01 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
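
The users and the capabilities granted to them can be checked afterwards, for example:

[root@computer01 ~]# ceph auth get client.cinder
[root@computer01 ~]# ceph auth get client.glance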

Push the client.glance keyring

# Push the keyring generated for the client.glance user to the nodes running the glance-api service
[root@computer01 ceph]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[root@computer01 ceph]# ceph auth get-or-create client.glance | ssh root@controller01 tee /etc/ceph/ceph.client.glance.keyring
[root@computer01 ceph]# ceph auth get-or-create client.glance | ssh root@controller02 tee /etc/ceph/ceph.client.glance.keyring

# Also change the owner and group of the keyring file
[root@controller01 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@controller02 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring

Push the client.cinder keyring

# Push the keyring generated for the client.cinder user to the nodes running the cinder-volume service;
# repeat the command for each cinder-volume node (only computer03 is shown here)
[root@computer01 ceph]# ceph auth get-or-create client.cinder | ssh root@computer03 tee /etc/ceph/ceph.client.cinder.keyring

# Also change the owner and group of the keyring file
chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
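
An optional quick check that the pushed keyring works from the storage node (this assumes /etc/ceph/ceph.conf has already been distributed there):

# connects as client.cinder using /etc/ceph/ceph.client.cinder.keyring
[root@compute01 ~]# ceph -s --id cinder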

Push the client.cinder keyring (nova-compute)

ceph auth get-or-create client.cinder | ssh {your-nova-compute-server} sudo tee /etc/ceph/ceph.client.cinder.keyring

chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring

libvirt secret

## The nodes running nova-compute need to store the client.cinder user's key in libvirt; when a Ceph-backed Cinder volume is attached to an instance, libvirt uses this key to access the Ceph cluster;

[root@computer01 ceph]# ceph auth get-key client.cinder | ssh root@computer13 tee /etc/ceph/client.cinder.key

## Add the key to libvirt
# First generate a UUID; all compute nodes can share this UUID (the other nodes do not need to repeat this step);
# the UUID is also used later when configuring nova.conf, so keep it consistent
uuidgen
457eb676-33da-42ec-9a8c-9293d545c337

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
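
The registered secret can be listed with virsh, and the same UUID is what later goes into the [libvirt] section of nova.conf (shown here only as a preview; the nova configuration itself is a separate step):

# verify the secret is registered in libvirt
sudo virsh secret-list

# preview of the related nova.conf settings on the compute nodes:
# [libvirt]
# rbd_user = cinder
# rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337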