Deploying Ceph Nautilus 14.2.22

Published: 2023-10-12 02:12:46  Author: 尹正杰

Friendly reminder:
The highest Ceph release supported on CentOS 7 is Ceph 15.2.17 (Octopus); if you want a newer release, use a different OS.
For newer releases you must choose Ubuntu 20.04 LTS or CentOS 8+.
However, in practice the MGR component of Ceph 15.2.17 (Octopus) was rewritten in Python 3, which causes deployment of the "ceph-mgr-dashboard" component to fail; the official recommendation is to deploy with cephadm instead.
Alternatively, you can drop down to an older Ceph release such as Ceph 14.2.22 (Nautilus), whose "ceph-mgr-dashboard" component still depends on a Python 2 environment.

- Preparing the base environment for Ceph
1. Configure hostname resolution
cat >> /etc/hosts <<EOF
10.0.0.141 ceph141
10.0.0.142 ceph142
10.0.0.143 ceph143
EOF

2. Install common tools
yum -y install wget unzip chrony ntpdate


3. Configure time synchronization
3.1 ceph141 as the time server
[root@ceph141 ~]# vim /etc/chrony.conf
# Comment out the default time servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph141 iburst
...
allow 10.0.0.0/24
local stratum 10
[root@ceph141 ~]#
[root@ceph141 ~]# echo "*/10 * * * * /usr/sbin/ntpdate ntp.aliyun.com" >> /var/spool/cron/root
[root@ceph141 ~]#
[root@ceph141 ~]# crontab -l
*/10 * * * * /usr/sbin/ntpdate ntp.aliyun.com
[root@ceph141 ~]#


3.2 ceph142 and ceph143 as clients
[root@ceph142 ~]# vim /etc/chrony.conf
# Comment out the default time servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph141 iburst
...


[root@ceph143 ~]# vim /etc/chrony.conf
# Comment out the default time servers
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ceph141 iburst
...
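Editing /etc/chrony.conf by hand on every client is error-prone. As a convenience, the same change can be scripted; below is a minimal sketch (the helper name `point_chrony_at` is hypothetical, and it assumes the stock CentOS 7 chrony.conf with the four `N.centos.pool.ntp.org` server lines shown above):

```shell
# Hypothetical helper: comment out the default pool servers and add ceph141.
# Assumes the stock CentOS 7 chrony.conf layout shown above.
point_chrony_at() {
    conf="$1"
    # Prefix the four default "server N.centos.pool.ntp.org iburst" lines with '#'
    sed -i.bak 's/^server [0-3]\.centos\.pool\.ntp\.org iburst/#&/' "$conf"
    # Append ceph141 as the only time source, unless it is already there
    grep -q '^server ceph141 iburst' "$conf" || echo 'server ceph141 iburst' >> "$conf"
}

# On each client: point_chrony_at /etc/chrony.conf && systemctl restart chronyd
```

The `grep -q || echo` guard makes the helper idempotent, so running it twice does not add a duplicate `server ceph141` line.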


3.3 Verify that time is synchronized
systemctl enable --now chronyd
timedatectl set-ntp true
timedatectl set-timezone Asia/Shanghai
chronyc activity
chronyc sources -v


4. Configure passwordless SSH on ceph141
4.1 Generate a key pair
ssh-keygen -t rsa -f ~/.ssh/id_rsa -P '' -q

4.2 Copy the public key to the other nodes
for i in `seq 1 3`;do ssh-copy-id ceph14$i;done

4.3 Share the same key pair across all nodes
scp -rp ~/.ssh ceph142:~
scp -rp ~/.ssh ceph143:~



5. Install the "ceph-deploy" tool on the ceph141 node; it will be used to deploy the Ceph cluster
5.1 Prepare China-mirror package repositories (base image repo and EPEL)
curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

5.2 Configure the Ceph repository
cat > /etc/yum.repos.d/ceph.repo << EOF
[Ceph]
name=Ceph packages for \$basearch
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/\$basearch
gpgcheck=0
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/noarch
gpgcheck=0
[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.aliyun.com/ceph/rpm-nautilus/el7/SRPMS
gpgcheck=0
EOF

5.3 Install the ceph-deploy tool
yum -y install ceph-deploy

 

6. Prepare disks
Each node has 4 × 2TB disks attached. If you forgot to attach them beforehand, that is fine: attach them and run the following command to rescan the SCSI bus so the new disks are detected (the host numbers under /sys/class/scsi_host/ may differ on your system).

for i in `seq 0 2`; do echo "- - -" > /sys/class/scsi_host/host${i}/scan;done

 


- Initialize the MONs
1 Install the "distribute" package; its "pkg_resources" module will be needed. (Recommended on all nodes.)
yum -y install gcc python-setuptools python-devel wget
wget https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip --no-check-certificate
unzip distribute-0.7.3.zip
cd distribute-0.7.3
python setup.py install


2 Create the ceph-deploy working directory
mkdir -pv /yinzhengjie/softwares/ceph-cluster/ && cd /yinzhengjie/softwares/ceph-cluster/


3 Initialize the MONs
ceph-deploy install --no-adjust-repos ceph141 ceph142 ceph143   # install the ceph packages, keeping the repos configured above
ceph-deploy new ceph141 ceph142 ceph143                         # generate ceph.conf and the initial monitor map
ceph-deploy mon create-initial                                  # start the MONs and gather the keyrings
ceph-deploy admin ceph141 ceph142 ceph143                       # push ceph.conf and the admin keyring to every node


- Initialize the OSDs. If this stage hangs for more than a minute, try rebooting the OS; that usually resolves it. The suspected cause is a disk hot-plug issue.
ceph-deploy osd create --data /dev/sdb ceph141
ceph-deploy osd create --data /dev/sdc ceph141
ceph-deploy osd create --data /dev/sdd ceph141
ceph-deploy osd create --data /dev/sde ceph141

ceph-deploy osd create --data /dev/sdb ceph142
ceph-deploy osd create --data /dev/sdc ceph142
ceph-deploy osd create --data /dev/sdd ceph142
ceph-deploy osd create --data /dev/sde ceph142

ceph-deploy osd create --data /dev/sdb ceph143
ceph-deploy osd create --data /dev/sdc ceph143
ceph-deploy osd create --data /dev/sdd ceph143
ceph-deploy osd create --data /dev/sde ceph143
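The 12 commands above are mechanical, so a nested loop avoids typos. A sketch that only prints the commands (drop the `echo` to actually execute them on ceph141):

```shell
# Print one "ceph-deploy osd create" command per host/disk pair (12 in total)
for host in ceph141 ceph142 ceph143; do
    for dev in sdb sdc sdd sde; do
        echo ceph-deploy osd create --data /dev/$dev $host
    done
done
```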

 

- Initialize the MGRs; without them, the cluster's available storage capacity is not reported
ceph-deploy mgr create ceph141 ceph142 ceph143


- Check the Ceph cluster status
[root@ceph141 ceph-cluster]# ceph -s
  cluster:
    id:     1a9394f4-5983-4818-aa2d-f81ba06702ab
    health: HEALTH_WARN
            mons are allowing insecure global_id reclaim

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 7m)
    mgr: ceph141(active, since 32s), standbys: ceph142, ceph143
    osd: 12 osds: 12 up (since 2m), 12 in (since 2m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   12 GiB used, 23 TiB / 23 TiB avail
    pgs:

[root@ceph141 ceph-cluster]#

 

How to clear the HEALTH_WARN:
# ceph config set mon auth_allow_insecure_global_id_reclaim false

Check the cluster status again (now healthy)
[root@ceph141 ceph-cluster]# ceph -s
  cluster:
    id:     1a9394f4-5983-4818-aa2d-f81ba06702ab
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 7m)
    mgr: ceph141(active, since 79s), standbys: ceph142, ceph143
    osd: 12 osds: 12 up (since 3m), 12 in (since 3m)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   12 GiB used, 23 TiB / 23 TiB avail
    pgs:

[root@ceph141 ceph-cluster]#


- Check the Ceph version
[root@ceph141 ceph-cluster]# ceph -v
ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
[root@ceph141 ceph-cluster]#



- Deploy the radosgw service. If this step hangs, reboot the VM; it normally finishes within about 10 seconds
[root@ceph141 ceph-cluster]# ceph-deploy rgw create ceph141 ceph142 ceph143
[root@ceph141 ceph-cluster]#
[root@ceph141 ceph-cluster]# ceph -s
  cluster:
    id:     1a9394f4-5983-4818-aa2d-f81ba06702ab
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 55s)
    mgr: ceph142(active, since 40s), standbys: ceph143, ceph141
    osd: 12 osds: 12 up (since 49s), 12 in (since 11m)
    rgw: 3 daemons active (ceph141, ceph142, ceph143)

  task status:

  data:
    pools:   4 pools, 128 pgs
    objects: 187 objects, 1.2 KiB
    usage:   12 GiB used, 23 TiB / 23 TiB avail
    pgs:     128 active+clean

  io:
    client:   89 KiB/s rd, 0 B/s wr, 88 op/s rd, 58 op/s wr

[root@ceph141 ceph-cluster]#
[root@ceph141 ceph-cluster]# ss -ntl | grep 7480
LISTEN 0 128 *:7480 *:*
LISTEN 0 128 [::]:7480 [::]:*
[root@ceph141 ceph-cluster]#


- Install the MDS daemons
[root@ceph141 ceph-cluster]# ceph-deploy mds create ceph141 ceph142 ceph143
[root@ceph141 ceph-cluster]#
[root@ceph141 ceph-cluster]# ceph -s
  cluster:
    id:     1a9394f4-5983-4818-aa2d-f81ba06702ab
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph141,ceph142,ceph143 (age 2m)
    mgr: ceph142(active, since 2m), standbys: ceph143, ceph141
    mds: 3 up:standby
    osd: 12 osds: 12 up (since 2m), 12 in (since 13m)
    rgw: 3 daemons active (ceph141, ceph142, ceph143)

  task status:

  data:
    pools:   4 pools, 128 pgs
    objects: 187 objects, 1.2 KiB
    usage:   12 GiB used, 23 TiB / 23 TiB avail
    pgs:     128 active+clean

[root@ceph141 ceph-cluster]#
[root@ceph141 ceph-cluster]# ceph mds stat
3 up:standby
[root@ceph141 ceph-cluster]# ceph osd pool create cephfs-metadata 32 32
pool 'cephfs-metadata' created
[root@ceph141 ceph-cluster]# ceph osd pool create cephfs-data 64 64
pool 'cephfs-data' created
[root@ceph141 ceph-cluster]# ceph fs new yinzhengjie-cephfs cephfs-metadata cephfs-data
new fs with metadata pool 5 and data pool 6
[root@ceph141 ceph-cluster]# ceph fs ls
name: yinzhengjie-cephfs, metadata pool: cephfs-metadata, data pools: [cephfs-data ]
[root@ceph141 ceph-cluster]# ceph fs status yinzhengjie-cephfs
yinzhengjie-cephfs - 0 clients
==================
+------+----------+---------+----------+-------+-------+
| Rank | State | MDS | Activity | dns | inos |
+------+----------+---------+----------+-------+-------+
| 0 | creating | ceph143 | | 10 | 13 |
+------+----------+---------+----------+-------+-------+
+-----------------+----------+-------+-------+
| Pool | type | used | avail |
+-----------------+----------+-------+-------+
| cephfs-metadata | metadata | 256k | 7595G |
| cephfs-data | data | 0 | 7595G |
+-----------------+----------+-------+-------+
+-------------+
| Standby MDS |
+-------------+
| ceph142 |
| ceph141 |
+-------------+
MDS version: ceph version 14.2.22 (ca74598065096e6fcbd8433c8779a2be0c889351) nautilus (stable)
[root@ceph141 ceph-cluster]#
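The PG counts used above (32 for cephfs-metadata, 64 for cephfs-data) follow the common rule of thumb from the Ceph documentation: total PGs across all pools ≈ (OSD count × 100) / replica count, rounded to a power of two, then divided among the pools. A sketch of the arithmetic for this cluster (12 OSDs; the replica count of 3 is an assumption, matching the Nautilus default pool size):

```shell
osds=12
replicas=3                            # assumed: Nautilus default pool size
target=$(( osds * 100 / replicas ))   # 400 PGs as a cluster-wide target

# Round down to the nearest power of two
pgs=1
while [ $(( pgs * 2 )) -le "$target" ]; do
    pgs=$(( pgs * 2 ))
done
echo "total PG budget: $pgs"          # 256, to be split across all pools
```

With the 4 RGW pools already holding 128 PGs, adding 32 + 64 here brings the total to 224, within the 256 budget.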