【Ceph Operations】Removing an MDS

Published: 2023-09-12 21:06:09 · Author: 苏格拉底的落泪

Removing an MDS

1. Check the cluster status. ceph02 is the active MDS and one standby daemon is available:

[root@ceph02 ~]# ceph -s
  cluster:
    id:     9de7d2fb-245a-4b9c-8c1f-b452110fb61f
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph01
    mgr: ceph01(active)
    mds: cephfs-1/1/1 up  {0=ceph02=up:active}, 1 up:standby
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   4 pools, 48 pgs
    objects: 21 objects, 2.19KiB
    usage:   1.24GiB used, 43.7GiB / 45.0GiB avail
    pgs:     48 active+clean
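If you only want the MDS map rather than the full cluster status, `ceph mds stat` prints a one-line summary; on this cluster it should show ceph02 active with one standby (the exact formatting varies between Ceph releases):

[root@ceph02 ~]# ceph mds stat
cephfs-1/1/1 up  {0=ceph02=up:active}, 1 up:standby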

2. Stop the MDS service on ceph02:

[root@ceph02 ~]# systemctl stop ceph-mds@ceph02
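Because ceph02 holds the active rank, stopping it triggers a failover: the monitor promotes the standby to rank 0. You can confirm the local daemon is down and watch the takeover:

[root@ceph02 ~]# systemctl status ceph-mds@ceph02
[root@ceph02 ~]# ceph mds stat

Once failover completes, `ceph mds stat` should report ceph01 as the active MDS, matching the final status in step 6.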

3. Delete this MDS's authentication entry from the cluster:

[root@ceph02 ~]# ceph auth del mds.ceph02
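To verify the key is really gone, query it directly; after `ceph auth del` this should return an ENOENT error instead of a keyring entry:

[root@ceph02 ~]# ceph auth get mds.ceph02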

4. Disable the MDS service (without this step, the service would start automatically again at the next boot):

[root@ceph02 ~]# systemctl disable ceph-mds@ceph02
Removed symlink /etc/systemd/system/ceph-mds.target.wants/ceph-mds@ceph02.service.
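To confirm the unit will not start at the next boot:

[root@ceph02 ~]# systemctl is-enabled ceph-mds@ceph02
disabled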

5. Delete the data directory for this MDS:

[root@ceph02 ~]# rm -rf /var/lib/ceph/mds/ceph-ceph02
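The directory name follows the pattern /var/lib/ceph/mds/&lt;cluster&gt;-&lt;id&gt;; here the cluster name is ceph and the daemon id is ceph02. Listing the parent directory afterwards confirms the removal:

[root@ceph02 ~]# ls /var/lib/ceph/mds/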

6. Check the cluster status again. The filesystem is still served, with ceph01 now holding the active rank, but the cluster reports a warning:

[root@ceph02 ~]# ceph -s
  cluster:
    id:     9de7d2fb-245a-4b9c-8c1f-b452110fb61f
    health: HEALTH_WARN
            insufficient standby MDS daemons available

  services:
    mon: 1 daemons, quorum ceph01
    mgr: ceph01(active)
    mds: cephfs-1/1/1 up  {0=ceph01=up:active}
    osd: 3 osds: 3 up, 3 in

  data:
    pools:   4 pools, 48 pgs
    objects: 22 objects, 8.95KiB
    usage:   1.24GiB used, 43.7GiB / 45.0GiB avail
    pgs:     48 active+clean
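The HEALTH_WARN is expected: the daemon we removed was the only standby, so cephfs now runs with no failover capacity. If running without a standby is intentional, you can tell Ceph that zero standbys are wanted, which clears the warning (the standby_count_wanted filesystem setting defaults to 1); otherwise deploy a replacement MDS:

[root@ceph02 ~]# ceph fs set cephfs standby_count_wanted 0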

