1. ceph-deploy Setup
1.1 Debian/Ubuntu
wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb https://download.ceph.com/debian-mimic/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt update
sudo apt install ceph-deploy
1.2 RHEL/CentOS
sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
cat << EOM > /etc/yum.repos.d/ceph.repo
[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
EOM
sudo yum update
sudo yum install ceph-deploy
Appendix: Aliyun mirror
[Ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/$basearch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[Ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.aliyun.com/ceph/rpm-mimic/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.aliyun.com/ceph/keys/release.asc
priority=1
2. Ceph Node Setup
2.1 Install Chrony (on each Ceph node)
sudo yum install -y chrony
sudo vim /etc/chrony.conf
server 172.16.0.1 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
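After editing the config, restart chronyd and confirm the node is actually syncing (a quick check, assuming chronyd is managed by systemd):

```shell
# restart chronyd so the new config takes effect, and enable it at boot
sudo systemctl restart chronyd
sudo systemctl enable chronyd
# '^*' in the output marks the currently selected time source
chronyc sources -v
# show the measured offset against the selected source
chronyc tracking
```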
2.2 Install an SSH server (on each Ceph node)
sudo apt install openssh-server
or:
sudo yum install openssh-server
sudo vim /etc/ssh/sshd_config
PermitRootLogin yes
UseDNS no
2.3 Create a ceph-deploy user (on each Ceph node)
ssh user@ceph-server
sudo useradd -d /home/{username} -m {username}
sudo passwd {username}
echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
sudo chmod 0440 /etc/sudoers.d/{username}
2.4 Enable passwordless SSH (on the admin node)
ssh-keygen
ssh-copy-id {username}@node1
ssh-copy-id {username}@node2
ssh-copy-id {username}@node3
vim ~/.ssh/config
Host node1
Hostname node1
User {username}
Host node2
Hostname node2
User {username}
Host node3
Hostname node3
User {username}
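With the config in place, a quick loop can confirm both key-based login and passwordless sudo on every node (node names here match the examples above):

```shell
# each node should print its hostname followed by OK, with no password prompts
for node in node1 node2 node3; do
  ssh "$node" 'sudo true && echo "$(hostname): OK"'
done
```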
3. Storage Cluster Quick Start
mkdir my-cluster
cd my-cluster
3.1 Starting over (optional)
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*
3.2 Create the cluster
Initialize the cluster:
ceph-deploy new {initial-monitor-node(s)}
For example:
ceph-deploy new node1
Appendix: sample ceph.conf
-----------------------------------
[global]
#
# cluster fsid, auto-generated; must be unique, empty by default
fsid = 1de5bfb3-b073-4974-b56b-64566bc598a7
# public network of the cluster, auto-generated; set with the --public-network option
public_network = 192.168.22.0/24
cluster network = 10.0.0.0/24
# initial mon members of the cluster, auto-generated
mon_initial_members = ceph1
mon_host = 192.168.22.11
# the cluster requires authentication by default, auto-generated
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
#
## custom settings below
#
# journal size, default 5120 MB
osd journal size = 10240
# default number of pool replicas, default 3
osd pool default size = 3
# default minimum number of pool replicas, default 0, i.e. size - (size / 2)
osd pool default min size = 2
# default number of PGs per pool, default 8
osd pool default pg num = 8
# default number of PGPs per pool, default 8
osd pool default pgp num = 8
# prevent OSDs from updating the CRUSH map on restart
osd crush update on start = false
# bucket type for chooseleaf CRUSH rules, default 1, usually a host containing one or more Ceph OSD daemons
osd crush chooseleaf type = 1
# percentage of disk space used before an OSD is considered full, default .95
mon osd full ratio = .95
# percentage of disk space used before an OSD is considered nearfull, default .85
mon osd nearfull ratio = .85
# percentage of disk space used, below OSD full, at which an OSD is considered too full to backfill, default .90
osd backfill full ratio = .90
# maximum number of backfills allowed to or from a single OSD, default 1
osd max backfills = 1
osd recovery max active = 1
osd recovery max single start = 1
osd recovery sleep = 0.5
#
[mon]
#
# clock drift allowed between monitors, in seconds, default 0.05
mon clock drift allowed = 2
# exponential backoff for clock drift warnings, default 5
mon clock drift warn backoff = 10
# whether monitors may delete pools, default false, i.e. pool deletion is forbidden
mon allow pool delete = false
#
[osd]
#
# earliest hour at which scheduled scrubbing may run, default 0
osd scrub begin hour = 0
# latest hour at which scheduled scrubbing may run, default 24
osd scrub end hour = 7
#
[client]
#
# enable caching for the RADOS Block Device (RBD), enabled by default
rbd cache = true
# start in writethrough mode and switch to writeback after the first flush request is received.
# enabling this is a conservative but safe setting, in case VMs running on rbd are too old to send flushes, e.g. the virtio driver in Linux before kernel 2.6.32; enabled by default
rbd cache writethrough until flush = true
-----------------------------------
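The pg num default above (8) is only suitable for tiny test clusters. A common rule of thumb, not from this document, is (OSD count × 100) / replica size, rounded up to a power of two; a minimal sketch with hypothetical values:

```shell
# hypothetical sizing: 3 OSDs, 3 replicas (adjust for your cluster)
osds=3
size=3
target=$(( osds * 100 / size ))        # raw PG target: 100
pg_num=1
while [ "$pg_num" -lt "$target" ]; do  # round up to the next power of 2
  pg_num=$(( pg_num * 2 ))
done
echo "$pg_num"                         # prints 128
```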
Install the packages:
ceph-deploy install {ceph-node} [...]
For example:
ceph-deploy install node1 node2 node3
Deploy the initial monitor(s) and gather the keys:
ceph-deploy mon create-initial
After this process completes, the local directory should contain the following keyrings:
ceph.client.admin.keyring
ceph.bootstrap-mgr.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-rgw.keyring
ceph.bootstrap-rbd.keyring
Distribute the configuration file and admin key:
ceph-deploy admin {ceph-node(s)}
For example:
ceph-deploy admin node1 node2 node3
Create a mgr (required only for luminous and later builds, i.e. >= 12.x):
ceph-deploy mgr create node1
Create OSDs:
ceph-deploy osd create --data {device} {ceph-node}
For example:
ceph-deploy osd create --data /dev/vdb node1
ceph-deploy osd create --data /dev/vdb node2
ceph-deploy osd create --data /dev/vdb node3
# In Ceph Luminous, a bluestore OSD is created by specifying --data, --block-db and --block-wal; for a single disk, specify only --data
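When every node uses the same data disk, the three osd create calls above can be collapsed into a loop (device and node names are the ones from the example):

```shell
# create one bluestore OSD per node, all backed by /dev/vdb
for node in node1 node2 node3; do
  ceph-deploy osd create --data /dev/vdb "$node"
done
```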
Check the cluster status:
ssh node1 sudo ceph health
ssh node1 sudo ceph -s
3.3 Expand the cluster
Add an mds:
ceph-deploy mds create {ceph-node}
For example:
ceph-deploy mds create node1
Add monitors:
ceph-deploy mon add {ceph-nodes}
For example:
ceph-deploy mon add node2 node3
Check the quorum status:
ceph quorum_status --format json-pretty
Add mgrs:
ceph-deploy mgr create node2 node3
Add an rgw:
ceph-deploy rgw create {gateway-node}
For example:
ceph-deploy rgw create node1
Appendix: sample rgw configuration
-----------------------------------
[client.node1]
rgw host = node1
rgw dns name = oss.test.com
rgw frontends = "civetweb port=80 num_threads=512 request_timeout_ms=30000"
-----------------------------------
3.4 Store/retrieve object data
echo {Test-data} > testfile.txt
ceph osd pool create mytest 8
rados put {object-name} {file-path} --pool=mytest
rados put test-object-1 testfile.txt --pool=mytest
rados -p mytest ls
ceph osd map {pool-name} {object-name}
ceph osd map mytest test-object-1
Ceph should output the object's location. For example:
osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]
Delete the test data:
rados rm test-object-1 --pool=mytest
ceph osd pool rm mytest mytest --yes-i-really-really-mean-it
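Because the sample ceph.conf above sets mon allow pool delete = false, the monitors will refuse to remove the pool. The flag can be flipped at runtime with injectargs, which takes effect without restarting the monitors:

```shell
# temporarily allow pool deletion on all monitors
ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
# ...remove the pool...
# then re-enable the safety
ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
```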
4. Block Device Quick Start
4.1 Install Ceph
Verify that you have an appropriate Linux kernel version:
lsb_release -a
uname -r
On the admin node:
ceph-deploy install ceph-client
ceph-deploy admin ceph-client
On the client node:
sudo chmod +r /etc/ceph/ceph.client.admin.keyring
4.2 Create a block device pool
ceph osd pool create {pool-name} {pg-num} [{pgp-num}] [replicated] [crush-rule-name] [expected-num-objects]
ceph osd pool create {pool-name} {pg-num} {pgp-num} erasure [erasure-code-profile] [crush-rule-name] [expected_num_objects]
ceph osd pool application enable {pool-name} {application-name}
rbd pool init <pool-name>
4.3 Configure a block device
rbd create foo --size 4096 --image-feature layering [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
sudo rbd map foo --name client.admin [-m {mon-IP}] [-k /path/to/ceph.client.admin.keyring]
sudo mkfs.ext4 -m0 /dev/rbd/rbd/foo
sudo mkdir /mnt/ceph-block-device
sudo mount /dev/rbd/rbd/foo /mnt/ceph-block-device
cd /mnt/ceph-block-device
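The mapping above does not survive a reboot. Ceph ships an rbdmap service that re-maps listed images at boot; a sketch, assuming the standard /etc/ceph/rbdmap file and rbdmap.service from the ceph-common packaging:

```shell
# one "pool/image  params" line per device to map at boot
echo "rbd/foo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" | sudo tee -a /etc/ceph/rbdmap
sudo systemctl enable rbdmap.service
# noauto keeps systemd from mounting before the rbd device exists;
# rbdmap mounts fstab entries on rbd devices after mapping them
echo "/dev/rbd/rbd/foo /mnt/ceph-block-device ext4 defaults,noauto 0 0" | sudo tee -a /etc/fstab
```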
5. Filesystem Quick Start
5.1 Prerequisites
Verify that you have an appropriate Linux kernel version:
lsb_release -a
uname -r
On the admin node:
ceph-deploy install ceph-client
Make sure the Ceph storage cluster is running and in an active + clean state, and that at least one Ceph metadata server is running:
ceph -s [-m {monitor-ip-address}] [-k {path/to/ceph.client.admin.keyring}]
5.2 Create a filesystem
Note: the MDS has already been created, but it will not become active until pools and a filesystem are created.
ceph osd pool create cephfs_data <pg_num>
ceph osd pool create cephfs_metadata <pg_num>
ceph fs new <fs_name> cephfs_metadata cephfs_data
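Once the filesystem exists, the MDS created earlier should transition to active; a quick read-only check:

```shell
ceph fs ls       # should list the new filesystem with its metadata/data pools
ceph mds stat    # should report the MDS as up:active once the fs is created
```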
5.3 Create a secret file
Get the key for a specific user:
cat ceph.client.admin.keyring
[client.admin]
key = AQCj2YpRiAe6CxAA7/ETt7Hcl9IyxyYciVs47w==
Create the secret file:
vim admin.secret
AQCj2YpRiAe6CxAA7/ETt7Hcl9IyxyYciVs47w==
Note: make sure the file permissions are appropriate for the intended user and the file is not readable by other users.
5.4 Kernel driver
sudo mkdir /mnt/mycephfs
sudo mount -t ceph {ip-address-of-monitor}:6789:/ /mnt/mycephfs
sudo mount -t ceph 192.168.0.1:6789:/ /mnt/mycephfs -o name=admin,secretfile=admin.secret
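To make the kernel mount persistent, an fstab entry mirroring the mount command above can be used; _netdev defers mounting until the network is up, and the secretfile path is an assumption (point it at the admin.secret file created in 5.3):

```shell
# fstab entry for the CephFS kernel client (secretfile path is hypothetical)
echo "192.168.0.1:6789:/ /mnt/mycephfs ceph name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev 0 0" | sudo tee -a /etc/fstab
```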
5.5 Filesystem in Userspace (FUSE)
sudo mkdir ~/mycephfs
sudo ceph-fuse -m {ip-address-of-monitor}:6789 ~/mycephfs
sudo ceph-fuse -k ./ceph.client.admin.keyring -m 192.168.0.1:6789 ~/mycephfs
6. Object Storage Quick Start
6.1 Install the object gateway
ceph-deploy install --rgw <client-node> [<client-node> ...]
6.2 Create an object gateway instance
ceph-deploy rgw create <client-node>
6.3 Configure the object gateway instance
Sample configuration:
[client.rgw.client-node]
rgw_host = client-node
rgw_dns_name = oss.test.com
rgw_frontends = "civetweb port=80 num_threads=512 request_timeout_ms=30000"
Restart the instance:
sudo systemctl restart ceph-radosgw.service
sudo service radosgw restart id=rgw.<short-hostname>
Open the port in the firewall:
sudo firewall-cmd --list-all
sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
sudo firewall-cmd --reload
Access:
http://<client-node>:80
You should see output similar to:
<?xml version="1.0" encoding="UTF-8"?>
<ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Owner>
<ID>anonymous</ID>
<DisplayName></DisplayName>
</Owner>
<Buckets>
</Buckets>
</ListAllMyBucketsResult>
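The anonymous listing above only proves the gateway answers. To actually use the S3 API a user is needed; radosgw-admin prints the generated access and secret keys as JSON (the uid and display name here are arbitrary examples):

```shell
# create an S3-capable user; note the access_key/secret_key in the output
sudo radosgw-admin user create --uid=testuser --display-name="Test User"
# inspect the user later if the keys are lost
sudo radosgw-admin user info --uid=testuser
```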