Deploying a highly available radosgw storage gateway, managing buckets with s3cmd, dynamic/static separation with Nginx+RGW, and monitoring the Ceph cluster with Prometheus

Published: 2023-03-29 16:17:49  Author: 滴滴滴

1. Object storage: characteristics and use cases
#RadosGW storage characteristics

The object storage gateway stores data as objects; besides the data itself, each object also carries its own metadata.
Objects are retrieved by Object ID. They cannot be accessed through an ordinary file-system mount using a path and file name; access is only possible through the API, or through third-party clients (which are themselves wrappers around the API).
Objects are not stored in a hierarchical directory tree but in a flat namespace. Amazon S3 calls this flat namespace a bucket, while Swift calls it a container.
Neither buckets nor containers can be nested (a bucket cannot contain another bucket).
A bucket must be authorized before it can be accessed; one account can be granted access to multiple buckets, and the permissions can differ per bucket.
Easy to scale out horizontally, with fast data lookup.
Client-side mounting is not supported, and the client must specify the object name when accessing it.
Not well suited to scenarios where files are modified or deleted very frequently.
Ceph uses buckets as the storage containers that hold object data and isolate users from one another: data is stored in buckets and user permissions are granted per bucket, so a user can be given different permissions on different buckets to implement access control.
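As a small illustration of per-bucket authorization, a minimal sketch using s3cmd (the bucket names here are hypothetical, and this assumes s3cmd has already been configured against the RGW endpoint as shown in section 3 below):

root@ceph-deploy:~# s3cmd mb s3://public-docs               # hypothetical bucket that will be world-readable
root@ceph-deploy:~# s3cmd mb s3://private-docs              # hypothetical bucket restricted to its owner
root@ceph-deploy:~# s3cmd setacl s3://public-docs --acl-public    # anyone may read objects in this bucket
root@ceph-deploy:~# s3cmd setacl s3://private-docs --acl-private  # only the owner may access this bucket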
#Bucket characteristics:
1. A bucket is the container that holds objects; every object must belong to a bucket. Bucket attributes such as region, access permissions and lifecycle can be set and modified, and these settings apply to all objects inside the bucket, so different management requirements can be met simply by creating different buckets.
2. The inside of a bucket is flat; there is no concept of file-system directories, and every object belongs directly to its bucket.
3. Each user can own multiple buckets.
4. A bucket name must be globally unique within the object storage service and cannot be changed after creation.
5. There is no limit on the number of objects inside a bucket.
#RadosGW architecture diagram


2. Deploy the radosgw storage gateway on two hosts for a highly available environment
#Install the radosgw service and initialize it

root@ceph-mgr1:~# apt install radosgw
root@ceph-mgr2:~# apt install radosgw

#On the ceph-deploy server, initialize ceph-mgr2 and ceph-mgr1 as RadosGW services:

[cephadmin@ceph-deploy ~]$ cd ceph-cluster/
[cephadmin@ceph-deploy ceph-cluster]$ ceph-deploy rgw create ceph-mgr2
[cephadmin@ceph-deploy ceph-cluster]$ ceph-deploy rgw create ceph-mgr1

#Verify the radosgw service status

#Verify the radosgw service processes:

root@ceph-mgr1:~# ps -ef|grep radosgw
ceph 3940 1 0 20:00 ? 00:00:17 /usr/bin/radosgw -f --cluster ceph --name client.rgw.ceph-mgr1 --setuser ceph --setgroup ceph
root 4640 2944 0 20:48 pts/0 00:00:00 grep --color=auto radosgw
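To further confirm the gateway is up, check the listening port and the cluster service map; a quick sketch, assuming the default civetweb port 7480 has not been changed yet:

root@ceph-mgr1:~# ss -tnlp | grep radosgw        # the rgw frontend should be listening on 7480 by default
root@ceph-mgr1:~# curl http://ceph-mgr1:7480     # an anonymous request should return a ListAllMyBucketsResult XML document
cephadmin@ceph-deploy:~/ceph-cluster$ ceph -s    # the services section should now report two active rgw daemons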

#RadosGW storage pool types

#View the default radosgw pool information:

cephadmin@ceph-deploy:~/ceph-cluster$ radosgw-admin zone get --rgw-zone=default --rgw-zonegroup=default
#Verify the RGW zone information

cephadmin@ceph-deploy:~$ radosgw-admin zone get --rgw-zone=default
{
    "id": "41494222-626e-4bfa-9d1a-7bf35c863cba",
    "name": "default",
    "domain_root": "default.rgw.meta:root",
    "control_pool": "default.rgw.control",
    "gc_pool": "default.rgw.log:gc",
    "lc_pool": "default.rgw.log:lc",
    "log_pool": "default.rgw.log",
    "intent_log_pool": "default.rgw.log:intent",
    "usage_log_pool": "default.rgw.log:usage",
    "roles_pool": "default.rgw.meta:roles",
    "reshard_pool": "default.rgw.log:reshard",
    "user_keys_pool": "default.rgw.meta:users.keys",
    "user_email_pool": "default.rgw.meta:users.email",
    "user_swift_pool": "default.rgw.meta:users.swift",
    "user_uid_pool": "default.rgw.meta:users.uid",
    "otp_pool": "default.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "default.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "default.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "default.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "realm_id": "",
    "notif_pool": "default.rgw.log:notif"
}
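The zone above only maps logical data types to RADOS pools; the pools themselves are created automatically the first time they are used. A quick way to confirm which RGW pools exist (pool names may differ slightly between Ceph releases):

cephadmin@ceph-deploy:~/ceph-cluster$ ceph osd pool ls | grep rgw   # list the default.rgw.* pools
cephadmin@ceph-deploy:~/ceph-cluster$ ceph df                       # show per-pool usage, including the RGW pools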


#RadosGW service high-availability configuration

#Custom HTTP port:
The configuration file can be edited on the ceph-deploy server and then pushed to all nodes, or each radosgw server can be edited individually with the same settings, after which the RGW service is restarted.

[root@ceph-mgr2 ~]# vim /etc/ceph/ceph.conf
#Append the custom configuration for the current node at the end of the file:
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = civetweb port=9900

#Restart the service

[root@ceph-mgr2 ~]# systemctl restart ceph-radosgw@rgw.ceph-mgr2.service
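After the restart, verify that radosgw is listening on the custom port (a quick check on ceph-mgr2; repeat on ceph-mgr1 once it carries the same configuration):

[root@ceph-mgr2 ~]# ss -tnlp | grep 9900          # the rgw frontend should now be bound to port 9900
[root@ceph-mgr2 ~]# curl http://ceph-mgr2:9900    # an anonymous request should return the ListAllMyBucketsResult XML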

#Install and configure keepalived (VIP):

root@ceph-client4-ubuntu1804:~# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    notification_email {
        acassen
    }
    notification_email_from Alexandre.Cassen@firewall.loc
    smtp_server 192.168.200.1
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
vrrp_instance VI_1 {
    state MASTER
    interface eth1
    garp_master_delay 10
    smtp_alert
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        # optional label. should be of the form "realdev:sometext" for
        # compatibility with ifconfig.
        192.168.6.188 dev eth1 label eth1:0
    }
}
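After restarting keepalived on the MASTER node, the VIP 192.168.6.188 should be bound to eth1; a quick verification sketch:

root@ceph-client4-ubuntu1804:~# systemctl restart keepalived
root@ceph-client4-ubuntu1804:~# ip addr show eth1 | grep 192.168.6.188   # the VIP should appear with the eth1:0 label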

#Install and configure the reverse proxy (haproxy):

[root@ceph-client1-centos7 ~]# vim /etc/haproxy/haproxy.cfg
listen ceph-rgw
  bind 192.168.6.188:80
  mode tcp
  server rgw1 192.168.6.104:9900 check inter 3s fall 3 rise 5
  server rgw2 192.168.6.105:9900 check inter 3s fall 3 rise 5

#Test the HTTP reverse proxy
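A simple test is to request the VIP from any host that can reach it; haproxy should forward the request to one of the RGW backends (a sketch, assuming haproxy has been restarted with the configuration above):

root@ceph-deploy:~# curl http://192.168.6.188/    # should return the anonymous ListAllMyBucketsResult XML from an RGW backend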

#RadosGW HTTPS:
Generate a self-signed certificate on the rgw node and configure radosgw to enable SSL.
#Self-signed certificate

[root@ceph-mgr2 ~]# cd /etc/ceph/
[root@ceph-mgr2 ceph]# mkdir certs
[root@ceph-mgr2 ceph]# cd certs/
[root@ceph-mgr2 certs]# openssl genrsa -out civetweb.key 2048
[root@ceph-mgr2 certs]# openssl req -new -x509 -key civetweb.key -out civetweb.crt -subj "/CN=rgw.magedu.net"
[root@ceph-mgr2 certs]# cat civetweb.key civetweb.crt > civetweb.pem
[root@ceph-mgr2 certs]# tree
.
├── civetweb.crt
├── civetweb.key
└── civetweb.pem
0 directories, 3 files
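Optionally inspect the self-signed certificate before using it (a quick check):

[root@ceph-mgr2 certs]# openssl x509 -in civetweb.crt -noout -subject -dates   # confirm CN=rgw.magedu.net and the validity period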

#SSL configuration:

[root@ceph-mgr2 certs]# vim /etc/ceph/ceph.conf
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = "civetweb port=9900+9443s
ssl_certificate=/etc/ceph/certs/civetweb.pem"
[root@ceph-mgr2 certs]# systemctl restart ceph-radosgw@rgw.ceph-mgr2.service

#Verify the HTTPS port

#Verify access:
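A minimal verification sketch; because the certificate is self-signed, curl needs -k to skip verification:

[root@ceph-mgr2 ~]# ss -tnlp | grep -E '9900|9443'     # both the HTTP and HTTPS frontends should be listening
[root@ceph-mgr2 ~]# curl -k https://ceph-mgr2:9443     # HTTPS access to the local rgw frontend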

#Load balancer: listeners and real servers

#ceph http access
listen ceph-rgw
  bind 192.168.6.188:80
  mode tcp
  server rgw1 192.168.6.104:9900 check inter 3s fall 3 rise 5
  server rgw2 192.168.6.105:9900 check inter 3s fall 3 rise 5

#ceph https access
listen ceph-rgw-https
  bind 192.168.6.188:443
  mode tcp
  server rgw1 192.168.6.104:9443 check inter 3s fall 3 rise 5
  server rgw2 192.168.6.105:9443 check inter 3s fall 3 rise 5

#Restart the load balancer

root@ceph-client4-ubuntu1804:~# systemctl restart haproxy
#Test access:
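A quick test of both listeners through the VIP (a sketch; -k is needed because the backend certificate is self-signed and haproxy only passes the TCP stream through):

root@ceph-deploy:~# curl http://192.168.6.188/
root@ceph-deploy:~# curl -k https://192.168.6.188/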

#Logging and other tuning options:

[root@ceph-mgr2 certs]# mkdir /var/log/radosgw
[root@ceph-mgr2 certs]# chown ceph.ceph /var/log/radosgw

#Current configuration

[root@ceph-mgr2 ceph]# vim ceph.conf
[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = "civetweb port=9900+9443s
ssl_certificate=/etc/ceph/certs/civetweb.pem
error_log_file=/var/log/radosgw/civetweb.error.log
access_log_file=/var/log/radosgw/civetweb.access.log
request_timeout_ms=30000 num_threads=200"

#https://docs.ceph.com/en/mimic/radosgw/config-ref/
num_threads defaults to the value of rgw_thread_pool_size (100).
#Restart the service

[root@ceph-mgr2 certs]# systemctl restart ceph-radosgw@rgw.ceph-mgr2.service
#Access test:

[root@ceph-mgr2 certs]# curl -k https://172.31.6.105:9443

#Verify the logs:

[root@ceph-mgr2 certs]# tail /var/log/radosgw/civetweb.access.log
172.31.6.108 - - [01/Jan/2021:17:02:28 +0800] "GET / HTTP/1.1" 200 413 - curl/7.29.0
172.31.6.108 - - [01/Jan/2021:17:02:29 +0800] "GET / HTTP/1.1" 200 413 - curl/7.29.0

3. Managing buckets and uploading/downloading data with s3cmd
#RGW server configuration
In a real production environment, RGW1 and RGW2 use exactly the same configuration parameters.

vim /etc/ceph/ceph.conf
[client.rgw.ceph-mgr1]
rgw_host = ceph-mgr1
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.magedu.net

[client.rgw.ceph-mgr2]
rgw_host = ceph-mgr2
rgw_frontends = civetweb port=9900
rgw_dns_name = rgw.magedu.net

#Create an RGW account

cephadmin@ceph-deploy:~/ceph-cluster$ radosgw-admin user create --uid="user1" --display-name="user1"
{
    "user_id": "user1",
    "display_name": "user1",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [
        {
            "user": "user1",
            "access_key": "Y1942UDT3SKLD4OCZ9SR",
            "secret_key": "e1qni0F2cJWAXpkX6YBA1zxo5syLL9J98QMvvDfH"
        }
    ],

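The access_key and secret_key above are what the S3 client will use. If they need to be displayed again later, the user details can be re-queried at any time (a quick sketch):

cephadmin@ceph-deploy:~/ceph-cluster$ radosgw-admin user info --uid=user1   # re-display the user, including its keys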

#Install the s3cmd client

cephadmin@ceph-deploy:/home/ceph/ceph-cluster$ sudo apt install s3cmd

#Add name resolution for the RGW endpoint on the s3cmd client

cephadmin@ceph-deploy:/home/ceph/ceph-cluster$ cat /etc/hosts
127.0.0.1 localhost
127.0.1.1 ubuntu.example.local ubuntu
#.................................................
172.31.6.108 ceph-node3.example.local ceph-node3
172.31.6.109 ceph-node4.example.local ceph-node4
172.31.6.105 rgw.magedu.net

#Configure the s3cmd execution environment

root@ceph-deploy:~# s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for
using the env variables.
Access Key: Y1942UDT3SKLD4OCZ9SR #enter the user's access key
Secret Key: e1qni0F2cJWAXpkX6YBA1zxo5syLL9J98QMvvDfH #enter the user's secret key
Default Region [US]: #region option, just press Enter
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: rgw.magedu.net:9900 #RGW domain name
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and
"%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket
[%(bucket)s.s3.amazonaws.com]: rgw.magedu.net:9900/%(bucket) #bucket domain-name format
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: #encryption password, press Enter to skip
Path to GPG program [/usr/bin/gpg]: #path to the gpg command
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No #whether to use HTTPS
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name: #proxy
New settings: #final configuration
Access Key: Y1942UDT3SKLD4OCZ9SR
Secret Key: e1qni0F2cJWAXpkX6YBA1zxo5syLL9J98QMvvDfH
Default Region: US
S3 Endpoint: rgw.magedu.net:9900
DNS-style bucket+hostname:port template for accessing a bucket:
rgw.magedu.net:9900/%(bucket)
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0
Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y #save the configuration above
Configuration saved to '/var/lib/ceph/.s3cfg'


#Verify the credentials file

root@ceph-deploy:~# cat /root/.s3cfg
[default]
access_key = Y1942UDT3SKLD4OCZ9SR
access_token =
add_encoding_exts =
add_headers =
bucket_location = US
ca_certs_file =
cache_file =
check_ssl_certificate = True
check_ssl_hostname = True
cloudfront_host = cloudfront.amazonaws.com
default_mime_type = binary/octet-stream
delay_updates = False
delete_after = False
delete_after_fetch = False
delete_removed = False
dry_run = False
enable_multipart = True
encoding = UTF-8
encrypt = False
expiry_date =
expiry_days =
expiry_prefix =
follow_symlinks = False
force = False
get_continue = False
gpg_command = /usr/bin/gpg
gpg_decrypt = %(gpg_command)s -d --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_encrypt = %(gpg_command)s -c --verbose --no-use-agent --batch --yes --passphrase-fd %(passphrase_fd)s -o %(output_file)s %(input_file)s
gpg_passphrase =
guess_mime_type = True
host_base = rgw.magedu.net:9900
host_bucket = rgw.magedu.net:9900/%(bucket)
human_readable_sizes = False
invalidate_default_index_on_cf = False
invalidate_default_index_root_on_cf = True
invalidate_on_cf = False
kms_key =
limit = -1
limitrate = 0
list_md5 = False
log_target_prefix =
long_listing = False
max_delete = -1
mime_type =
multipart_chunk_size_mb = 15
multipart_max_chunks = 10000
preserve_attrs = True
progress_meter = True
proxy_host =
proxy_port = 0
put_continue = False
recursive = False
recv_chunk = 65536
reduced_redundancy = False
requester_pays = False
restore_days = 1
restore_priority = Standard
secret_key = e1qni0F2cJWAXpkX6YBA1zxo5syLL9J98QMvvDfH
send_chunk = 65536
server_side_encryption = False
signature_v2 = False
signurl_use_https = False
simpledb_host = sdb.amazonaws.com
skip_existing = False
socket_timeout = 300
stats = False
stop_on_error = False
storage_class =
urlencoding_mode = normal
use_http_expect = False
use_https = False
use_mime_magic = True
verbosity = WARNING
website_endpoint = http://%(bucket)s.s3-website-%(location)s.amazonaws.com/
website_error =
website_index = index.html


#Verify data upload with the s3cmd command-line client
#Create buckets to verify permissions

root@ceph-deploy:~# s3cmd la
root@ceph-deploy:~# s3cmd mb s3://css
Bucket 's3://css/' created
root@ceph-deploy:~# s3cmd mb s3://images
Bucket 's3://images/' created
root@ceph-deploy:~# wget https://img1.jcloudcs.com/portal/brand/2021/fl1-2.jpg
--2023-01-09 22:37:39-- https://img1.jcloudcs.com/portal/brand/2021/fl1-2.jpg
Resolving img1.jcloudcs.com (img1.jcloudcs.com)... 121.226.246.3
Connecting to img1.jcloudcs.com (img1.jcloudcs.com)|121.226.246.3|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1294719 (1.2M) [image/jpeg]
Saving to: ‘fl1-2.jpg’

fl1-2.jpg 100%[==============================================================================>] 1.23M 2.71MB/s in 0.5s

2023-01-09 22:37:40 (2.71 MB/s) - ‘fl1-2.jpg’ saved [1294719/1294719]

root@ceph-deploy:~# ls
fl1-2.jpg

#Verify data upload

root@ceph-deploy:~# s3cmd put fl1-2.jpg s3://images/jpg/
upload: 'fl1-2.jpg' -> 's3://images/jpg/fl1-2.jpg' [1 of 1]
1294719 of 1294719 100% in 1s 658.20 kB/s done
root@ceph-deploy:~# s3cmd ls s3://images/
DIR s3://images/jpg/
root@ceph-deploy:~# s3cmd ls s3://images/jpg/
2023-01-09 14:39 1294719 s3://images/jpg/fl1-2.jpg

#Verify file download

root@ceph-deploy:~# s3cmd get s3://images/jpg/fl1-2.jpg /opt/
download: 's3://images/jpg/fl1-2.jpg' -> '/opt/fl1-2.jpg' [1 of 1]
1294719 of 1294719 100% in 0s 37.46 MB/s done

#Delete a file

root@ceph-deploy:/opt# s3cmd ls s3://images/jpg/
2023-01-09 14:39 1294719 s3://images/jpg/fl1-2.jpg
root@ceph-deploy:/opt# s3cmd rm s3://images/jpg/fl1-2.jpg
delete: 's3://images/jpg/fl1-2.jpg'
root@ceph-deploy:/opt# s3cmd ls s3://images/jpg/
root@ceph-deploy:/opt# s3cmd ls s3://images/
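Beyond single put/get operations, s3cmd can also sync whole directories and remove buckets; a small sketch (the local directory path is hypothetical):

root@ceph-deploy:/opt# s3cmd sync /opt/webdata/ s3://css/   # upload a local directory tree (hypothetical path)
root@ceph-deploy:/opt# s3cmd du s3://css/                   # show how much space the bucket uses
root@ceph-deploy:/opt# s3cmd rb s3://BUCKET                 # remove a bucket once it is empty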

4. Dynamic/static separation with Nginx+RGW and a short-video example
#List the buckets

root@ceph-deploy:/home/jack#s3cmd ls s3://
2023-01-09 14:36 s3://css
2023-01-09 14:37 s3://images
2023-01-10 14:27 s3://video
2023-01-11 12:54 s3://videos

#Create a bucket policy file

root@ceph-deploy:/home/jack#cat video-bucket_policy
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": [
            "arn:aws:s3:::videos/*"
        ]
    }]
}

#Apply the policy file to the videos bucket

root@ceph-deploy:/home/jack#s3cmd setpolicy video-bucket_policy s3://videos
s3://videos/: Policy updated
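The applied policy can be checked with s3cmd info (a quick verification sketch):

root@ceph-deploy:/home/jack# s3cmd info s3://videos    # the Policy field should show the JSON document applied above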

#Download an mp4 file

root@ceph-deploy:/home/jack#wget video.699pic.com/videos/04/93/05/b_X2mpQyRS9Rmy1641049305.mp4
root@ceph-deploy:/home/jack#mv b_X2mpQyRS9Rmy1641049305.mp4 yanhua.mp4

#Upload the yanhua.mp4 object to the videos bucket

root@ceph-deploy:/home/jack#s3cmd put yanhua.mp4 s3://videos
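Because the policy allows anonymous s3:GetObject on videos/*, the object should now be readable without any credentials (a quick check, assuming the hosts entry for rgw.magedu.net points at an RGW node):

root@ceph-deploy:/home/jack# curl -I http://rgw.magedu.net:9900/videos/yanhua.mp4   # should return HTTP 200 without authentication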

#Install nginx on the client

Install the build dependencies first
root@ceph-client3-ubuntu1804:~# apt install iproute2 ntpdate tcpdump telnet traceroute nfs-kernel-server nfs-common gcc openssh-server lrzsz tree openssl libssl-dev libpcre3 libpcre3-dev zlib1g-dev iotop unzip zip

root@ceph-client3-ubuntu1804:~# cd /usr/local/src/
root@ceph-client3-ubuntu1804:/usr/local/src# wget https://nginx.org/download/nginx-1.22.0.tar.gz
root@ceph-client3-ubuntu1804:/usr/local/src# tar xvf nginx-1.22.0.tar.gz
root@ceph-client3-ubuntu1804:/usr/local/src# cd nginx-1.22.0/

root@ceph-client3-ubuntu1804:/usr/local/src/nginx-1.22.0# ./configure --prefix=/apps/nginx \
--user=nginx \
--group=nginx \
--with-http_ssl_module \
--with-http_v2_module \
--with-http_realip_module \
--with-http_stub_status_module \
--with-http_gzip_static_module \
--with-pcre \
--with-stream \
--with-stream_ssl_module \
--with-stream_realip_module

root@ceph-client3-ubuntu1804:/usr/local/src/nginx-1.22.0# make && make install


#Configure the nginx.conf file as follows

root@ceph-client4-ubuntu1804:/apps/nginx/conf# cat nginx.conf
user root;
events { worker_connections 1024; }
http {
    include mime.types;
    # requests for video files are proxied to the RGW upstream; everything else is served by nginx itself
    upstream videos {
        server 192.168.6.104:9900;
        server 192.168.6.105:9900;
    }
    server {
        listen 80;
        server_name rgw.magedu.net rgw.magedu.com;
        proxy_set_header Host $host;
        location ~* \.(mp4|avi)$ {
            proxy_pass http://videos;
        }
    }
}

#Start the nginx service

/apps/nginx/sbin/nginx -c /apps/nginx/conf/nginx.conf
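Before touching the laptop's hosts file, the proxy can be tested locally; the Host header must match server_name so that nginx selects the right virtual host (a quick sketch):

root@ceph-client4-ubuntu1804:~# /apps/nginx/sbin/nginx -t                                              # validate the configuration
root@ceph-client4-ubuntu1804:~# curl -sI -H "Host: rgw.magedu.net" http://127.0.0.1/videos/yanhua.mp4  # .mp4 requests should be proxied to the RGW upstream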

#Configure the hosts file on your laptop and add the domain name

192.168.6.204 rgw.magedu.net

#Open http://rgw.magedu.net/videos/yanhua.mp4 in a browser to access the yanhua.mp4 object in the videos bucket; the request succeeds.


5. Enable the Ceph dashboard and monitor the Ceph cluster with Prometheus
#Deploy Prometheus
root@ceph-mon1:~# mkdir /apps
root@ceph-mon1:~# cd /apps/
root@ceph-mon1:/apps# tar xvf prometheus-2.40.5.linux-amd64.tar.gz
root@ceph-mon1:/apps# ln -sv /apps/prometheus-2.40.5.linux-amd64 /apps/prometheus
root@ceph-mon1:/apps# cd prometheus
root@ceph-mon1:/apps/prometheus# vim /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus Server
Documentation=https://prometheus.io/docs/introduction/overview/
After=network.target
[Service]
Restart=on-failure
WorkingDirectory=/apps/prometheus/
ExecStart=/apps/prometheus/prometheus --config.file=/apps/prometheus/prometheus.yml --web.enable-lifecycle
[Install]
WantedBy=multi-user.target

root@ceph-mon1:/apps/#systemctl daemon-reload && systemctl restart prometheus && systemctl enable prometheus.service
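Verify that Prometheus started and its web interface is reachable (a quick check on the local node):

root@ceph-mon1:/apps/prometheus# systemctl status prometheus --no-pager
root@ceph-mon1:/apps/prometheus# curl -s http://127.0.0.1:9090/-/ready    # should report that Prometheus is ready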

#Deploy node_exporter
root@prometheus-node1:~# mkdir /apps
root@prometheus-node1:~# cd /apps
root@prometheus-node1:/apps# tar xvf node_exporter-1.5.0.linux-amd64.tar.gz
root@prometheus-node1:/apps# ln -sv /apps/node_exporter-1.5.0.linux-amd64 /apps/node_exporter
'/apps/node_exporter' -> '/apps/node_exporter-1.5.0.linux-amd64'
root@prometheus-node1:/apps# ll /apps/node_exporter/
total 17820
-rw-r--r-- 1 3434 3434 11357 Dec 5 19:15 LICENSE
-rwxr-xr-x 1 3434 3434 18228926 Dec 5 19:10 node_exporter
-rw-r--r-- 1 3434 3434 463 Dec 5 19:15 NOTICE

Create the service file:
root@prometheus-node1:~# vim /etc/systemd/system/node-exporter.service
[Unit]
Description=Prometheus Node Exporter
After=network.target
[Service]
ExecStart=/apps/node_exporter/node_exporter
[Install]
WantedBy=multi-user.target
root@prometheus-node1:~# systemctl daemon-reload && systemctl restart node-exporter && systemctl enable node-exporter.service
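Verify that node_exporter is serving metrics on port 9100 (a quick check on each node):

root@prometheus-node1:~# curl -s http://127.0.0.1:9100/metrics | head    # should print the first few metric lines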

#Configure the Prometheus server scrape targets and verify

root@ceph-mon1:/apps/prometheus# vim /apps/prometheus/prometheus.yml
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["192.168.6.106:9100","192.168.6.107:9100","192.168.6.108:9100","192.168.6.109:9100"]
root@ceph-mon1:/apps/prometheus#systemctl restart prometheus.service

#Monitor the Ceph cluster through Prometheus
#Enable the prometheus mgr module
[cephadmin@ceph-deploy ceph-cluster]$ ceph mgr module enable prometheus
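Once the module is enabled, the active ceph-mgr exposes metrics on port 9283; verify this before wiring it into haproxy (a quick sketch; only the active mgr answers, so adjust the address to your environment):

cephadmin@ceph-deploy:~/ceph-cluster$ ceph mgr services                                   # should list a "prometheus" endpoint on port 9283
cephadmin@ceph-deploy:~/ceph-cluster$ curl -s http://192.168.6.104:9283/metrics | head    # ceph_* metrics from the active mgr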

#Verify the manager metrics
#Configure haproxy
#ceph mgr prometheus metrics
listen ceph-prometheus-9283
  bind 192.168.6.188:9283
  mode tcp
  server rgw1 192.168.6.104:9283 check inter 3s fall 3 rise 5
  server rgw2 192.168.6.105:9283 check inter 3s fall 3 rise 5
root@ceph-client4-ubuntu1804:~# systemctl restart haproxy.service

#Configure Prometheus to scrape the Ceph metrics
root@ceph-mon1:/apps/prometheus# vim /apps/prometheus/prometheus.yml
  - job_name: "prometheus-ceph"
    static_configs:
      - targets: ["192.168.6.188:9283"]
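Since Prometheus was started with --web.enable-lifecycle, the new scrape job can be loaded without a full restart (a quick sketch; promtool ships in the Prometheus tarball):

root@ceph-mon1:/apps/prometheus# ./promtool check config prometheus.yml        # validate the edited configuration
root@ceph-mon1:/apps/prometheus# curl -X POST http://127.0.0.1:9090/-/reload   # reload the configuration via the lifecycle API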
#Verify the data:

#Display the monitoring data in Grafana

#Configure the data source
Add a Prometheus data source in Grafana

#Import dashboard templates

————————————————
Copyright notice: this article was originally written by the CSDN blogger "yong_shh" and is licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/weixin_42896216/article/details/128585549