OLAP Series: The Analytical Database ClickHouse, Primary/Replica Mode (Part 3)

Published: 2023-07-24 17:37:10  Author: 杨梅冲

I. Testing single-shard mode with one or more replicas

# 1. Stop the cluster
systemctl stop clickhouse-server

# Modify the configuration file
vim /etc/clickhouse-server/config.d/metrika.xml
    <remote_servers>
        <clickhouse_cluster_3shards_1replicas>
            <!-- 1 shard, 2 replicas -->
            <shard>
                <internal_replication>true</internal_replication>
                <replica>
                    <host>192.168.12.91</host>
                    <port>9000</port>
                </replica>
                <replica>
                    <host>192.168.12.90</host>
                    <port>9000</port>
                </replica>
            </shard>
        </clickhouse_cluster_3shards_1replicas>
    </remote_servers>
<!-- Replication identity (macro) configuration: these values uniquely identify a replica, so each instance's configuration must be unique -->
    <macros>
        <!-- Shard number of this node within the cluster (unique per shard); both nodes use 01 since they hold the same shard -->
        <shard>01</shard>
        <!-- Unique replica identifier within a shard: cluster01 on this node, cluster02 on the other -->
        <replica>cluster01</replica>
    </macros>
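The `{shard}` and `{replica}` placeholders used later in the ReplicatedMergeTree ZooKeeper path are filled in from this `<macros>` block. A minimal Python sketch of that substitution (an illustration of the behaviour, not ClickHouse source code), using the macro values configured above:

```python
def expand_macros(path: str, macros: dict) -> str:
    """Replace every {name} placeholder in the path with its macro value."""
    for name, value in macros.items():
        path = path.replace("{" + name + "}", value)
    return path

# Macro values from each node's metrika.xml
node1 = {"shard": "01", "replica": "cluster01"}
node2 = {"shard": "01", "replica": "cluster02"}

zk_path = "/clickhouse/tables/test02/test_local/{shard}"

# Both nodes expand to the SAME ZooKeeper path, so their tables join the
# same replication group; the replica macro distinguishes them inside it.
print(expand_macros(zk_path, node1))  # /clickhouse/tables/test02/test_local/01
print(expand_macros(zk_path, node2))  # /clickhouse/tables/test02/test_local/01
```

Because the expanded path is identical on both nodes while the replica names differ, the two tables replicate each other rather than acting as independent shards.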

Query the cluster:
clickhouse-client --password
 :) select * from system.clusters;

SELECT *
FROM system.clusters

Query id: 66d1008c-c5db-43bf-b799-7f8eebe23cc3

┌─cluster──────────────────────────────┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name─────┬─host_address──┬─port─┬─is_local─┬─user────┬─default_database─┬─errors_count─┬─slowdowns_count─┬─estimated_recovery_time─┐
│ clickhouse_cluster_3shards_1replicas │         1 │            1 │           1 │ 192.168.12.91 │ 192.168.12.91 │ 9000 │        1 │ default │                  │            0 │               0 │                       0 │
│ clickhouse_cluster_3shards_1replicas │         1 │            1 │           2 │ 192.168.12.90 │ 192.168.12.90 │ 9000 │        0 │ default │                  │            0 │               0 │                       0 │
└──────────────────────────────────────┴───────────┴──────────────┴─────────────┴───────────────┴───────────────┴──────┴──────────┴─────────┴──────────────────┴──────────────┴─────────────────┴─────────────────────────┘

2 rows in set. Elapsed: 0.001 sec.

II. Testing

1. Create the database from any one node
CREATE DATABASE IF NOT EXISTS test02 ON CLUSTER clickhouse_cluster_3shards_1replicas;

┌─host──────────┬─port─┬─status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ 192.168.12.91 │ 9000 │      0 │       │                   1 │                0 │
│ 192.168.12.90 │ 9000 │      0 │       │                   0 │                0 │
└───────────────┴──────┴────────┴───────┴─────────────────────┴──────────────────┘

2. Create the local replicated table
CREATE TABLE test02.test_local ON CLUSTER 'clickhouse_cluster_3shards_1replicas' (
ts DateTime,
tvid Int32,
area Int32,
count Int32
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/test02/test_local/{shard}', '{replica}')
PARTITION BY toYYYYMMDD(ts) 
ORDER BY (ts,tvid,area);

show tables;
┌─name───────┐
│ test_local │
└────────────┘

# 3. Create the distributed table
CREATE TABLE test02.test_all ON CLUSTER 'clickhouse_cluster_3shards_1replicas'  
AS test02.test_local  
ENGINE = Distributed(clickhouse_cluster_3shards_1replicas,test02,test_local,tvid);

# Insert one row through the distributed table
insert into test02.test_all values('2023-07-24 17:27:20',6,11011,100);
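The Distributed engine routes each inserted row by the sharding key (`tvid` here): for an integer key, the row goes to the shard whose weight interval contains `key % total_weight`. A minimal Python sketch of that routing rule (`pick_shard` is an illustrative name, not a ClickHouse API):

```python
import bisect

def pick_shard(key: int, weights: list) -> int:
    """Sketch of Distributed-engine routing for an integer sharding key:
    key % total_weight falls into one shard's cumulative-weight interval.
    Returns a 1-based shard number."""
    prefix, total = [], 0
    for w in weights:
        total += w
        prefix.append(total)
    return bisect.bisect_right(prefix, key % total) + 1

# This cluster has a single shard of weight 1, so every tvid maps to shard 1;
# with internal_replication=true, ReplicatedMergeTree then copies the row to
# both replicas instead of the Distributed table writing it twice.
print(pick_shard(6, [1]))     # 1
print(pick_shard(2, [2, 1]))  # 2 % 3 = 2 falls in the second shard's interval
```

With only one shard the routing is trivial, which is exactly why this setup exercises replication rather than sharding: the row must still appear on both replicas.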

# Query the local table and the distributed table
 :) select * from test_local;

SELECT *
FROM test_local

Query id: b8d53bc5-c795-4c0f-8522-adaf4fcb6611

┌──────────────────ts─┬─tvid─┬──area─┬─count─┐
│ 2023-07-24 17:27:20 │    6 │ 11011 │   100 │
└─────────────────────┴──────┴───────┴───────┘

1 row in set. Elapsed: 0.002 sec. 

 :) select * from test_all;

SELECT *
FROM test_all

Query id: 6c2efa70-9cc3-4334-b238-90a5b7a8b5c6

┌──────────────────ts─┬─tvid─┬──area─┬─count─┐
│ 2023-07-24 17:27:20 │    6 │ 11011 │   100 │
└─────────────────────┴──────┴───────┴───────┘

1 row in set. Elapsed: 0.002 sec.

Query the distributed table and the local table on each replica and compare the results: if the data is identical on both replicas, replication is working.