Elasticsearch Installation and Deployment (Standalone and Cluster)

Published: 2023-12-28 21:34:27  Author: 蓝色土耳其

I. Standalone Mode

1. Download the ES package (this walkthrough uses version 7.8.0) and extract it:

-rw-r--r--. 1 root root 319112561 Dec 28 15:39 elasticsearch-7.8.0-linux-x86_64.tar.gz
mkdir es_standalone
mv elasticsearch-7.8.0-linux-x86_64.tar.gz es_standalone
## Extract the tarball (the extraction step mentioned above)
cd es_standalone && tar -zxvf elasticsearch-7.8.0-linux-x86_64.tar.gz

2. For security reasons, Elasticsearch refuses to start as the root user on Linux, so create a dedicated user:

## Add a user named es
adduser es
## Set the es user's password
passwd es
## Hand ownership of the install directory to the es user
chown -R es:es es_standalone

3. Edit the configuration files

## Append the following to the end of config/elasticsearch.yml
cluster.name: elasticsearch
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
## Set the JVM heap size in config/jvm.options
-Xms512m
-Xmx512m

4. Start Elasticsearch

su es
./bin/elasticsearch
## The start fails the bootstrap checks with the following errors:
ERROR: [2] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /data/es_standalone/es/logs/elasticsearch.log

## On a subsequent start, after the file descriptor limit has been raised, one check still fails:
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
ERROR: Elasticsearch did not exit normally - check the logs at /data/es_standalone/es/logs/elasticsearch.log

5. Edit /etc/security/limits.conf and /etc/sysctl.conf. Note that if you check with ulimit -Hn / ulimit -Sn in the same terminal afterwards, you will still see the old values; close the terminal and open a new one (a fresh login session) and the new values show up.

## Append to /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535
* soft nproc 4096
* hard nproc 4096

## Append to /etc/sysctl.conf
vm.max_map_count = 655360
vm.swappiness = 1
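
Note that limits.conf only takes effect on a new login session, while the sysctl change must be loaded explicitly. A minimal way to apply and verify both (all standard commands, run as root):

## Load the new kernel parameters from /etc/sysctl.conf right away
sysctl -p
## Confirm the new value took effect
sysctl vm.max_map_count
## In a fresh login session for the es user, confirm the file descriptor limits
su - es -c 'ulimit -Hn; ulimit -Sn'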

6. Start ES again; this time it comes up successfully. Verify with a client tool that creating an index and documents works.
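
If you prefer the command line over a GUI client, a quick smoke test with curl works just as well (the index name test_index is arbitrary, used here only for illustration):

## Confirm the node answers
curl http://localhost:9200
## Create an index
curl -X PUT http://localhost:9200/test_index
## Index a document
curl -X POST http://localhost:9200/test_index/_doc -H 'Content-Type: application/json' -d '{"msg":"hello es"}'
## Search it back
curl 'http://localhost:9200/test_index/_search?pretty'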

II. Cluster Mode

7. Make three copies of the ES installation, named node-9400, node-9500, and node-9600. In each copy, edit elasticsearch.yml and config/jvm.options (the JVM settings are the same as in standalone mode; just set the heap size).

(Port 9200 speaks HTTP and is used for external communication, e.g. from Postman.)

(Port 9300 speaks the binary TCP transport protocol. ES cluster nodes communicate with each other over 9300, and Java clients that use the transport protocol, e.g. a Spring Boot app via the legacy TransportClient, also connect on 9300.)

## Append the following to config/elasticsearch.yml
cluster.name: cluster-es
## Node name; must be unique within the cluster
node.name: node-9400
## Hostname of the machine this node runs on, or localhost; "zjk" is a hostname
## and must have a resolution entry in /etc/hosts
network.host: zjk
## Whether this node is master-eligible, and whether it stores data
node.master: true
node.data: true
## External HTTP port (default 9200)
http.port: 9400
## Inter-node transport port (default 9300)
transport.tcp.port: 9401
## The head plugin needs these two settings enabled
http.cors.allow-origin: "*"
http.cors.enabled: true
http.max_content_length: 200mb
## New in ES 7.x: required to elect a master when bootstrapping a new cluster
cluster.initial_master_nodes: ["node-9400","node-9500","node-9600"]
## New in ES 7.x: node discovery
discovery.seed_hosts: ["zjk:9401","zjk:9501","zjk:9601"]
gateway.recover_after_nodes: 2
network.tcp.keep_alive: true
network.tcp.no_delay: true
transport.tcp.compress: true
## Max concurrent shard rebalance tasks cluster-wide (default 2)
cluster.routing.allocation.cluster_concurrent_rebalance: 16
## Concurrent recoveries per node when nodes are added/removed or shards rebalance (default 2)
cluster.routing.allocation.node_concurrent_recoveries: 16
## Concurrent primary-shard recoveries per node during initial recovery (default 4)
cluster.routing.allocation.node_initial_primaries_recoveries: 16
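
The post only shows the node-9400 file. Judging from the discovery.seed_hosts list above, the other two copies should differ only in these three lines:

## node-9500
node.name: node-9500
http.port: 9500
transport.tcp.port: 9501

## node-9600
node.name: node-9600
http.port: 9600
transport.tcp.port: 9601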

8. Start node-9400 first; once it is up without errors, check the cluster state over HTTP.

[2023-12-28T19:12:24,907][INFO ][o.e.c.s.ClusterApplierService] [node-9400] master node changed {previous [], current [{node-9400}{SAqVtiq6RvqF3jrLxT2-7w}
{VxxwgptaQSqip_Ni3EDdaw}{zjk}{192.168.23.131:9300}{dilmrt}{ml.machine_memory=8181792768, xpack.installed=true, transform.node=true, ml.max_open_jobs=20}]}, 
term: 1, version: 1, reason: Publication{term=1, version=1}
[2023-12-28T19:12:25,241][INFO ][o.e.h.AbstractHttpServerTransport] [node-9400] publish_address {zjk/192.168.23.131:9400}, bound_addresses {192.168.23.131:9400}
[2023-12-28T19:12:25,242][INFO ][o.e.n.Node               ] [node-9400] started

Request:
http://192.168.23.131:9400/_cat/nodes
Response:
192.168.23.131 17 57 2 0.01 0.10 0.13 dilmrt * node-9400
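
Besides _cat/nodes, the standard health endpoint gives a quick summary (status green/yellow/red, node counts):

curl 'http://192.168.23.131:9400/_cluster/health?pretty'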

9. Start 9500 and 9600; they may fail with the following error:

master not discovered yet, this node has not previously joined a bootstrapped (v7+) cluster, and this node must discover master-eligible nodes
Fixes:
1. The copies may have been made from the standalone node after it had already run; the leftover data directory carries the old cluster state and blocks joining. Delete each node's data directory and restart (see the sketch below).
2. transport.tcp.port may be misconfigured.
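
A minimal cleanup sketch for fix 1, assuming the three copies sit side by side under a directory such as /data/es_cluster (adjust to your actual layout; stop the nodes first, and be aware this wipes any index data they hold):

## Remove the stale cluster state copied over from the standalone node
for node in node-9400 node-9500 node-9600; do
    rm -rf /data/es_cluster/$node/data
done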

After the fixes, the same _cat/nodes request returns:
192.168.23.131 29 91 41 0.70 0.44 0.31 dilmrt - node-9600
192.168.23.131 46 91 11 0.70 0.44 0.31 dilmrt * node-9400
192.168.23.131 19 91 11 0.70 0.44 0.31 dilmrt - node-9500
The cluster information can now be reached through any of the three HTTP ports:
192.168.23.131:9400
192.168.23.131:9500
192.168.23.131:9600
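
A one-liner to confirm all three HTTP endpoints answer:

for p in 9400 9500 9600; do curl -s http://192.168.23.131:$p/_cat/nodes; done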
With that, the ES cluster setup is complete.

10. Configure the cluster to start on boot
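
The post ends here, so the following is only a sketch of one common approach: a systemd unit per node, run as the es user created earlier. The install path /data/es_cluster/node-9400 is an assumption based on the layout above; adjust it to yours, and replicate the unit for node-9500 and node-9600.

## /etc/systemd/system/es-node-9400.service (path and layout assumed)
[Unit]
Description=Elasticsearch node-9400
After=network.target

[Service]
Type=simple
User=es
Group=es
## Raise the limit here too; systemd services do not read /etc/security/limits.conf
LimitNOFILE=65535
ExecStart=/data/es_cluster/node-9400/bin/elasticsearch
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with systemctl daemon-reload && systemctl enable --now es-node-9400, then repeat for the other two nodes.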