HBase Component Installation and Configuration

Published: 2023-07-06 18:29:06  Author: 雙_木


1.1. Objectives

After completing this experiment, you should be able to:
Install and configure HBase
Use common HBase shell commands

1.2. Requirements

Understand how HBase works
Be familiar with common HBase shell commands

1.3. Environment

The main resources required for this experiment are listed in Table 1-1.
Table 1-1 Resource environment
Server cluster: single node; minimum machine spec: dual-core CPU, 8 GB RAM, 100 GB disk
Runtime environment: CentOS 7.3
Services and components: installed as required by the experiment

1.4. Procedure

1.4.1. Task 1: Install and Configure HBase

1.4.1.1. Step 1: Extract the HBase installation package on the master node
[root@master ~]# tar xf hbase-1.2.1-bin.tar.gz -C /usr/local/src
[root@master ~]# cd /usr/local/src
[root@master src]# ls
hadoop  hbase-1.2.1  hive  jdk  zookeeper

1.4.1.2. Step 2: Rename the HBase installation directory
[root@master src]# mv hbase-1.2.1/ hbase
[root@master src]# ls
hadoop  hbase  hive  jdk  zookeeper
1.4.1.3. Step 3: Add environment variables on all nodes
[root@master src]# vi /etc/profile.d/hb.sh
Add the following to the file:
export HBASE_HOME=/usr/local/src/hbase 
export PATH=$HBASE_HOME/bin:$PATH 
1.4.1.4. Step 4: Apply the environment variables on all nodes
[root@master src]# source /etc/profile.d/hb.sh
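The two export lines can be smoke-tested without touching /etc/profile.d — a minimal sketch, sourcing a temporary file in place of hb.sh and checking the result:

```shell
# Write the profile snippet to a temp file (stand-in for /etc/profile.d/hb.sh)
tmp_profile=$(mktemp)
cat > "$tmp_profile" <<'EOF'
export HBASE_HOME=/usr/local/src/hbase
export PATH=$HBASE_HOME/bin:$PATH
EOF
# Source it and verify both variables took effect
. "$tmp_profile"
echo "HBASE_HOME=$HBASE_HOME"
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "PATH contains \$HBASE_HOME/bin" ;;
  *)                     echo "PATH is missing \$HBASE_HOME/bin" ;;
esac
rm -f "$tmp_profile"
```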
1.4.1.5. Step 5: Enter the configuration directory on the master node
[root@master src]# cd hbase/conf/
[root@master conf]# 

1.4.1.6. Step 6: Configure hbase-env.sh on the master node

[root@master conf]# vi hbase-env.sh

#Java installation path
export JAVA_HOME=/usr/local/src/jdk

#true: use the ZooKeeper bundled with HBase; false: use the separately installed ZooKeeper
export HBASE_MANAGES_ZK=false

#HBase classpath
export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop/

#Comment out the following two lines (the PermSize options were removed in Java 8)
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
1.4.1.7. Step 7: Configure hbase-site.xml on the master node
<configuration>
	<property>
		<name>hbase.rootdir</name>
		<value>hdfs://master:9000/hbase</value>
		<description>The directory shared by region servers.</description>
	</property>
	<property>
                <name>hbase.master.info.port</name>
                <value>60010</value>
        </property>
	<property>
                <name>hbase.zookeeper.property.clientPort</name>
                <value>2181</value>
                <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
        </property>
	<property>
                <name>zookeeper.session.timeout</name>
                <value>12000</value>
        </property>
	<property>
                <name>hbase.zookeeper.quorum</name>
                <value>master,slave1,slave2</value>
        </property>
	<property>
                <name>hbase.tmp.dir</name>
                <value>/usr/local/src/hbase/tmp</value>
        </property>
	<property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
</configuration>
hbase.rootdir: the directory HBase writes its data to. By default it points to /tmp/hbase-${user.name}, so data would be lost after a reboot (the OS cleans /tmp on restart).
hbase.zookeeper.property.clientPort: the port on which clients connect to ZooKeeper.
zookeeper.session.timeout: the session timeout between a RegionServer and ZooKeeper. When it expires, ZooKeeper removes the RegionServer from the cluster list; on receiving the removal notice, the HMaster rebalances the regions that server was serving onto the surviving RegionServers.
hbase.zookeeper.quorum: defaults to localhost; lists the servers in the ZooKeeper ensemble.
hbase.master.info.port: the port of the HBase master web UI.
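To confirm a property landed in hbase-site.xml with the intended value, the name/value pairs can be pulled out with awk. A sketch, run here against a throwaway copy of the file (path and contents are illustrative):

```shell
# Create a stand-in hbase-site.xml to demonstrate on
site=$(mktemp)
cat > "$site" <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
EOF
get_prop() {
  # Print the <value> that follows the matching <name> line
  awk -v want="$1" '
    /<name>/  { gsub(/.*<name>|<\/name>.*/, "");   name=$0; next }
    /<value>/ { gsub(/.*<value>|<\/value>.*/, ""); if (name == want) print }
  ' "$site"
}
rootdir=$(get_prop hbase.rootdir)
zk_port=$(get_prop hbase.zookeeper.property.clientPort)
echo "hbase.rootdir=$rootdir"
echo "clientPort=$zk_port"
rm -f "$site"
```

On a real node, point `site` at /usr/local/src/hbase/conf/hbase-site.xml instead.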
1.4.1.8. Step 8: Edit the regionservers file on the master node
# Delete localhost and list one slave hostname per line
 [root@master conf]# vi regionservers 
[root@master conf]# cat regionservers 
slave1
slave2
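The regionservers file can also be generated rather than edited by hand — a sketch, writing to a temporary path so it is safe to try anywhere:

```shell
# Generate a regionservers file from a list of slave hostnames,
# one per line (temp path stands in for hbase/conf/regionservers)
regionservers=$(mktemp)
printf '%s\n' slave1 slave2 > "$regionservers"
cat "$regionservers"
```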
1.4.1.9. Step 9: Create the hbase.tmp.dir directory on the master node
[root@master conf]#  mkdir /usr/local/src/hbase/tmp 
1.4.1.10. Step 10: Copy the HBase installation from master to slave1 and slave2
[root@master conf]#  scp -r /usr/local/src/hbase/ root@slave1:/usr/local/src/
root@slave1's password: 
[root@master conf]#  scp -r /usr/local/src/hbase/ root@slave2:/usr/local/src/
root@slave2's password: 
......
index.html                                              100%  872   998.1KB/s   00:00    
glyphicons-halflings-regular.ttf                        100%   29KB  23.9MB/s   00:00    
glyphicons-halflings-regular.woff                       100%   16KB  14.6MB/s   00:00    
glyphicons-halflings-regular.eot                        100%   14KB  27.6MB/s   00:00    
glyphicons-halflings-regular.svg                        100%   62KB  37.6MB/s   00:00    
bootstrap.min.css                                       100%   95KB  57.6MB/s   00:00    
bootstrap-theme.min.css                                 100%   15KB  21.2MB/s   00:00    
hbase.css                                               100% 1293   468.9KB/s   00:00    
bootstrap.css                                           100%  117KB  20.7MB/s   00:00    
bootstrap-theme.css                                     100%   17KB  26.3MB/s   00:00    
bootstrap.min.js                                        100%   27KB  25.4MB/s   00:00    
bootstrap.js                                            100%   57KB  40.6MB/s   00:00    
jquery.min.js                                           100%   91KB  48.1MB/s   00:00    
tab.js                                                  100% 1347     3.7MB/s   00:00    
jumping-orca_rotated_12percent.png                      100% 2401     6.4MB/s   00:00    
hbase_logo_med.gif                                      100% 3592     7.8MB/s   00:00    
hbase_logo.png                                          100% 2997     8.2MB/s   00:00    
hbase_logo_small.png                                    100% 3206     7.9MB/s   00:00    
web.xml                                                 100% 2268     5.2MB/s   00:00    
index.html                                              100%  876     2.5MB/s   00:00    
index.html                                              100%  873     2.0MB/s   00:00    
web.xml                                                 100%  680     2.1MB/s   00:00    
web.xml                                                 100%  666     1.5MB/s   00:00    
index.html      
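scp gives no end-to-end integrity check; comparing checksums of the two trees afterwards does. The sketch below uses two temporary directories standing in for master and a slave (names and contents are made up for the demo):

```shell
# Stand-ins for /usr/local/src/hbase on master (src) and on a slave (dst)
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/conf"
echo 'dummy hbase-site' > "$src/conf/hbase-site.xml"
printf 'slave1\nslave2\n' > "$src/conf/regionservers"
cp -r "$src"/. "$dst"/          # stands in for the scp -r copy
# Compare per-file md5 checksums of both trees
if diff <(cd "$src" && find . -type f | sort | xargs md5sum) \
        <(cd "$dst" && find . -type f | sort | xargs md5sum) >/dev/null
then sync_result=match; else sync_result=differ; fi
echo "sync check: $sync_result"
rm -rf "$src" "$dst"
```

On the real cluster the second checksum list would come from `ssh slave1 'cd /usr/local/src/hbase && …'`.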
1.4.1.11. Step 11: Change ownership of the hbase directory on all nodes
[root@master conf]#  chown -R hadoop:hadoop /usr/local/src/hbase/
[root@slave1 ~]#  chown -R hadoop:hadoop /usr/local/src/hbase/
[root@slave2 ~]#  chown -R hadoop:hadoop /usr/local/src/hbase/
1.4.1.12. Step 12: Switch to the hadoop user on all nodes
[root@master conf]# su - hadoop
Last login: Tue Mar 28 22:26:59 EDT 2023 on pts/0
[root@slave1 ~]# su - hadoop
Last login: Tue Mar 28 22:29:25 EDT 2023 on pts/0
[root@slave2 ~]# su - hadoop
Last login: Tue Mar 28 22:32:42 EDT 2023 on pts/0
1.4.1.13. Step 13: Start HBase
Start Hadoop first, then ZooKeeper, and finally HBase.
First, start Hadoop on the master node.
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.out
192.168.10.20: ssh: connect to host 192.168.10.20 port 22: Connection refused
192.168.10.30: ssh: connect to host 192.168.10.30 port 22: Connection refused
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.out
192.168.10.20: ssh: connect to host 192.168.10.20 port 22: Connection refused
192.168.10.30: ssh: connect to host 192.168.10.30 port 22: Connection refused
# master node
[hadoop@master ~]$ jps
1696 SecondaryNameNode
1493 NameNode
2119 Jps
1855 ResourceManager
# slave1 node
[hadoop@slave1 ~]$ jps
1504 NodeManager
1629 Jps
1390 DataNode
# slave2 node
[hadoop@slave2 ~]$ jps
1325 DataNode
1565 Jps
1439 NodeManager
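Checking jps output by eye gets tedious across three nodes. A small helper (a sketch, fed here with the master-node listing captured above) flags any missing daemon:

```shell
# Verify that a jps-style listing contains every expected daemon
check_daemons() {   # usage: check_daemons "<jps output>" daemon...
  local out="$1"; shift
  local missing=""
  for d in "$@"; do
    echo "$out" | grep -qw "$d" || missing="$missing $d"
  done
  if [ -z "$missing" ]; then echo OK; else echo "missing:$missing"; fi
}
# The master-node jps output from above, embedded as a string
master_jps='1696 SecondaryNameNode
1493 NameNode
1855 ResourceManager'
status=$(check_daemons "$master_jps" NameNode SecondaryNameNode ResourceManager)
echo "$status"
```

On a live node you would feed it `"$(jps)"` instead of the saved listing.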
1.4.1.14. Step 14: Start ZooKeeper on all nodes
master node
[hadoop@master ~]$ zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@master ~]$ jps
1696 SecondaryNameNode
2145 QuorumPeerMain
1493 NameNode
2165 Jps
1855 ResourceManager

slave1 node

[hadoop@slave1 ~]$ zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave1 ~]$ jps
1504 NodeManager
1680 Jps
1651 QuorumPeerMain
1390 DataNode

slave2 node

[hadoop@slave2 ~]$ zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave2 ~]$ jps
1587 QuorumPeerMain
1610 Jps
1325 DataNode
1439 NodeManager
1.4.1.15. Step 15: Start HBase on the master node
[hadoop@master ~]$ start-hbase.sh 
master: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-master.out
slave1: starting zookeeper, logging to /usr/local/src/hbase/bin/../logs/hbase-hadoop-zookeeper-slave1.out
slave2: starting zookeeper, logging to /usr/local/src/hbase/bin/../logs/hbase-hadoop-zookeeper-slave2.out
starting master, logging to /usr/local/src/hbase/logs/hbase-hadoop-master-master.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
slave1: starting regionserver, logging to /usr/local/src/hbase/bin/../logs/hbase-hadoop-regionserver-slave1.out
slave2: starting regionserver, logging to /usr/local/src/hbase/bin/../logs/hbase-hadoop-regionserver-slave2.out
slave1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
slave1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
slave2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
slave2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

master node
[hadoop@master ~]$ jps
1696 SecondaryNameNode
2496 HMaster
2145 QuorumPeerMain
1493 NameNode
2713 Jps
1855 ResourceManager
2399 HQuorumPeer
slave1 node
[hadoop@slave1 ~]$ jps
1504 NodeManager
53826 Jps
1651 QuorumPeerMain
30633 HRegionServer
1390 DataNode
slave2 node
[hadoop@slave2 ~]$ jps
1587 QuorumPeerMain
1769 HRegionServer
1325 DataNode
1439 NodeManager
1983 Jps
1.4.1.16. Step 16: Open master:60010 in a browser; the page shown in Figure 7-2 appears.

[Figure 7-2: HBase master web UI]

1.4.2. Task 2: Common HBase Shell Commands

Start the HDFS, ZooKeeper, and HBase services before proceeding.
1.4.2.1. Step 1: Enter the HBase shell
[hadoop@master ~]$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/src/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/src/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016

hbase(main):001:0> 
1.4.2.2. Step 2: Create a table scores with two column families: grade and course
hbase(main):001:0> create 'scores','grade','course' 
0 row(s) in 1.3750 seconds

=> Hbase::Table - scores
1.4.2.3. Step 3: Check the cluster status
hbase(main):002:0> status
1 active master, 0 backup masters, 2 servers, 0 dead, 1.5000 average load
1.4.2.4. Step 4: Check the database version
hbase(main):003:0> version
1.2.1, r8d8a7107dc4ccbf36a92f64675dc60392f85c015, Wed Mar 30 11:19:21 CDT 2016
1.4.2.5. Step 5: List tables
hbase(main):004:0> list
TABLE                                                                                     
scores                                                                                    
1 row(s) in 0.0180 seconds

=> ["scores"]
1.4.2.6. Step 6: Insert record 1: jie, grade: 146cloud
hbase(main):005:0>  put 'scores','jie','grade:','146cloud'
0 row(s) in 0.0960 seconds
1.4.2.7. Step 7: Insert record 2: jie, course:math, 86
hbase(main):006:0>  put 'scores','jie','course:math','86' 
0 row(s) in 0.0080 seconds

1.4.2.8. Step 8: Insert record 3: jie, course:cloud, 92
hbase(main):007:0>  put 'scores','jie','course:cloud','92'
0 row(s) in 0.0080 seconds
1.4.2.9. Step 9: Insert record 4: shi, grade: 133soft
hbase(main):010:0> put 'scores','shi','grade:','133soft' 
0 row(s) in 0.0070 seconds
1.4.2.10. Step 10: Insert record 5: shi, course:math, 87
hbase(main):011:0>  put 'scores','shi','course:math','87' 
0 row(s) in 0.0080 seconds
1.4.2.11. Step 11: Insert record 6: shi, course:cloud, 96
hbase(main):012:0>  put 'scores','shi','course:cloud','96' 
0 row(s) in 0.0070 seconds
1.4.2.12. Step 12: Read jie's record
hbase(main):013:0> get 'scores','jie' 
COLUMN                  CELL                                                              
 course:cloud           timestamp=1680168819550, value=92                                 
 course:math            timestamp=1680168788779, value=86                                 
 grade:                 timestamp=1680168764754, value=146cloud                           
3 row(s) in 0.0190 seconds
1.4.2.13. Step 13: Read jie's grade column family
hbase(main):014:0> get 'scores','jie','grade' 
COLUMN                  CELL                                                              
 grade:                 timestamp=1680168764754, value=146cloud                           
1 row(s) in 0.0070 seconds
1.4.2.14. Step 14: Scan the entire table
hbase(main):015:0>  scan 'scores' 
ROW                     COLUMN+CELL                                                       
 jie                    column=course:cloud, timestamp=1680168819550, value=92            
 jie                    column=course:math, timestamp=1680168788779, value=86             
 jie                    column=grade:, timestamp=1680168764754, value=146cloud            
 shi                    column=course:cloud, timestamp=1680168898986, value=96            
 shi                    column=course:math, timestamp=1680168879674, value=87             
 shi                    column=grade:, timestamp=1680168856040, value=133soft             
2 row(s) in 0.0180 seconds
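Saved scan output can be post-processed outside the HBase shell, e.g. to re-derive the "2 row(s)" summary. A sketch with the listing above embedded as a string:

```shell
# The scan output (row key is the first field on each line)
scan_out='jie column=course:cloud, timestamp=1680168819550, value=92
jie column=course:math, timestamp=1680168788779, value=86
jie column=grade:, timestamp=1680168764754, value=146cloud
shi column=course:cloud, timestamp=1680168898986, value=96
shi column=course:math, timestamp=1680168879674, value=87
shi column=grade:, timestamp=1680168856040, value=133soft'
# Distinct row keys = "row(s)"; total lines = number of cells
rows=$(echo "$scan_out" | awk '{print $1}' | sort -u | wc -l)
cells=$(echo "$scan_out" | wc -l)
echo "rows=$rows cells=$cells"
```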

1.4.2.15. Step 15: Scan the table by column family
hbase(main):016:0>  scan 'scores',{COLUMNS=>'course'} 
ROW                     COLUMN+CELL                                                       
 jie                    column=course:cloud, timestamp=1680168819550, value=92            
 jie                    column=course:math, timestamp=1680168788779, value=86             
 shi                    column=course:cloud, timestamp=1680168898986, value=96            
 shi                    column=course:math, timestamp=1680168879674, value=87             
2 row(s) in 0.0110 seconds
1.4.2.16. Step 16: Delete a specified cell
hbase(main):017:0> delete 'scores','shi','grade' 
0 row(s) in 0.0200 seconds
1.4.2.17. Step 17: Run scan again after the delete
hbase(main):001:0> scan 'scores'
ROW                      COLUMN+CELL                                                          
 jie                     column=course:cloud, timestamp=1680168819550, value=92               
 jie                     column=course:math, timestamp=1680168788779, value=86                
 jie                     column=grade:, timestamp=1680168764754, value=146cloud               
 shi                     column=course:cloud, timestamp=1680168898986, value=96               
 shi                     column=course:math, timestamp=1680168879674, value=87                
2 row(s) in 0.2670 seconds

1.4.2.18. Step 18: Add a new column family

hbase(main):002:0> alter 'scores',NAME=>'age'
Updating all regions with the new schema...
1/1 regions updated.
Done.
0 row(s) in 2.0020 seconds

1.4.2.19. Step 19: View the table schema
hbase(main):004:0>  describe 'scores'
Table scores is ENABLED                                                                       
scores                                                                                        
COLUMN FAMILIES DESCRIPTION                                                                   
{NAME => 'age', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELL
S => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERS
IONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}            
{NAME => 'course', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_C
ELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_V
ERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}         
{NAME => 'grade', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CE
LLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VE
RSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}          
3 row(s) in 0.0300 seconds

1.4.2.20. Step 20: Delete a column family
hbase(main):005:0> alter 'scores',NAME=>'age',METHOD=>'delete'
Updating all regions with the new schema...
1/1 regions updated.
Done.
0 row(s) in 1.9120 seconds
1.4.2.21. Step 21: Delete the table (it must be disabled first)
hbase(main):006:0> disable 'scores'
0 row(s) in 2.2810 seconds

1.4.2.22. Step 22: Exit the HBase shell
hbase(main):008:0> quit
[hadoop@master ~]$ 

1.4.2.23. Step 23: Shut down HBase
Stop HBase on the master node.
[hadoop@master etc]$ stop-hbase.sh
stopping hbase...................
slave2: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
master: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
slave1: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid

Stop ZooKeeper on all nodes.
[hadoop@master etc]$ zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED

[hadoop@slave1 ~]$ zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED

[hadoop@slave2 ~]$ zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED

Stop Hadoop on the master node.
[hadoop@master etc]$ stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [master]
master: stopping namenode
slave2: stopping datanode
slave1: stopping datanode
192.168.10.20: ssh: connect to host 192.168.10.20 port 22: Connection refused
192.168.10.30: ssh: connect to host 192.168.10.30 port 22: Connection refused
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
slave1: stopping nodemanager
slave2: stopping nodemanager
192.168.10.30: ssh: connect to host 192.168.10.30 port 22: Connection refused
192.168.10.20: ssh: connect to host 192.168.10.20 port 22: Connection refused
no proxyserver to stop

Note: the clocks on all nodes must be synchronized, or HBase will not start. Run the date command on every node to check whether the times agree; if they do not, set the time on each node, e.g. date -s "2016-04-15 12:00:00".
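The skew between two nodes' date readings can be computed rather than compared by eye (GNU date assumed; the two timestamps below are illustrative stand-ins for what `date` prints on master and slave1):

```shell
# Sample readings from two nodes
t_master='2016-04-15 12:00:00'
t_slave1='2016-04-15 12:00:07'
# Convert both to epoch seconds and take the absolute difference
skew=$(( $(date -d "$t_slave1" +%s) - $(date -d "$t_master" +%s) ))
skew=${skew#-}
echo "skew=${skew}s"
if [ "$skew" -gt 30 ]; then
  echo "WARN: clocks differ too much for HBase"
else
  echo "clocks OK"
fi
```

The 30-second threshold is an illustrative choice, not an HBase constant; the safest fix is running NTP on every node.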

2. Using the ZooKeeper Bundled with HBase

2.1. Step 1: Extract the HBase installation package on the master node
[root@master ~]# tar xf hbase-1.2.1-bin.tar.gz -C /usr/local/src
[root@master ~]# cd /usr/local/src
[root@master src]# ls
hadoop  hbase-1.2.1  hive  jdk  zookeeper

2.2. Step 2: Rename the HBase installation directory
[root@master src]# mv hbase-1.2.1/ hbase
[root@master src]# ls
hadoop  hbase  hive  jdk  zookeeper
2.3. Step 3: Add environment variables on all nodes
[root@master src]# vi /etc/profile.d/hb.sh
Add the following to the file:
export HBASE_HOME=/usr/local/src/hbase 
export PATH=$HBASE_HOME/bin:$PATH 
2.4. Step 4: Apply the environment variables on all nodes
[root@master src]# source /etc/profile.d/hb.sh
2.5. Step 5: Enter the configuration directory on the master node
[root@master src]# cd hbase/conf/
[root@master conf]# 

2.6. Step 6: Configure hbase-env.sh on the master node

[root@master conf]# vi hbase-env.sh

#Java installation path
export JAVA_HOME=/usr/local/src/jdk

#true: use the ZooKeeper bundled with HBase; false: use the separately installed ZooKeeper
export HBASE_MANAGES_ZK=true

#HBase classpath
export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop/

#Comment out the following two lines (the PermSize options were removed in Java 8)
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
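Before starting HBase in this mode, it is worth confirming which ZooKeeper mode hbase-env.sh actually selects. A sketch, grepping a throwaway copy of the file (contents are illustrative):

```shell
# Stand-in for hbase/conf/hbase-env.sh
env_file=$(mktemp)
cat > "$env_file" <<'EOF'
export JAVA_HOME=/usr/local/src/jdk
export HBASE_MANAGES_ZK=true
export HBASE_CLASSPATH=/usr/local/src/hadoop/etc/hadoop/
EOF
# Take the last uncommented assignment, in case the file sets it twice
mode=$(grep -E '^[[:space:]]*export HBASE_MANAGES_ZK=' "$env_file" | tail -n1 | cut -d= -f2)
echo "HBASE_MANAGES_ZK=$mode"
rm -f "$env_file"
```

With `true`, start-hbase.sh launches HQuorumPeer processes itself, so the separate zkServer.sh step is not needed.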
2.7. Step 7: Configure hbase-site.xml on the master node
<configuration>
	<property>
		<name>hbase.rootdir</name>
		<value>hdfs://master:9000/hbase</value>
		<description>The directory shared by region servers.</description>
	</property>
	<property>
                <name>hbase.master.info.port</name>
                <value>60010</value>
        </property>
	<property>
                <name>hbase.zookeeper.property.clientPort</name>
                <value>2181</value>
                <description>Property from ZooKeeper's config zoo.cfg. The port at which the clients will connect.</description>
        </property>
	<property>
                <name>zookeeper.session.timeout</name>
                <value>12000</value>
        </property>
	<property>
                <name>hbase.zookeeper.quorum</name>
                <value>master,slave1,slave2</value>
        </property>
	<property>
                <name>hbase.tmp.dir</name>
                <value>/usr/local/src/hbase/tmp</value>
        </property>
	<property>
                <name>hbase.cluster.distributed</name>
                <value>true</value>
        </property>
</configuration>
hbase.rootdir: the directory HBase writes its data to. By default it points to /tmp/hbase-${user.name}, so data would be lost after a reboot (the OS cleans /tmp on restart).
hbase.zookeeper.property.clientPort: the port on which clients connect to ZooKeeper.
zookeeper.session.timeout: the session timeout between a RegionServer and ZooKeeper. When it expires, ZooKeeper removes the RegionServer from the cluster list; on receiving the removal notice, the HMaster rebalances the regions that server was serving onto the surviving RegionServers.
hbase.zookeeper.quorum: defaults to localhost; lists the servers in the ZooKeeper ensemble.
hbase.master.info.port: the port of the HBase master web UI.
2.8. Step 8: Edit the regionservers file on the master node
# Delete localhost and list one slave hostname per line
 [root@master conf]# vi regionservers 
[root@master conf]# cat regionservers 
slave1
slave2
2.9. Step 9: Create the hbase.tmp.dir directory on the master node
[root@master conf]#  mkdir /usr/local/src/hbase/tmp 
2.10. Step 10: Copy the HBase installation from master to slave1 and slave2
[root@master conf]#  scp -r /usr/local/src/hbase/ root@slave1:/usr/local/src/
root@slave1's password: 
[root@master conf]#  scp -r /usr/local/src/hbase/ root@slave2:/usr/local/src/
root@slave2's password: 
......
index.html                                              100%  872   998.1KB/s   00:00    
glyphicons-halflings-regular.ttf                        100%   29KB  23.9MB/s   00:00    
glyphicons-halflings-regular.woff                       100%   16KB  14.6MB/s   00:00    
glyphicons-halflings-regular.eot                        100%   14KB  27.6MB/s   00:00    
glyphicons-halflings-regular.svg                        100%   62KB  37.6MB/s   00:00    
bootstrap.min.css                                       100%   95KB  57.6MB/s   00:00    
bootstrap-theme.min.css                                 100%   15KB  21.2MB/s   00:00    
hbase.css                                               100% 1293   468.9KB/s   00:00    
bootstrap.css                                           100%  117KB  20.7MB/s   00:00    
bootstrap-theme.css                                     100%   17KB  26.3MB/s   00:00    
bootstrap.min.js                                        100%   27KB  25.4MB/s   00:00    
bootstrap.js                                            100%   57KB  40.6MB/s   00:00    
jquery.min.js                                           100%   91KB  48.1MB/s   00:00    
tab.js                                                  100% 1347     3.7MB/s   00:00    
jumping-orca_rotated_12percent.png                      100% 2401     6.4MB/s   00:00    
hbase_logo_med.gif                                      100% 3592     7.8MB/s   00:00    
hbase_logo.png                                          100% 2997     8.2MB/s   00:00    
hbase_logo_small.png                                    100% 3206     7.9MB/s   00:00    
web.xml                                                 100% 2268     5.2MB/s   00:00    
index.html                                              100%  876     2.5MB/s   00:00    
index.html                                              100%  873     2.0MB/s   00:00    
web.xml                                                 100%  680     2.1MB/s   00:00    
web.xml                                                 100%  666     1.5MB/s   00:00    
index.html      
2.11. Step 11: Change ownership of the hbase directory on all nodes
[root@master conf]#  chown -R hadoop:hadoop /usr/local/src/hbase/
[root@slave1 ~]#  chown -R hadoop:hadoop /usr/local/src/hbase/
[root@slave2 ~]#  chown -R hadoop:hadoop /usr/local/src/hbase/
2.12. Step 12: Switch to the hadoop user on all nodes
[root@master conf]# su - hadoop
Last login: Tue Mar 28 22:26:59 EDT 2023 on pts/0
[root@slave1 ~]# su - hadoop
Last login: Tue Mar 28 22:29:25 EDT 2023 on pts/0
[root@slave2 ~]# su - hadoop
Last login: Tue Mar 28 22:32:42 EDT 2023 on pts/0
2.13. Step 13: Start HBase
Start Hadoop first, then ZooKeeper, and finally HBase.
First, start Hadoop on the master node.
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-datanode-slave2.out
192.168.10.20: ssh: connect to host 192.168.10.20 port 22: Connection refused
192.168.10.30: ssh: connect to host 192.168.10.30 port 22: Connection refused
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/src/hadoop/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /usr/local/src/hadoop/logs/yarn-hadoop-nodemanager-slave2.out
192.168.10.20: ssh: connect to host 192.168.10.20 port 22: Connection refused
192.168.10.30: ssh: connect to host 192.168.10.30 port 22: Connection refused
# master node
[hadoop@master ~]$ jps
1696 SecondaryNameNode
1493 NameNode
2119 Jps
1855 ResourceManager
# slave1 node
[hadoop@slave1 ~]$ jps
1504 NodeManager
1629 Jps
1390 DataNode
# slave2 node
[hadoop@slave2 ~]$ jps
1325 DataNode
1565 Jps
1439 NodeManager
2.14. Step 14: Start ZooKeeper on all nodes
master node
[hadoop@master ~]$ zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@master ~]$ jps
1696 SecondaryNameNode
2145 QuorumPeerMain
1493 NameNode
2165 Jps
1855 ResourceManager

slave1 node

[hadoop@slave1 ~]$ zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave1 ~]$ jps
1504 NodeManager
1680 Jps
1651 QuorumPeerMain
1390 DataNode

slave2 node

[hadoop@slave2 ~]$ zkServer.sh start 
ZooKeeper JMX enabled by default
Using config: /usr/local/src/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@slave2 ~]$ jps
1587 QuorumPeerMain
1610 Jps
1325 DataNode
1439 NodeManager
2.15. Step 15: Start HBase on the master node
[hadoop@master ~]$ start-hbase.sh 
master: starting zookeeper, logging to /usr/local/src/hbase/logs/hbase-hadoop-zookeeper-master.out
slave1: starting zookeeper, logging to /usr/local/src/hbase/bin/../logs/hbase-hadoop-zookeeper-slave1.out
slave2: starting zookeeper, logging to /usr/local/src/hbase/bin/../logs/hbase-hadoop-zookeeper-slave2.out
starting master, logging to /usr/local/src/hbase/logs/hbase-hadoop-master-master.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
slave1: starting regionserver, logging to /usr/local/src/hbase/bin/../logs/hbase-hadoop-regionserver-slave1.out
slave2: starting regionserver, logging to /usr/local/src/hbase/bin/../logs/hbase-hadoop-regionserver-slave2.out
slave1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
slave1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
slave2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
slave2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

master node
[hadoop@master ~]$ jps
1696 SecondaryNameNode
2496 HMaster
2145 QuorumPeerMain
1493 NameNode
2713 Jps
1855 ResourceManager
2399 HQuorumPeer
slave1 node
[hadoop@slave1 ~]$ jps
1504 NodeManager
53826 Jps
1651 QuorumPeerMain
30633 HRegionServer
1390 DataNode
slave2 node
[hadoop@slave2 ~]$ jps
1587 QuorumPeerMain
1769 HRegionServer
1325 DataNode
1439 NodeManager
1983 Jps
2.16. Step 16: Open master:60010 in a browser; the page shown in Figure 7-2 appears.

[Figure 7-2: HBase master web UI]