After the MapR framework has been installed, install and configure HBase and Hive.
The MapR framework is installed at /opt/mapr.
HBase is installed at /opt/mapr/hbase/hbase-0.90.4.
Hive is installed at /opt/mapr/hive/hive-0.7.1.
The procedure for integrating Hive with HBase is as follows:
1. Copy /opt/mapr/hbase/hbase-0.90.4/hbase-0.90.4.jar and /opt/mapr/hbase/hbase-0.90.4/lib/zookeeper-3.3.2.jar into /opt/mapr/hive/hive-0.7.1/lib.
Note: if hive/lib already contains another version of either jar (for example zookeeper-3.3.1.jar), delete it and use the version shipped with HBase.
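A minimal shell sketch of this step (paths as above; adjust the jar versions to match your installation):
cp /opt/mapr/hbase/hbase-0.90.4/hbase-0.90.4.jar /opt/mapr/hive/hive-0.7.1/lib/
cp /opt/mapr/hbase/hbase-0.90.4/lib/zookeeper-3.3.2.jar /opt/mapr/hive/hive-0.7.1/lib/
# remove a conflicting older copy, if one exists
rm -f /opt/mapr/hive/hive-0.7.1/lib/zookeeper-3.3.1.jar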
2. Edit hive-site.xml under hive/conf, appending the following at the bottom:
<property>
  <name>hive.querylog.location</name>
  <value>/opt/mapr/hive/hive-0.7.1/logs</value>
</property>
<property>
  <name>hive.aux.jars.path</name>
  <value>file:///opt/mapr/hive/hive-0.7.1/lib/hive-hbase-handler-0.7.1.jar,file:///opt/mapr/hive/hive-0.7.1/lib/hbase-0.90.4.jar,file:///opt/mapr/hive/hive-0.7.1/lib/zookeeper-3.3.2.jar</value>
</property>
Note: if hive-site.xml does not exist, create it yourself, or rename the hive-default.xml.template file and use that.
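For example (a sketch, assuming the template file ships with this Hive build):
cd /opt/mapr/hive/hive-0.7.1/conf
cp hive-default.xml.template hive-site.xml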
3. Copy hbase-0.90.4.jar into hadoop/lib on every Hadoop node, including the master.
4. Copy hbase-site.xml from hbase/conf into hadoop/conf on every Hadoop node, including the master.
Note: if steps 3 and 4 are skipped, running Hive is very likely to fail with an error such as:
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately. This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
        at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.
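A hedged sketch of steps 3 and 4 (node1, node2, node3 are placeholder hostnames; the hadoop path follows the MapR layout that appears later in this document):
for node in node1 node2 node3; do
  scp /opt/mapr/hbase/hbase-0.90.4/hbase-0.90.4.jar $node:/opt/mapr/hadoop/hadoop-0.20.2/lib/
  scp /opt/mapr/hbase/hbase-0.90.4/conf/hbase-site.xml $node:/opt/mapr/hadoop/hadoop-0.20.2/conf/
done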
5. Start Hive.
Single-node startup:
bin/hive -hiveconf hbase.master=master:60000
Cluster startup (list all the ZooKeeper nodes):
bin/hive -hiveconf hbase.zookeeper.quorum=node1,node2,node3
If hive.aux.jars.path is not configured in hive-site.xml, Hive can instead be started like this:
hive --auxpath /opt/mapr/hive/hive-0.7.1/lib/hive-hbase-handler-0.7.1.jar,/opt/mapr/hive/hive-0.7.1/lib/hbase-0.90.4.jar,/opt/mapr/hive/hive-0.7.1/lib/zookeeper-3.3.2.jar -hiveconf hbase.master=localhost:60000
Testing shows that if Hive's hive-site.xml is modified to include:
<property>
  <name>hive.zookeeper.quorum</name>
  <value>node1,node2,node3</value>
  <description>The list of zookeeper servers to talk to. This is only needed for read/write locks.</description>
</property>
then Hive can work with HBase without any extra startup parameters.
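With that property in place, Hive is started plainly:
bin/hive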
6. Test after startup.
(1) Create a table that HBase recognizes:
CREATE TABLE hbase_table_1(key int, value string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name" = "xyz");
hbase.table.name sets the name of the table in HBase.
hbase.columns.mapping defines the mapping to HBase column families. The :key entry is a fixed token that maps to the HBase row key, so the source column feeding it (foo in the pokes table below) must hold unique values. For multiple columns, write e.g. data:1,data:2; for multiple column families, data1:1,data2:1.
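For illustration, a hypothetical table that maps two columns across two column families (these names are made up, not from the tested setup):
CREATE TABLE hbase_table_multi(key int, val1 string, val2 string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val1,cf2:val2")
TBLPROPERTIES ("hbase.table.name" = "xyz_multi");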
Create a partitioned table:
CREATE TABLE hbase_table_1(key int, value string) partitioned by (day string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name" = "xyz");
(2) Import data with SQL.
Create a Hive staging table:
create table pokes(foo int, bar string) row format delimited fields terminated by ',';
Bulk-load the data:
load data local inpath '/home/1.txt' overwrite into table pokes;
The contents of 1.txt:
1,hello
2,pear
3,world
Import into hbase_table_1 with SQL:
insert overwrite table hbase_table_1 select * from pokes;
Import into the partitioned table:
insert overwrite table hbase_table_1 partition (day='2012-01-01') select * from pokes;
(3) View the data:
hive> select * from hbase_table_1;
OK
1 hello
2 pear
3 world
(Note: partitioned tables integrated with HBase have a quirk: select * from the table returns no data, while select key, value from the table does.)
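Listing the columns explicitly therefore works around the quirk, e.g.:
select key, value from hbase_table_1;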
(4) Log in to HBase to view the data:
hbase shell
hbase(main):002:0> describe 'xyz'
DESCRIPTION                                                          ENABLED
 {NAME => 'xyz', FAMILIES => [{NAME => 'cf1', BLOOMFILTER => 'NONE', true
 REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3',
 TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
 BLOCKCACHE => 'true'}]}
1 row(s) in 0.0830 seconds
hbase(main):003:0> scan 'xyz'
ROW COLUMN+CELL
1 column=cf1:val, timestamp=1331002501432, value=hello
2 column=cf1:val, timestamp=1331002501432, value=pear
3 column=cf1:val, timestamp=1331002501432, value=world
The rows just inserted from Hive are now visible in HBase.
7. For a table that already exists in HBase, build it in Hive with CREATE EXTERNAL TABLE.
For example, if the HBase table is named test1 with column families a:, b:, c:, the Hive DDL is:
create external table hive_test (key int,gid map<string,string>,sid map<string,string>,uid map<string,string>) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" ="a:,b:,c:") TBLPROPERTIES ("hbase.table.name" = "test1");
After the table has been created in Hive, query the contents of HBase table test1:
select * from hive_test;
OK
1 {"":"qqq"} {"":"aaa"} {"":"bbb"}
2 {"":"qqq"} {} {"":"bbb"}
To query the values inside the gid map column (the map key here is the empty string):
select gid[''] from hive_test;
The query returns:
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201203052222_0017, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201203052222_0017
Kill Command = /opt/mapr/hadoop/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=maprfs:/// -kill job_201203052222_0017
2012-03-06 14:38:29,141 Stage-1 map = 0%, reduce = 0%
2012-03-06 14:38:33,171 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201203052222_0017
OK
qqq
qqq
If the fields in HBase table test1 are instead user:gid, user:sid, info:uid, info:level, the Hive DDL is:
create external table hive_test(key int,user map<string,string>,info map<string,string>) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" ="user:,info:") TBLPROPERTIES ("hbase.table.name" = "test1");
Query the HBase table with:
select user['gid'] from hive_test;
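Several map lookups can be combined in one query; a sketch against the schema above (not from a tested run):
select key, user['gid'], user['sid'], info['uid'], info['level'] from hive_test;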
Creating an associated table
Since the table to be queried already exists in HBase, use CREATE EXTERNAL TABLE, as follows:
CREATE EXTERNAL TABLE hbase_table_2(key string, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "data:1")
TBLPROPERTIES("hbase.table.name" = "test");
hbase.columns.mapping points at the corresponding column family; for multiple columns, data:1,data:2; for multiple column families, data1:1,data2:1.
hbase.table.name points at the corresponding HBase table.
hbase_table_2(key string, value string) is the associated table on the Hive side.
Note that when columns hold int data, the DDL differs:
CREATE EXTERNAL TABLE HisDiagnose(key string, doctorId int, patientId int, description string, rtime int) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,diagnoseFamily:doctorId,diagnoseFamily:patientId,diagnoseFamily:description,diagnoseFamily:rtime","hbase.table.default.storage.type"="binary") TBLPROPERTIES("hbase.table.name" = "HisDiagnose");
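A hedged round-trip sketch for the binary-typed table (diagnose_src is a hypothetical delimited staging table, following the pokes pattern above):
create table diagnose_src(key string, doctorId int, patientId int, description string, rtime int) row format delimited fields terminated by ',';
insert overwrite table HisDiagnose select * from diagnose_src;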
Functional test:
Using Hive:
hive> create table pokes (foo int, bar string);
OK
Time taken: 0.251 seconds
hive> create table invites (foo INT, bar STRING) partitioned by (ds string);
OK
Time taken: 0.106 seconds
hive> show tables;
OK
invites
pokes
Time taken: 0.107 seconds
hive> describe invites;
OK
foo int
bar string
ds string
Time taken: 0.151 seconds
hive> alter table pokes add columns (new_col int);
OK
Time taken: 0.117 seconds
hive> alter table invites add columns (new_col2 int);
OK
Time taken: 0.152 seconds
hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
Copying data from file:/home/hadoop/hadoop-0.19.1/contrib/hive/examples/files/kv1.txt
Loading data to table pokes
OK
Time taken: 0.288 seconds
hive> load data local inpath './examples/files/kv2.txt' overwrite into table invites partition (ds='2008-08-15');
Copying data from file:/home/hadoop/hadoop-0.19.1/contrib/hive/examples/files/kv2.txt
Loading data to table invites partition {ds=2008-08-15}
OK
Time taken: 0.524 seconds
hive> LOAD DATA LOCAL INPATH './examples/files/kv3.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-08');
Copying data from file:/home/hadoop/hadoop-0.19.1/contrib/hive/examples/files/kv3.txt
Loading data to table invites partition {ds=2008-08-08}
OK
Time taken: 0.406 seconds
hive> INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT a.* FROM invites a;
Total MapReduce jobs = 1
Starting Job = job_200902261245_0002, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0002
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0002
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
Ended Job = job_200902261245_0002
Moving data to: /tmp/hdfs_out
OK
Time taken: 18.551 seconds
hive> select count(1) from pokes;
Total MapReduce jobs = 2
Number of reducers = 1
In order to change numer of reducers use: set mapred.reduce.tasks = <number>
Starting Job = job_200902261245_0003, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0003
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0003
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
map = 100%, reduce = 17%
map = 100%, reduce = 100%
Ended Job = job_200902261245_0003
Starting Job = job_200902261245_0004, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0004
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0004
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
map = 100%, reduce = 100%
Ended Job = job_200902261245_0004
OK
500
Time taken: 57.285 seconds
hive> INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT a.* FROM invites a;
Total MapReduce jobs = 1
Starting Job = job_200902261245_0005, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0005
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0005
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
Ended Job = job_200902261245_0005
Moving data to: /tmp/hdfs_out
OK
Time taken: 18.349 seconds
hive> INSERT OVERWRITE DIRECTORY '/tmp/reg_5' SELECT COUNT(1) FROM invites a;
Total MapReduce jobs = 2
Number of reducers = 1
In order to change numer of reducers use: set mapred.reduce.tasks = <number>
Starting Job = job_200902261245_0006, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0006
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0006
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
map = 100%, reduce = 17%
map = 100%, reduce = 100%
Ended Job = job_200902261245_0006
Starting Job = job_200902261245_0007, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0007
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0007
map = 0%, reduce = 0%
map = 50%, reduce = 0%
map = 100%, reduce = 0%
map = 100%, reduce = 17%
map = 100%, reduce = 100%
Ended Job = job_200902261245_0007
Moving data to: /tmp/reg_5
OK
Time taken: 70.956 seconds