After the MapR framework is installed, install and configure HBase and Hive.
The MapR framework is installed under /opt/mapr.
HBase is installed under /opt/mapr/hbase/hbase-0.90.4.
Hive is installed under /opt/mapr/hive/hive-0.7.1.
The steps to integrate Hive with HBase are as follows:
1. Copy /opt/mapr/hbase/hbase-0.90.4/hbase-0.90.4.jar and /opt/mapr/hbase/hbase-0.90.4/lib/zookeeper-3.3.2.jar into /opt/mapr/hive/hive-0.7.1/lib.
Note: if other versions of these jars already exist under hive/lib (for example zookeeper-3.3.1.jar), delete them and use the versions that ship with HBase.
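A minimal shell sketch of this step, using the paths given above:

# copy the HBase and ZooKeeper jars into Hive's lib directory
cp /opt/mapr/hbase/hbase-0.90.4/hbase-0.90.4.jar /opt/mapr/hive/hive-0.7.1/lib/
cp /opt/mapr/hbase/hbase-0.90.4/lib/zookeeper-3.3.2.jar /opt/mapr/hive/hive-0.7.1/lib/
# remove any older copy first, e.g. zookeeper-3.3.1.jar, if present
rm -f /opt/mapr/hive/hive-0.7.1/lib/zookeeper-3.3.1.jar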
2. Edit hive-site.xml under hive/conf and append the following at the bottom:
<property>
<name>hive.querylog.location</name>
<value>/opt/mapr/hive/hive-0.7.1/logs</value>
</property>
<property>
<name>hive.aux.jars.path</name>
<value>file:///opt/mapr/hive/hive-0.7.1/lib/hive-hbase-handler-0.7.1.jar,file:///opt/mapr/hive/hive-0.7.1/lib/hbase-0.90.4.jar,file:///opt/mapr/hive/hive-0.7.1/lib/zookeeper-3.3.2.jar</value>
</property>
Note: if hive-site.xml does not exist, create it yourself, or rename hive-default.xml.template and use that.
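One way to do that (a sketch; the <property> blocks above still have to go inside <configuration>):

cd /opt/mapr/hive/hive-0.7.1/conf
# start from the shipped template if hive-site.xml is missing
cp hive-default.xml.template hive-site.xml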
3. Copy hbase-0.90.4.jar into hadoop/lib on every Hadoop node (including the master).
4. Copy hbase-site.xml from hbase/conf into hadoop/conf on every Hadoop node (including the master).
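A hedged sketch of steps 3 and 4 (the node names master/node1..node3 are assumptions; the hadoop path matches the one used later in this post; adjust both to your cluster):

# push the HBase jar and hbase-site.xml to every Hadoop node, master included
for host in master node1 node2 node3; do
  scp /opt/mapr/hbase/hbase-0.90.4/hbase-0.90.4.jar $host:/opt/mapr/hadoop/hadoop-0.20.2/lib/
  scp /opt/mapr/hbase/hbase-0.90.4/conf/hbase-site.xml $host:/opt/mapr/hadoop/hadoop-0.20.2/conf/
done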

Note: if steps 3 and 4 are skipped, running Hive will very likely fail with the following error:
org.apache.hadoop.hbase.ZooKeeperConnectionException: HBase is able to connect to ZooKeeper but the connection closes immediately.
This could be a sign that the server has too many connections (30 is the default). Consider inspecting your ZK server logs for that error
and then make sure you are reusing HBaseConfiguration as often as you can. See HTable's javadoc for more information.
at org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher

5. Start Hive
Single-node start:
bin/hive -hiveconf hbase.master=master:60000
Cluster start:
bin/hive -hiveconf hbase.zookeeper.quorum=node1,node2,node3   (list all ZooKeeper nodes)
If hive.aux.jars.path is not configured in hive-site.xml, Hive can be started as follows instead:
hive --auxpath /opt/mapr/hive/hive-0.7.1/lib/hive-hbase-handler-0.7.1.jar,/opt/mapr/hive/hive-0.7.1/lib/hbase-0.90.4.jar,/opt/mapr/hive/hive-0.7.1/lib/zookeeper-3.3.2.jar -hiveconf hbase.master=localhost:60000

Testing showed that after adding the following to Hive's configuration file hive-site.xml,

<property>
  <name>hive.zookeeper.quorum</name>
  <value>node1,node2,node3</value>
  <description>The list of zookeeper servers to talk to. This is only needed for read/write locks.</description>
</property>

Hive can be started without any extra parameters and still work with HBase.
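With that property in place, startup is simply:

bin/hive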

6. Test after startup
(1) Create a table that HBase recognizes:
CREATE TABLE hbase_table_1(key int, value string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name" = "xyz");
hbase.table.name defines the table name in HBase.
hbase.columns.mapping defines the column mapping into HBase: for multiple columns write data:1,data:2; for multiple column families write data1:1,data2:1. The :key entry is a fixed token (it becomes the HBase row key), so the foo field of the source table pokes must hold unique values.
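To make that mapping syntax concrete, here is a minimal sketch (the names hbase_table_multi and multi_demo are invented for the example) that maps two columns in family data plus one column in family data2:

-- :key is the row key; data:1 and data:2 are two columns in family data,
-- data2:1 is one column in a second family data2
CREATE TABLE hbase_table_multi(key int, v1 string, v2 string, w1 string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,data:1,data:2,data2:1")
TBLPROPERTIES ("hbase.table.name" = "multi_demo");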

Create a partitioned table:

CREATE TABLE hbase_table_1(key int, value string)  partitioned by (day string) STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler' WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:val") TBLPROPERTIES ("hbase.table.name" = "xyz");


(2) Import data with SQL
Create a regular Hive table:
create table pokes(foo int,bar string)row format delimited fields terminated by ',';
Bulk-load the data:
load data local inpath '/home/1.txt' overwrite into table pokes;

The contents of 1.txt are:
1,hello 
2,pear 
3,world

Import into hbase_table_1 with SQL:
insert overwrite table hbase_table_1 select * from pokes;

Import into the partitioned table:

insert overwrite table hbase_table_1  partition (day='2012-01-01') select * from pokes;

(3) View the data
hive> select * from hbase_table_1;
OK
1 hello
2 pear
3 world

(Note: HBase-integrated partitioned tables have a known quirk: select * from the table returns no rows, while select key,value from the table does return data.)
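Given that quirk, read the partitioned table by naming its columns explicitly, e.g.:

-- select * comes back empty here; listing the columns works
select key, value from hbase_table_1 where day = '2012-01-01';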

(4) Log into HBase and check the data
hbase shell

hbase(main):002:0> describe 'xyz'
DESCRIPTION                                                          ENABLED
 {NAME => 'xyz', FAMILIES => [{NAME => 'cf1', BLOOMFILTER => 'NONE', true
 REPLICATION_SCOPE => '0', COMPRESSION => 'NONE', VERSIONS => '3',
 TTL => '2147483647', BLOCKSIZE => '65536', IN_MEMORY => 'false',
 BLOCKCACHE => 'true'}]}
1 row(s) in 0.0830 seconds
hbase(main):003:0> scan 'xyz'
ROW COLUMN+CELL 
1 column=cf1:val, timestamp=1331002501432, value=hello 
2 column=cf1:val, timestamp=1331002501432, value=pear 
3 column=cf1:val, timestamp=1331002501432, value=world

The rows just inserted from Hive are now visible in HBase.

7. For tables that already exist in HBase, create them in Hive with CREATE EXTERNAL TABLE.
For example, if the HBase table is named test1 with column families a:, b:, c:, the Hive DDL is:

create external table hive_test(key int, gid map<string,string>, sid map<string,string>, uid map<string,string>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "a:,b:,c:")
TBLPROPERTIES ("hbase.table.name" = "test1");
After the table is created in Hive, query the contents of HBase table test1:
select * from hive_test;

OK
1 {"":"qqq"} {"":"aaa"} {"":"bbb"}
2 {"":"qqq"} {} {"":"bbb"}

To query the values inside the gid field:
select gid[''] from hive_test;
The query result:
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201203052222_0017, Tracking URL = http://localhost:50030/jobdetails.jsp?jobid=job_201203052222_0017
Kill Command = /opt/mapr/hadoop/hadoop-0.20.2/bin/../bin/hadoop job -Dmapred.job.tracker=maprfs:/// -kill job_201203052222_0017
2012-03-06 14:38:29,141 Stage-1 map = 0%, reduce = 0%
2012-03-06 14:38:33,171 Stage-1 map = 100%, reduce = 100%
Ended Job = job_201203052222_0017
OK
qqq
qqq

If the HBase table test1 instead has columns user:gid, user:sid, info:uid, info:level, the Hive DDL is:
create external table hive_test(key int, user map<string,string>, info map<string,string>)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "user:,info:")
TBLPROPERTIES ("hbase.table.name" = "test1");

Query the HBase table with:
select user['gid'] from hive_test;

 

Creating an associated table

Here the table we want to query already exists in HBase, so we create the association with CREATE EXTERNAL TABLE, as follows:

CREATE EXTERNAL TABLE hbase_table_2(key string, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = "data:1")
TBLPROPERTIES("hbase.table.name" = "test");

hbase.columns.mapping points at the corresponding column family; for multiple columns write data:1,data:2; for multiple column families write data1:1,data2:1.

hbase.table.name points at the corresponding HBase table.

hbase_table_2(key string, value string) is the associated table definition.

 

Note: when the data includes int columns, the DDL is different:

 

CREATE EXTERNAL TABLE HisDiagnose(key string, doctorId int, patientId int, description String, rtime int)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,diagnoseFamily:doctorId,diagnoseFamily:patientId,diagnoseFamily:description,diagnoseFamily:rtime","hbase.table.default.storage.type"="binary")
TBLPROPERTIES("hbase.table.name" = "HisDiagnose");

 

 

 

Functional test:

 

Using Hive:

 

hive> create table pokes (foo int, bar string);
OK
Time taken: 0.251 seconds
hive>create table invites (foo INT, bar STRING) partitioned by (ds string);
OK
Time taken: 0.106 seconds
hive>show tables;
OK
invites pokes
Time taken: 0.107 seconds
hive> describe invites;
OK
foo     int
bar     string
ds      string
Time taken: 0.151 seconds
hive> alter table pokes add columns (new_col int);
OK
Time taken: 0.117 seconds
hive> alter table invites add columns (new_col2 int);
OK
Time taken: 0.152 seconds
hive> LOAD DATA LOCAL INPATH './examples/files/kv1.txt' OVERWRITE INTO TABLE pokes;
Copying data from file:/home/hadoop/hadoop-0.19.1/contrib/hive/examples/files/kv1.txt
Loading data to table pokes
OK
Time taken: 0.288 seconds
hive> load data local inpath './examples/files/kv2.txt' overwrite into table invites partition (ds='2008-08-15');
Copying data from file:/home/hadoop/hadoop-0.19.1/contrib/hive/examples/files/kv2.txt
Loading data to table invites partition {ds=2008-08-15}
OK
Time taken: 0.524 seconds
hive> LOAD DATA LOCAL INPATH './examples/files/kv3.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-08');
Copying data from file:/home/hadoop/hadoop-0.19.1/contrib/hive/examples/files/kv3.txt
Loading data to table invites partition {ds=2008-08-08}
OK
Time taken: 0.406 seconds

hive> INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT a.* FROM invites a;
Total MapReduce jobs = 1
Starting Job = job_200902261245_0002, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0002
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job  -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0002
map = 0%,  reduce =0%
map = 50%,  reduce =0%
map = 100%,  reduce =0%
Ended Job = job_200902261245_0002
Moving data to: /tmp/hdfs_out
OK
Time taken: 18.551 seconds

hive> select count(1) from pokes;
Total MapReduce jobs = 2
Number of reducers = 1
In order to change number of reducers use:
set mapred.reduce.tasks = <number>
Starting Job = job_200902261245_0003, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0003
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job  -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0003
map = 0%,  reduce =0%
map = 50%,  reduce =0%
map = 100%,  reduce =0%
map = 100%,  reduce =17%
map = 100%,  reduce =100%
Ended Job = job_200902261245_0003
Starting Job = job_200902261245_0004, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0004
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job  -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0004
map = 0%,  reduce =0%
map = 50%,  reduce =0%
map = 100%,  reduce =0%
map = 100%,  reduce =100%
Ended Job = job_200902261245_0004
OK
500
Time taken: 57.285 seconds

hive> INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT a.* FROM invites a;
Total MapReduce jobs = 1
Starting Job = job_200902261245_0005, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0005
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job  -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0005
map = 0%,  reduce =0%
map = 50%,  reduce =0%
map = 100%,  reduce =0%
Ended Job = job_200902261245_0005
Moving data to: /tmp/hdfs_out
OK
Time taken: 18.349 seconds

hive> INSERT OVERWRITE DIRECTORY '/tmp/reg_5' SELECT COUNT(1) FROM invites a;
Total MapReduce jobs = 2
Number of reducers = 1
In order to change number of reducers use:
set mapred.reduce.tasks = <number>
Starting Job = job_200902261245_0006, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0006
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job  -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0006
map = 0%,  reduce =0%
map = 50%,  reduce =0%
map = 100%,  reduce =0%
map = 100%,  reduce =17%
map = 100%,  reduce =100%
Ended Job = job_200902261245_0006
Starting Job = job_200902261245_0007, Tracking URL = http://gp1:50030/jobdetails.jsp?jobid=job_200902261245_0007
Kill Command = /home/hadoop/hadoop-0.19.1/bin/hadoop job  -Dmapred.job.tracker=gp1:9001 -kill job_200902261245_0007
map = 0%,  reduce =0%
map = 50%,  reduce =0%
map = 100%,  reduce =0%
map = 100%,  reduce =17%
map = 100%,  reduce =100%
Ended Job = job_200902261245_0007
Moving data to: /tmp/reg_5
OK
Time taken: 70.956 seconds
 

 
