
Table Data Migration (Exporting Data for a Specified Timestamp Range)

Posted: 2016-10-31 18:27:40


1 CopyTable Tool

Usage:

CopyTable is a utility that can copy part or all of a table, either to the same cluster or to another cluster. The target table must first exist. The usage is as follows:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable [--starttime=X] [--endtime=Y] [--new.name=NEW] [--peer.adr=ADR] tablename

 

Options:

  • starttime Beginning of the time range. Without endtime, this means from starttime to forever.
  • endtime End of the time range.
  • versions Number of cell versions to copy.
  • new.name New table's name.
  • peer.adr Address of the peer cluster given in the format hbase.zookeeper.quorum:hbase.zookeeper.client.port:zookeeper.znode.parent
  • families Comma-separated list of ColumnFamilies to copy.
  • all.cells Also copy delete markers and uncollected deleted cells (advanced option).

Args:

  • tablename Name of table to copy.

 

Example of copying 'TestTable' to a cluster that uses replication, for a 1-hour window:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --starttime=1265875194289 --endtime=1265878794289 \
    --peer.adr=server1,server2,server3:2181:/hbase TestTable
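Note that the two timestamps differ by exactly 3,600,000 ms (1265878794289 - 1265875194289 = 60 * 60 * 1000 ms), i.e., one hour.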

 

Scanner Caching

Caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.
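For example, one way to raise it is to set the property in hbase-site.xml on the client that submits the job, a minimal sketch in which the value 100 is only illustrative and should be tuned to row size and available memory:

<property>
  <name>hbase.client.scanner.caching</name>
  <value>100</value>
</property>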

Versions

By default, the CopyTable utility copies only the latest version of row cells unless --versions=n is explicitly specified in the command.
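For instance, a sketch combining the options listed above to copy three versions of one column family into a new table on the same cluster; the table name 'member5', family 'info', and new name 'member5_copy' are only illustrative:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
    --versions=3 --families=info --new.name=member5_copy member5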

See Jonathan Hsieh's Online HBase Backups with CopyTable blog post for more on CopyTable.

 

2 Export and Import Tools

 

Export is a utility that will dump the contents of a table to HDFS as a SequenceFile. Invoke via:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export <tablename> <outputdir> [<versions> [<starttime> [<endtime>]]]

 

Note: caching for the input Scan is configured via hbase.client.scanner.caching in the job configuration.
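For example, a hedged sketch of passing the property on the command line for an Export run; most Export versions parse Hadoop generic -D options placed before the positional arguments, but if yours does not, set the property in hbase-site.xml instead (the value 100 is illustrative):

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Export \
    -D hbase.client.scanner.caching=100 <tablename> <outputdir>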

 

 


 

Import is a utility that will load data that has been exported back into HBase. Invoke via:

$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>

 

To import files exported from HBase 0.94 into a 0.96 or later cluster, you need to set the system property "hbase.import.version" when running the import command, as below:

$ bin/hbase -Dhbase.import.version=0.94 org.apache.hadoop.hbase.mapreduce.Import <tablename> <inputdir>

 

Concrete usage of Export with a time range: hbase org.apache.hadoop.hbase.mapreduce.Export member5 hdfs://master24:9000/user/hadoop/dump2 1 1401938590466 1401938590467 (here 1 is the number of versions, followed by the start and end timestamps).
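The last two arguments are epoch-millisecond timestamps; note that in the command above they differ by only 1 ms, so the exported window is extremely narrow. A quick way to produce such values, assuming GNU date is available (the date string is only illustrative):

$ date -d '2014-06-05 12:00:00' +%s                        # epoch seconds for the chosen instant
$ echo $(( $(date -d '2014-06-05 12:00:00' +%s) * 1000 ))  # the same instant in epoch milliseconds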

The export path is an HDFS path; write out the full path.

The table being imported into must already exist (create it beforehand), as in the sketch below.
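For example, a minimal sketch of restoring the dump above into a pre-created table; the column family name 'info' is an assumption here and must match the families present in the exported data:

$ echo "create 'member5', 'info'" | hbase shell    # pre-create the target table
$ bin/hbase org.apache.hadoop.hbase.mapreduce.Import member5 hdfs://master24:9000/user/hadoop/dump2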


Original source: http://www.cnblogs.com/yingjie2222/p/6016771.html
