HBase client operations

The cluster is fully set up:
1. First, start the ZooKeeper cluster. Run this in bin on every node:
./zkServer.sh start
2. In Hadoop's sbin directory, run:
./start-all.sh
3. In HBase's bin directory, run:
./start-hbase.sh
Once everything is up, we can run a quick test:
4. Test the HBase shell:
./hbase shell
Good — development and debugging begin.
We wrote a client program, but found that:
the program hung and made no progress.
I searched online for a long, long time through all kinds of similar issues, then turned to the logs and found something.
I pasted the error online and found someone with nearly the same problem. He traced it, layer by layer, to unsynchronized clocks between his servers: the times differed by a lot.
我把原文贴出来:http://blog.csdn.net/showmyheart_libo/article/details/16841189
Then I went into each of these regionservers in turn and checked the logs — exactly as he described.
The timestamps in those logs were far apart.
Clock skew within an HBase cluster must stay under 30 seconds (the default hbase.master.maxclockskew is 30000 ms). My cluster had no time server configured, and the resulting skew caused this outage.
Next, let's synchronize time across the cluster.
yum -y install ntp
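Once ntp is installed, the sync itself can be as simple as the sketch below (ntp1.aliyun.com and the hostnames are placeholders — any reachable NTP server works). The last lines show a quick way to eyeball the skew between two nodes' clocks, which is what HBase actually cares about:

```shell
# One-shot sync on every node (root), then keep it synced via cron:
#   ntpdate ntp1.aliyun.com
#   echo '*/10 * * * * /usr/sbin/ntpdate ntp1.aliyun.com' >> /var/spool/cron/root

# Quick skew check: grab epoch seconds from two nodes, e.g.
#   t1=$(ssh h1 date +%s); t2=$(ssh h2 date +%s)
t1=1501120617; t2=1501121047   # sample values, 430 s apart
skew=$(( t1 > t2 ? t1 - t2 : t2 - t1 ))
echo "skew: ${skew}s"
if [ "$skew" -le 30 ]; then
  echo "within the default hbase.master.maxclockskew (30 s)"
else
  echo "too large: regionservers will be refused by the master"
fi
```

With a skew of 430 seconds, as in my logs below, the check reports the skew as too large.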
Then we discovered one node where even yum didn't work. Wonderful. I suspect the yum repo isn't configured properly, so that will need setting up too. Every one of these steps has been a trap.
http://blog.csdn.net/qq_33792843/article/details/76184931
Take a look — we can download Alibaba's yum repo file straight from that site.
Why not wget? Because that command wasn't installed either. Awkward, right? Then copy the repo file onto each node in the cluster, and remember to back up the originals.
Still no luck — looks like we have to set the time by hand. I remember this working two years ago; why not now?
date -s "2017-07-27 13:58:00"
OK — the clocks finally agree. We start the program and HBase again:
2017-07-27 13:59:27,082 WARN [CatalogJanitor-h1:60000] util.Sleeper: We slept 430955ms instead of 300000ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2017-07-27 13:59:27,106 WARN [h1,60000,1501120617113-BalancerChore] util.Sleeper: We slept 430983ms instead of 300000ms, this is likely due to a long garbage collecting pause and it's usually bad, see http://hbase.apache.org/book.html#trouble.rs.runtime.zkexpired
2017-07-27 13:59:27,107 DEBUG [h1,60000,1501120617113-BalancerChore] master.HMaster: Master has not been initialized, don't run balancer.
2017-07-27 13:59:27,109 DEBUG [CatalogJanitor-h1:60000] client.HConnectionManager$HConnectionImplementation: Removed all cached region locations that map to h1,60020,1501120624929
By this point it was past three in the afternoon and I was ready to swear. Stuck for four or five hours — and it turned out to be the firewall!!!
service iptables stop
chkconfig iptables off
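Disabling the firewall works, but as a sketch you could instead open just the ports this 0.9x-era setup uses (these are the defaults: 2181 for ZooKeeper clients, 60000/60010 for the master, 60020/60030 for regionservers; adjust if yours differ — ZooKeeper peers also talk to each other on 2888 and 3888):

```shell
# Open HBase/ZooKeeper ports instead of stopping iptables entirely (run as root on every node)
iptables -I INPUT -p tcp --dport 2181 -j ACCEPT    # ZooKeeper client port
iptables -I INPUT -p tcp --dport 60000 -j ACCEPT   # HMaster RPC
iptables -I INPUT -p tcp --dport 60010 -j ACCEPT   # HMaster web UI
iptables -I INPUT -p tcp --dport 60020 -j ACCEPT   # regionserver RPC
iptables -I INPUT -p tcp --dport 60030 -j ACCEPT   # regionserver web UI
service iptables save                              # persist the rules across reboots
```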
Finally an actual error message — I was thrilled!!!
All the way around the world, and it was the firewall. Unbelievable.
public static void createTable(String tableName) {
    System.out.println("start create table ......");
    try {
        HBaseAdmin hBaseAdmin = new HBaseAdmin(configuration);
        if (hBaseAdmin.tableExists(tableName)) { // if the table already exists, drop it first, then recreate it
            hBaseAdmin.disableTable(tableName);
            hBaseAdmin.deleteTable(tableName);
            System.out.println(tableName + " exists, deleting....");
        }
        HTableDescriptor tableDescriptor = new HTableDescriptor(tableName);
        tableDescriptor.addFamily(new HColumnDescriptor("column1"));
        tableDescriptor.addFamily(new HColumnDescriptor("column2"));
        tableDescriptor.addFamily(new HColumnDescriptor("column3"));
        hBaseAdmin.createTable(tableDescriptor);
    } catch (MasterNotRunningException e) {
        e.printStackTrace();
    } catch (ZooKeeperConnectionException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    System.out.println("end create table ......");
}
And of course the main method:
public static void main(String[] args) {
    System.out.println("main starts");
    createTable("lishouzhuang");
    System.out.println("main ends");
}
So, really — let me say it once more: firewalls and time synchronization matter.
The code I used for inserting data:
public static void insertData(String tableName) {
    System.out.println("start insert data ......");
    HTablePool pool = new HTablePool(configuration, 1000);
    HTable table = (HTable) pool.getTable(tableName);
    Put put = new Put("112233bbbcccc".getBytes()); // one Put is one row; a second row needs a second Put. Each row has a unique rowkey, passed to the Put constructor
    put.add("column1".getBytes(), null, "aaa".getBytes()); // first column of this row
    put.add("column2".getBytes(), null, "bbb".getBytes()); // second column of this row
    put.add("column3".getBytes(), null, "ccc".getBytes()); // third column of this row
    try {
        table.put(put);
    } catch (IOException e) {
        e.printStackTrace();
    }
    System.out.println("end insert data ......");
}
Which throws:
org.apache.hadoop.hbase.client.HTablePool$PooledHTable cannot be cast to org.apache.hadoop.hbase.client.HTable
In this version, HTablePool.getTable returns a pooled wrapper that implements HTableInterface rather than an HTable subclass, so the cast fails. We drop the pool and use HTable directly:
System.out.println("start insert data ......");
HTable table = new HTable(configuration, tableName);
Put put = new Put("112233bbbcccc".getBytes()); // one Put is one row; each row has a unique rowkey, passed to the Put constructor
put.add("column1".getBytes(), null, "aaa".getBytes()); // first column of this row
put.add("column2".getBytes(), null, "bbb".getBytes()); // second column of this row
put.add("column3".getBytes(), null, "ccc".getBytes()); // third column of this row
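Alternatively, if you want to keep the pool, the cast is simply unnecessary: a sketch (untested against this cluster) using the interface type that getTable actually returns, assuming the same static configuration object and an extra import of org.apache.hadoop.hbase.client.HTableInterface:

```java
// Sketch: keep HTablePool but avoid the ClassCastException by using the interface type.
HTablePool pool = new HTablePool(configuration, 1000);
HTableInterface table = pool.getTable(tableName); // no cast to HTable needed
Put put = new Put("112233bbbcccc".getBytes());
put.add("column1".getBytes(), null, "aaa".getBytes());
table.put(put);
table.close(); // in this era of the API, close() hands the table back to the pool
```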
OK — now for real HBase development. I have to say, HBase has a fairly steep learning curve; it deserves some serious effort.
package 最终完整测试版;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTablePool;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.FilterList;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.util.Bytes;
public class HBaseTest2 {
    public static Configuration configuration;
    static {
        configuration = HBaseConfiguration.create();
        configuration.set("hbase.zookeeper.property.clientPort", "2181");
        configuration.set("hbase.zookeeper.quorum", "192.168.181.20,192.168.181.21,192.168.181.23");
        configuration.set("hbase.master", "192.168.181.20:60000");
    }
    public static void main(String[] args) throws IOException {
        System.out.println("test2,,,");
        // createTable("lishouzhuang");                // create the table
        insertData("lishouzhuang");                    // insert a row
        // QueryAll("lishouzhuang");                   // scan all rows
        // QueryByCondition1("lishouzhuang");          // point query by rowkey
        // QueryByCondition2("lishouzhuang");
        // QueryByCondition3("lishouzhuang");
        // deleteRow("lishouzhuang", "112233bbbcccc"); // delete the row with rowkey 112233bbbcccc
        // deleteByCondition("lishouzhuang", "abcdef");
    }
    /**
     * Create a table.
     * @param tableName
     */
    public static void createTable(String tableName) {
        System.out.println("start create table ......");
        try {
            HBaseAdmin hBaseAdmin = new HBaseAdmin(configuration);
            if (hBaseAdmin.tableExists(tableName)) { // if the table already exists, drop it first, then recreate it
                hBaseAdmin.disableTable(tableName);
                hBaseAdmin.deleteTable(tableName);
                System.out.println(tableName + " exists, deleting....");
            }
            HTableDescriptor tableDescriptor = new HTableDescriptor(tableName);
            tableDescriptor.addFamily(new HColumnDescriptor("column1"));
            tableDescriptor.addFamily(new HColumnDescriptor("column2"));
            tableDescriptor.addFamily(new HColumnDescriptor("column3"));
            hBaseAdmin.createTable(tableDescriptor);
        } catch (MasterNotRunningException e) {
            e.printStackTrace();
        } catch (ZooKeeperConnectionException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("end create table ......");
    }
    /**
     * Insert one row.
     * @param tableName
     * @throws IOException
     * One rowkey maps to many columns: 112233bbbcccc is the rowkey, column1 is a column (family) name, aaa is its value — conceptually like a row in MySQL.
     */
    public static void insertData(String tableName) throws IOException {
        System.out.println("start insert data ......");
        HTable table = new HTable(configuration, tableName);
        Put put = new Put("112233bbbcccc".getBytes()); // one Put is one row; each row has a unique rowkey, passed to the Put constructor
        put.add("column1".getBytes(), null, "aaa".getBytes()); // first column of this row
        put.add("column2".getBytes(), null, "bbb".getBytes()); // second column of this row
        put.add("column3".getBytes(), null, "ccc".getBytes()); // third column of this row
        try {
            table.put(put);
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println("end insert data ......");
    }
    public static void dropTable(String tableName) {
        try {
            HBaseAdmin admin = new HBaseAdmin(configuration);
            admin.disableTable(tableName);
            admin.deleteTable(tableName);
        } catch (MasterNotRunningException e) {
            e.printStackTrace();
        } catch (ZooKeeperConnectionException e) {
            e.printStackTrace();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    public static void deleteRow(String tablename, String rowkey) {
        try {
            HTable table = new HTable(configuration, tablename);
            List<Delete> list = new ArrayList<Delete>();
            Delete d1 = new Delete(rowkey.getBytes());
            list.add(d1);
            table.delete(list);
            System.out.println("row deleted!");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    public static void deleteByCondition(String tablename, String rowkey) {
        // I haven't found an API that deletes by a non-rowkey condition directly, nor one that truncates a table in a single call.
    }
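    // That said, delete-by-condition can be emulated in this client version: scan with a
    // filter, collect the matching rowkeys, then batch-delete them. A sketch (the column
    // name and value reuse this post's examples; untested against a live cluster):
    public static void deleteWhereColumn1IsAaa(String tableName) throws IOException {
        HTable table = new HTable(configuration, tableName);
        Scan scan = new Scan();
        scan.setFilter(new SingleColumnValueFilter(
                Bytes.toBytes("column1"), null, CompareOp.EQUAL, Bytes.toBytes("aaa")));
        List<Delete> deletes = new ArrayList<Delete>();
        ResultScanner rs = table.getScanner(scan);
        for (Result r : rs) {
            deletes.add(new Delete(r.getRow())); // queue each matching row for deletion
        }
        rs.close();
        table.delete(deletes); // one batched round of deletes
        // note: the scan still ships every matching row to the client first,
        // so this is nothing like a server-side DELETE ... WHERE
    }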
    /**
     * Scan all rows.
     * @param tableName
     * @throws IOException
     */
    public static void QueryAll(String tableName) throws IOException {
        // HTablePool pool = new HTablePool(configuration, 1000);
        // HTable table = (HTable) pool.getTable(tableName);
        HTable table = new HTable(configuration, tableName);
        try {
            ResultScanner rs = table.getScanner(new Scan());
            for (Result r : rs) {
                System.out.println("rowkey: " + new String(r.getRow()));
                for (KeyValue keyValue : r.raw()) {
                    System.out.println("column: " + new String(keyValue.getFamily())
                            + "====value: " + new String(keyValue.getValue()));
                }
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    /**
     * Point query by rowkey "112233bbbcccc".
     * @param tableName
     * @throws IOException
     */
    public static void QueryByCondition1(String tableName) throws IOException {
        // HTablePool pool = new HTablePool(configuration, 1000);
        // HTable table = (HTable) pool.getTable(tableName);
        HTable table = new HTable(configuration, tableName);
        try {
            Get get = new Get("112233bbbcccc".getBytes()); // look up by rowkey
            Result r = table.get(get);
            System.out.println("rowkey: " + new String(r.getRow()));
            for (KeyValue keyValue : r.raw()) {
                System.out.println("column: " + new String(keyValue.getFamily())
                        + "====value: " + new String(keyValue.getValue()));
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    public static void QueryByCondition2(String tableName) {
        try {
            // HTablePool pool = new HTablePool(configuration, 1000);
            // HTable table = (HTable) pool.getTable(tableName);
            HTable table = new HTable(configuration, tableName);
            Filter filter = new SingleColumnValueFilter(
                    Bytes.toBytes("column1"), null, CompareOp.EQUAL,
                    Bytes.toBytes("aaa")); // match rows where column1 equals aaa
            Scan s = new Scan();
            s.setFilter(filter);
            ResultScanner rs = table.getScanner(s);
            for (Result r : rs) {
                System.out.println("rowkey: " + new String(r.getRow()));
                for (KeyValue keyValue : r.raw()) {
                    System.out.println("column: " + new String(keyValue.getFamily())
                            + "====value: " + new String(keyValue.getValue()));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
    public static void QueryByCondition3(String tableName) {
        try {
            // use HTable directly here too — the pooled table can't be cast to HTable in this version
            HTable table = new HTable(configuration, tableName);
            List<Filter> filters = new ArrayList<Filter>();
            Filter filter1 = new SingleColumnValueFilter(
                    Bytes.toBytes("column1"), null, CompareOp.EQUAL, Bytes.toBytes("aaa"));
            filters.add(filter1);
            Filter filter2 = new SingleColumnValueFilter(
                    Bytes.toBytes("column2"), null, CompareOp.EQUAL, Bytes.toBytes("bbb"));
            filters.add(filter2);
            Filter filter3 = new SingleColumnValueFilter(
                    Bytes.toBytes("column3"), null, CompareOp.EQUAL, Bytes.toBytes("ccc"));
            filters.add(filter3);
            FilterList filterList1 = new FilterList(filters);
            Scan scan = new Scan();
            scan.setFilter(filterList1);
            ResultScanner rs = table.getScanner(scan);
            for (Result r : rs) {
                System.out.println("rowkey: " + new String(r.getRow()));
                for (KeyValue keyValue : r.raw()) {
                    System.out.println("column: " + new String(keyValue.getFamily())
                            + "====value: " + new String(keyValue.getValue()));
                }
            }
            rs.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
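One more note: everything above uses the old 0.9x client API (HBaseAdmin, HTable, HTablePool). If you later move to HBase 1.0 or newer, HTablePool is gone and the entry point becomes a Connection. A sketch of what the insert would look like there (not tested against this cluster; table and column names reuse this post's examples):

```java
// Sketch of the HBase 1.0+ client API.
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class NewApiSketch {
    public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "192.168.181.20,192.168.181.21,192.168.181.23");
        // Connection is heavyweight and thread-safe: create one, share it, close it at shutdown.
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("lishouzhuang"))) {
            Put put = new Put(Bytes.toBytes("112233bbbcccc"));
            // addColumn replaces the old Put.add; family, qualifier, value are explicit
            put.addColumn(Bytes.toBytes("column1"), Bytes.toBytes(""), Bytes.toBytes("aaa"));
            table.put(put);
        } // Table is lightweight: get one per unit of work and close it
    }
}
```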