Hadoop 2.6.0 / HBase 0.98 Java API Usage Examples

Date: 2021-10-20 08:27:09

Environment all set up, but not sure how to import the jars or call the APIs? This post is for you. (It does not cover HBase internals; that will be a separate article.)

Let’s go!

The Java IDE used here is Eclipse; if you use IntelliJ IDEA the steps are much the same. Sublime, Vim, and Emacs power users can skip ahead.

Download Eclipse

http://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/mars/2/eclipse-java-mars-2-linux-gtk-x86_64.tar.gz

The version downloaded here is Eclipse IDE for Java Developers, eclipse-java-mars-2-linux-gtk-x86_64.tar.gz.
The Java EE edition works just as well; this one is simply closer to what most people were used to on Windows.

Importing the jar dependencies

cd into the eclipse directory
Run ./eclipse to start Eclipse
First, create a new project

Right-click the project and choose Properties

Select Java Build Path -> Libraries

Click Add Library -> User Library -> Next

Mine is already set up in the screenshot; ignore that and pretend it isn't. The next part is the key step.

Click User Library -> New on the right
First create the library for the Hadoop dependencies
Give it a clear, descriptive name (e.g. hdfslib for Hadoop; create a separate one for HBase). For this walkthrough I named it test
Click Add External JARs on the right


Next come the jar paths; no screenshots for these.
Below, $HADOOP_HOME stands for the Hadoop installation directory.
Mine is /usr/local/hadoop.

$HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.6.0.jar
$HADOOP_HOME/share/hadoop/hdfs/lib    (all jars in this directory)
$HADOOP_HOME/share/hadoop/common/hadoop-common-2.6.0.jar
$HADOOP_HOME/share/hadoop/common/lib  (all jars in this directory)

Click Finish.
Next is HBase.
Same process: create another user library, named hbase or whatever you like.

HBase is simpler: just import all the jars under <HBase install dir>/lib.

Hadoop HDFS Java Demo

package test;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsDemo {

    public static void main(String[] args) throws IOException {
        // Path of the file to read; adjust host/port to match fs.defaultFS
        String file = "hdfs://localhost:9000/homework/hw1/tpch/lineitem.tbl";

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(file), conf);
        Path path = new Path(file);
        FSDataInputStream in_stream = fs.open(path);

        // Read the file line by line and print it to stdout
        BufferedReader in = new BufferedReader(new InputStreamReader(in_stream));
        String s;
        while ((s = in.readLine()) != null) {
            System.out.println(s);
        }
        in.close();
        fs.close();
    }

}

For the homework, first upload the hw1 folder to the HDFS root directory.
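If you prefer to do that upload from Java instead of the hdfs dfs -put shell command, the same FileSystem API can copy a local folder into HDFS. A minimal sketch, assuming the same localhost:9000 namenode as above; the local path /home/user/hw1 is a placeholder, replace it with wherever your homework folder actually lives:

```java
package test;

import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsUpload {

    public static void main(String[] args) throws Exception {
        String hdfsRoot = "hdfs://localhost:9000/";

        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(hdfsRoot), conf);

        // Create the target directory, then recursively copy the local
        // hw1 folder into it. "/home/user/hw1" is a placeholder path.
        fs.mkdirs(new Path("/homework"));
        fs.copyFromLocalFile(new Path("/home/user/hw1"), new Path("/homework/hw1"));

        fs.close();
        System.out.println("upload done");
    }
}
```

copyFromLocalFile handles directories recursively, so the whole hw1 tree lands under /homework/hw1 in one call.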

HBase Java Demo

package test;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;

public class HBaseTest {

    public static void main(String[] args) throws MasterNotRunningException, ZooKeeperConnectionException, IOException {
        // create table descriptor
        String tableName = "mytable";
        HTableDescriptor htd = new HTableDescriptor(TableName.valueOf(tableName));

        // create column family descriptor and attach it to the table
        HColumnDescriptor cf = new HColumnDescriptor("mycf");
        htd.addFamily(cf);

        // configure HBase (reads hbase-site.xml from the classpath)
        Configuration configuration = HBaseConfiguration.create();
        HBaseAdmin hAdmin = new HBaseAdmin(configuration);

        // create the table only if it does not exist yet,
        // otherwise a second run throws TableExistsException
        if (!hAdmin.tableExists(tableName)) {
            hAdmin.createTable(htd);
        }
        hAdmin.close();

        // equivalent of shell command: put 'mytable','abc','mycf:a','789'
        HTable table = new HTable(configuration, tableName);
        Put put = new Put("abc".getBytes());
        put.add("mycf".getBytes(), "a".getBytes(), "789".getBytes());
        table.put(put);
        table.close();
        System.out.println("put successfully!");
    }

}
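To verify the row actually landed, you can read it back with a Get, using the same 0.98-era client API as the demo above. A minimal sketch assuming the table and row written by HBaseTest:

```java
package test;

import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;

public class HBaseGetTest {

    public static void main(String[] args) throws Exception {
        HTable table = new HTable(HBaseConfiguration.create(), "mytable");

        // equivalent of shell command: get 'mytable','abc'
        Get get = new Get("abc".getBytes());
        Result result = table.get(get);

        // fetch the value stored under mycf:a and print it
        byte[] value = result.getValue("mycf".getBytes(), "a".getBytes());
        System.out.println(new String(value));

        table.close();
    }
}
```

If the earlier put succeeded, this prints 789; a null value here means the put did not reach the table.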