1. Set the environment variables
#set Environment
export JAVA_HOME=/usr/java/jdk1.7.0_45
export HADOOP_INSTALL=/home/luffy/Development/hadoop-1.2.1
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export SQOOP_HOME=/home/luffy/Development/sqoop-1.4.4.bin__hadoop-1.0.0
export HIVE_HOME=/home/luffy/Development/hive-0.11.0
export HBASE_HOME=/home/luffy/Development/hbase-0.94.13
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$HADOOP_INSTALL/bin:$SQOOP_HOME/bin:$HIVE_HOME/bin:$HBASE_HOME/bin:$PATH
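After appending these lines to the shell profile (assuming ~/.bashrc; adjust for your shell), a quick sanity check that the tools resolve from the new PATH:

# Reload the profile so the new variables take effect
source ~/.bashrc

# The reported versions should match the install paths above
echo $JAVA_HOME
hadoop version
sqoop version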
2. Edit the configuration file "hbase-site.xml"
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/user/hbase</value>
    <description>The directory shared by RegionServers.</description>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
    <description>The replication count for HLog and HFile storage.
    Should not be greater than HDFS datanode count.</description>
  </property>
</configuration>
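For HBase to find HDFS, the host and port in hbase.rootdir must match the NameNode address set in Hadoop's core-site.xml. A minimal sketch of the matching entry, assuming the Hadoop 1.x property name fs.default.name and the single-node setup used here:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>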
3. Start Hadoop
start-all.sh
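Once the script returns, jps can confirm the daemons are up; a minimal check, using the daemon names of a single-node Hadoop 1.x setup:

# Expect NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker in the list
jps

# Confirm HDFS answers at the address used in hbase.rootdir
hadoop fs -ls /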
4. Start HBase
start-hbase.sh
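A similar quick check that HBase came up against HDFS, assuming the single-node setup above:

# Expect HMaster (plus HRegionServer and HQuorumPeer in pseudo-distributed mode)
jps

# Ask HBase for cluster status from its shell
echo "status" | hbase shell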
5. Run the Sqoop import command
sqoop import \
  --connect 'jdbc:sqlserver://192.168.10.164:1433;username=sa;password=12345678;database=demo' \
  --table TAB_USER \
  --hbase-table TabUser \
  --hbase-create-table \
  --hbase-row-key F_ID \
  --column-family cf
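If the command fails with an error like "Could not load db driver class: com.microsoft.sqlserver.jdbc.SQLServerDriver", the Microsoft SQL Server JDBC driver jar probably needs to be copied into Sqoop's lib directory first. A sketch of that fix plus a check that the rows landed in HBase (sqljdbc4.jar is an assumed filename; use the driver jar you actually downloaded):

# Put the SQL Server JDBC driver on Sqoop's classpath if it is missing
cp sqljdbc4.jar $SQOOP_HOME/lib/

# Scan the target table from the HBase shell; rows should be keyed by F_ID
# with columns under the 'cf' family
echo "scan 'TabUser'" | hbase shell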
6. Note: Because HBase depends on Hadoop, it bundles a Hadoop jar under its lib directory. That bundled jar is intended for standalone mode only. In distributed mode, the Hadoop version on the cluster must match the version bundled with HBase, so replace the Hadoop jar in HBase's lib directory with the jar from the Hadoop version you actually run to avoid version-mismatch problems. Make sure the jar is replaced on every node in the cluster where HBase is installed. Hadoop version-mismatch problems show up in various ways, but they all tend to look like HBase has simply hung or crashed.
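A minimal sketch of the swap, assuming HBase 0.94.13 bundles hadoop-core-1.0.4.jar (check lib/ for the actual filename) and that Hadoop 1.2.1 lives at $HADOOP_INSTALL as in step 1:

# Remove the Hadoop jar bundled with HBase (filename is an assumption; verify first)
rm $HBASE_HOME/lib/hadoop-core-1.0.4.jar

# Copy in the jar from the Hadoop version actually running on the cluster
cp $HADOOP_INSTALL/hadoop-core-1.2.1.jar $HBASE_HOME/lib/

# Repeat on every node where HBase is installed, then restart HBase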