Alex's Hadoop Tutorial for Beginners, Lesson 12: Installing Sqoop1 and Importing/Exporting Data

Date: 2022-05-19 07:30:22

Original post: http://blog.csdn.net/nsrainbow/article/details/41575807

What is Sqoop

Sqoop is a tool for moving data between traditional relational databases and HDFS, in both directions. Sqoop 2 has been released, but as of this writing it is still half-baked: it does not support HBase and offers far fewer features, so this lesson focuses on Sqoop 1.

Installing Sqoop1


 yum install -y sqoop


Use help to verify the installation:
# sqoop help
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/11/28 11:33:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.

Copy the JDBC driver to /usr/lib/sqoop/lib

Download the MySQL JDBC driver (Connector/J) from the official MySQL site.

After downloading, unpack the archive, find the driver jar, upload it to the server, and move it into place:

mv /home/alex/mysql-connector-java-5.1.34-bin.jar /usr/lib/sqoop/lib
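As a quick sanity check, confirm the jar landed where Sqoop looks for drivers:

ls -l /usr/lib/sqoop/lib/mysql-connector-java-*.jar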


Importing

Preparing the data

Create a sqoop_test database in MySQL:
create database sqoop_test;


Create a table inside sqoop_test:
CREATE TABLE `employee` (
  `id` int(11) NOT NULL,
  `name` varchar(20) NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;

Insert a few rows:
insert into employee (id,name) values (1,'michael');  
insert into employee (id,name) values (2,'ted');
insert into employee (id,name) values (3,'jack');

Importing from MySQL into HDFS

Listing databases and tables

Before rushing into the import, let's warm up with a few preparation steps; they also make troubleshooting easier.
First, make the MySQL test user accessible remotely. Hadoop distributes import/export tasks across different machines, so the JDBC URL cannot say localhost; it must use a hostname or an IP. This example tests directly with root, so we simply change root's Host (never do this in a real production environment!):
mysql> use mysql

mysql> update user set Host='%' where Host='127.0.0.1' and User='root';
mysql> flush privileges;
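Opening up root like this is only acceptable on a throwaway test box. For anything longer-lived, a dedicated least-privilege account is the safer route; a sketch (the user name and password below are made up for illustration):

mysql> CREATE USER 'sqoop'@'%' IDENTIFIED BY 'sqoop_pw';
mysql> GRANT SELECT, INSERT ON sqoop_test.* TO 'sqoop'@'%';
mysql> FLUSH PRIVILEGES;

SELECT covers imports and INSERT covers exports; grant more only as needed.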



List all databases:
# sqoop list-databases --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/12/01 09:20:28 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
14/12/01 09:20:28 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/12/01 09:20:28 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
cacti
metastore
mysql
sqoop_test
wordpress
zabbix



Now connect to the database with Sqoop and list all tables:
# sqoop list-tables --connect jdbc:mysql://host1/sqoop_test --username root --password root
Warning: /usr/lib/sqoop/../hive-hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
14/11/28 11:46:11 INFO sqoop.Sqoop: Running Sqoop version: 1.4.4-cdh5.0.1
14/11/28 11:46:11 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
14/11/28 11:46:11 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
employee
student
workers

This command does not need a driver class name because Sqoop supports MySQL out of the box. To pass the JDBC driver class explicitly, use:
# sqoop list-tables --connect jdbc:mysql://localhost/sqoop_test --username root --password root --driver com.mysql.jdbc.Driver
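Also note the warning that keeps appearing in the output: putting --password on the command line leaves it in your shell history and in process listings. As the warning itself suggests, -P makes Sqoop prompt for the password interactively instead:

# sqoop list-tables --connect jdbc:mysql://host1:3306/sqoop_test --username root -P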

Importing data into HDFS

sqoop import --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root --table employee --m 1 --target-dir /user/test3
  • import — run an import job
  • --connect — the JDBC connection URL
  • --username — the database user
  • --password — the database password
  • --table — the source table to import from
  • --m — the number of parallel map tasks, set to 1 here (a parallel sketch follows this list)
  • --target-dir — the HDFS directory where the imported files are stored
  • More detailed parameter descriptions are at http://sqoop.apache.org/docs/1.4.5/SqoopUserGuide.html#_purpose
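As promised in the list above, here is what a parallel import might look like. With --m greater than 1, Sqoop needs a column to split the table across map tasks; by default it uses the primary key, and --split-by names one explicitly. A sketch with 4 mappers (the target directory here is made up for illustration):

sqoop import --connect jdbc:mysql://host1:3306/sqoop_test --username root -P --table employee --split-by id --m 4 --target-dir /user/test3_parallel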
Running the one-mapper import above gives:
[root@host1 hadoop-hdfs]# sqoop import --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root --table employee --m 1 --target-dir /user/test3
Warning: /usr/lib/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
15/01/23 06:48:10 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.2.1
15/01/23 06:48:10 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/01/23 06:48:11 INFO manager.SqlManager: Using default fetchSize of 1000
15/01/23 06:48:11 INFO tool.CodeGenTool: Beginning code generation
15/01/23 06:48:12 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
15/01/23 06:48:12 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
15/01/23 06:48:12 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/0989201fc3275ff35dc9c41f1031ea42/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/01/23 06:48:45 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/0989201fc3275ff35dc9c41f1031ea42/employee.jar
15/01/23 06:48:47 WARN manager.MySQLManager: It looks like you are importing from mysql.
15/01/23 06:48:47 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
15/01/23 06:48:47 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
15/01/23 06:48:47 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
15/01/23 06:48:47 INFO mapreduce.ImportJobBase: Beginning import of employee
15/01/23 06:48:57 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/01/23 06:49:12 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/01/23 06:49:13 INFO client.RMProxy: Connecting to ResourceManager at host1/192.168.199.126:8032
15/01/23 06:50:10 INFO db.DBInputFormat: Using read commited transaction isolation
15/01/23 06:50:10 INFO mapreduce.JobSubmitter: number of splits:1
15/01/23 06:50:12 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1421771779239_0003
15/01/23 06:50:22 INFO impl.YarnClientImpl: Submitted application application_1421771779239_0003
15/01/23 06:50:23 INFO mapreduce.Job: The url to track the job: http://host1:8088/proxy/application_1421771779239_0003/
15/01/23 06:50:23 INFO mapreduce.Job: Running job: job_1421771779239_0003
15/01/23 06:57:10 INFO mapreduce.Job: Job job_1421771779239_0003 running in uber mode : false
15/01/23 06:57:16 INFO mapreduce.Job: map 0% reduce 0%
15/01/23 06:58:13 INFO mapreduce.Job: map 100% reduce 0%
15/01/23 06:58:19 INFO mapreduce.Job: Job job_1421771779239_0003 completed successfully
15/01/23 06:58:33 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=128844
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=87
HDFS: Number of bytes written=23
HDFS: Number of read operations=4
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=74359
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=74359
Total vcore-seconds taken by all map tasks=74359
Total megabyte-seconds taken by all map tasks=76143616
Map-Reduce Framework
Map input records=3
Map output records=3
Input split bytes=87
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=501
CPU time spent (ms)=2680
Physical memory (bytes) snapshot=107692032
Virtual memory (bytes) snapshot=654852096
Total committed heap usage (bytes)=17760256
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=23
15/01/23 06:58:35 INFO ipc.Client: Retrying connect to server: host1.localdomain/192.168.199.126:39437. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
15/01/23 06:58:36 INFO ipc.Client: Retrying connect to server: host1.localdomain/192.168.199.126:39437. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
15/01/23 06:58:37 INFO ipc.Client: Retrying connect to server: host1.localdomain/192.168.199.126:39437. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
15/01/23 06:58:37 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
15/01/23 06:59:16 INFO mapreduce.ImportJobBase: Transferred 23 bytes in 601.9783 seconds (0.0382 bytes/sec)
15/01/23 06:59:16 INFO mapreduce.ImportJobBase: Retrieved 3 records.


Check the result:
# hdfs dfs -ls /user/test3
Found 2 items
-rw-r--r-- 2 root supergroup 0 2014-12-01 14:16 /user/test3/_SUCCESS
-rw-r--r-- 2 root supergroup 16 2014-12-01 14:16 /user/test3/part-m-00000
# hdfs dfs -cat /user/test3/part-m-00000
1,michael
2,ted
3,jack
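Each map task writes one part-m-NNNNN file, and fields are comma-separated by default. If your data itself contains commas, pick another delimiter; a sketch using tabs (the target directory is made up for illustration):

sqoop import --connect jdbc:mysql://host1:3306/sqoop_test --username root -P --table employee --m 1 --fields-terminated-by '\t' --target-dir /user/test3_tab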


Problems I ran into

If you see the following error:
14/12/01 10:12:42 INFO mapreduce.Job: Task Id : attempt_1406097234796_0017_m_000000_0, Status : FAILED
Error: employee : Unsupported major.minor version 51.0
Run ps aux | grep hadoop and you will see that Hadoop is running on JDK 1.6. My CDH was 5.0.1 with Sqoop 1.4.4, and I hit exactly this problem.
The cause: Sqoop is compiled with JDK 1.7 (class format 51.0 is the Java 7 class file version; 50.0 is Java 6), so if ps aux | grep hadoop shows Hadoop running on 1.6, Sqoop cannot work. Note: CDH 4.7 and later are compatible with JDK 1.7, but if you upgraded from 4.5 you may find Hadoop still on JDK 1.6. You need to switch the whole Hadoop stack over to JDK 1.7, which is also the officially recommended pairing.
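A rough way to see which JVM binaries the Hadoop daemons were actually launched with (the grep pattern simply pulls .../bin/java paths out of the process list):

ps aux | grep hadoop | grep -o '[^ ]*/bin/java' | sort -u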

How to change the JDK

Cloudera documents two fixes:
  • http://www.cloudera.com/content/cloudera/en/documentation/cdh4/latest/CDH4-Requirements-and-Supported-Versions/cdhrsv_topic_3.html — create a symlink named default under /usr/java/ pointing to the JDK you want. I did this; it had no effect.
  • http://www.cloudera.com/content/cloudera/en/documentation/archives/cloudera-manager-4/v4-5-3/Cloudera-Manager-Enterprise-Edition-Installation-Guide/cmeeig_topic_16_2.html — add an environment variable. I did this too; also no effect.
In the end I took the blunt approach: stop all related services, delete that wretched JDK 1.6, then restart everything. After that, /usr/java/default was finally picked up.
Commands to stop all Hadoop-related services:
for x in `cd /etc/init.d ; ls hive-*` ; do sudo service $x stop ; done
for x in `cd /etc/init.d ; ls hbase-*` ; do sudo service $x stop ; done
for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
/etc/init.d/zookeeper-server stop


If you did not install zookeeper, hbase, or hive, skip those lines. Better still, run ps aux | grep jre1.6 to find every service still on the old JDK and stop them one by one: the other services first, Hadoop last.
Start everything again:

/etc/init.d/zookeeper-server start
for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x start ; done
for x in `cd /etc/init.d ; ls hbase-*` ; do sudo service $x start ; done
for x in `cd /etc/init.d ; ls hive-*` ; do sudo service $x start ; done

Exporting data from HDFS to MySQL

This continues from the import example above.

Preparing the data

Empty the employee table:
mysql> truncate employee;

Export the data to MySQL

# sqoop export --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root --table employee --m 1 --export-dir /user/test3
The output:
[root@host1 hadoop-hdfs]# sqoop export --connect jdbc:mysql://host1:3306/sqoop_test --username root --password root --table employee --m 1 --export-dir /user/test3
Warning: /usr/lib/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /usr/lib/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
15/01/23 07:04:44 INFO sqoop.Sqoop: Running Sqoop version: 1.4.5-cdh5.2.1
15/01/23 07:04:44 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
15/01/23 07:04:45 INFO manager.SqlManager: Using default fetchSize of 1000
15/01/23 07:04:45 INFO tool.CodeGenTool: Beginning code generation
15/01/23 07:04:48 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
15/01/23 07:04:48 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `employee` AS t LIMIT 1
15/01/23 07:04:48 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/lib/hadoop-mapreduce
Note: /tmp/sqoop-root/compile/4e6318352dc0beeb6e1e7724c8a6d935/employee.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
15/01/23 07:05:07 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/4e6318352dc0beeb6e1e7724c8a6d935/employee.jar
15/01/23 07:05:07 INFO mapreduce.ExportJobBase: Beginning export of employee
15/01/23 07:05:11 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
15/01/23 07:05:24 INFO Configuration.deprecation: mapred.reduce.tasks.speculative.execution is deprecated. Instead, use mapreduce.reduce.speculative
15/01/23 07:05:24 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
15/01/23 07:05:24 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
15/01/23 07:05:25 INFO client.RMProxy: Connecting to ResourceManager at host1/192.168.199.126:8032
15/01/23 07:06:00 INFO input.FileInputFormat: Total input paths to process : 1
15/01/23 07:06:00 INFO input.FileInputFormat: Total input paths to process : 1
15/01/23 07:06:01 INFO mapreduce.JobSubmitter: number of splits:1
15/01/23 07:06:01 INFO Configuration.deprecation: mapred.map.tasks.speculative.execution is deprecated. Instead, use mapreduce.map.speculative
15/01/23 07:06:02 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1421771779239_0004
15/01/23 07:06:05 INFO impl.YarnClientImpl: Submitted application application_1421771779239_0004
15/01/23 07:06:06 INFO mapreduce.Job: The url to track the job: http://host1:8088/proxy/application_1421771779239_0004/
15/01/23 07:06:06 INFO mapreduce.Job: Running job: job_1421771779239_0004
15/01/23 07:08:03 INFO mapreduce.Job: Job job_1421771779239_0004 running in uber mode : false
15/01/23 07:08:03 INFO mapreduce.Job: map 0% reduce 0%
15/01/23 07:12:15 INFO mapreduce.Job: map 100% reduce 0%
15/01/23 07:12:49 INFO mapreduce.Job: Job job_1421771779239_0004 completed successfully
15/01/23 07:12:52 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=128509
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=147
HDFS: Number of bytes written=0
HDFS: Number of read operations=4
HDFS: Number of large read operations=0
HDFS: Number of write operations=0
Job Counters
Launched map tasks=1
Rack-local map tasks=1
Total time spent by all maps in occupied slots (ms)=253584
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=253584
Total vcore-seconds taken by all map tasks=253584
Total megabyte-seconds taken by all map tasks=259670016
Map-Reduce Framework
Map input records=3
Map output records=3
Input split bytes=121
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=3872
CPU time spent (ms)=3390
Physical memory (bytes) snapshot=97640448
Virtual memory (bytes) snapshot=652566528
Total committed heap usage (bytes)=15585280
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=0
15/01/23 07:12:52 INFO mapreduce.ExportJobBase: Transferred 147 bytes in 448.1491 seconds (0.328 bytes/sec)
15/01/23 07:12:52 INFO mapreduce.ExportJobBase: Exported 3 records.



Finally, check in MySQL that all 3 rows were exported:
mysql> select * from employee;
+----+---------+
| id | name    |
+----+---------+
|  1 | michael |
|  2 | ted     |
|  3 | jack    |
+----+---------+
3 rows in set (0.12 sec)
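One caveat: export only does INSERTs by default, so running the same export again while these primary keys are already in the table will fail with duplicate-key errors. If you want re-runs to update matching rows instead, Sqoop supports --update-key; a sketch:

# sqoop export --connect jdbc:mysql://host1:3306/sqoop_test --username root -P --table employee --update-key id --m 1 --export-dir /user/test3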



Hadoop is a distributed system: every MapReduce task is dispatched to some node in the cluster for execution. So in a real environment, never write localhost in the MySQL JDBC URL; otherwise only the tasks that happen to land on the machine running MySQL can reach the database, and the tasks on every other node will fail.