Spark Operators: RDD Creation Operations

Date: 2022-05-08 21:20:36

Keywords: Spark RDD creation, parallelize, makeRDD, textFile, hadoopFile, hadoopRDD, newAPIHadoopFile, newAPIHadoopRDD

Creating an RDD from a Collection

  • parallelize

def parallelize[T](seq: Seq[T], numSlices: Int = defaultParallelism)(implicit arg0: ClassTag[T]): RDD[T]

Creates an RDD from a Seq collection.

Parameter 1: the Seq collection; required.

Parameter 2: the number of partitions; defaults to defaultParallelism, i.e. the number of CPU cores allocated to the Application.

 
 
    scala> var rdd = sc.parallelize(1 to 10)
    rdd: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[2] at parallelize at <console>:21

    scala> rdd.collect
    res3: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

    scala> rdd.partitions.size
    res4: Int = 15

    // create the RDD with 3 partitions
    scala> var rdd2 = sc.parallelize(1 to 10, 3)
    rdd2: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[3] at parallelize at <console>:21

    scala> rdd2.collect
    res5: Array[Int] = Array(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)

    scala> rdd2.partitions.size
    res6: Int = 3

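To see exactly how parallelize slices the elements across partitions, glom gathers each partition into an Array. A minimal sketch, run against a local master (in spark-shell the SparkContext `sc` already exists, so the setup and stop lines can be skipped):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Local SparkContext for a self-contained run; spark-shell provides `sc` already.
val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("parallelize-demo"))

// glom() turns each partition into an Array of its elements,
// so collect() returns one Array per partition.
val parts = sc.parallelize(1 to 10, 3).glom().collect()
parts.zipWithIndex.foreach { case (p, i) =>
  println(s"partition $i: ${p.mkString(", ")}")
}
// With 10 elements over 3 partitions the range is split roughly evenly:
// partition 0: 1, 2, 3
// partition 1: 4, 5, 6
// partition 2: 7, 8, 9, 10

sc.stop()
```

Each slice covers positions i*length/numSlices until (i+1)*length/numSlices, which is why the last partition gets the extra element.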
  • makeRDD

def makeRDD[T](seq: Seq[T], numSlices: Int = defaultParallelism)(implicit arg0: ClassTag[T]): RDD[T]

This overload behaves exactly the same as parallelize.

def makeRDD[T](seq: Seq[(T, Seq[String])])(implicit arg0: ClassTag[T]): RDD[T]

This overload lets you specify the preferredLocations (preferred hosts) for each partition.

 
 
    scala> var collect = Seq((1 to 10, Seq("slave007.lxw1234.com","slave002.lxw1234.com")),
         | (11 to 15, Seq("slave013.lxw1234.com","slave015.lxw1234.com")))
    collect: Seq[(scala.collection.immutable.Range.Inclusive, Seq[String])] = List((Range(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
    List(slave007.lxw1234.com, slave002.lxw1234.com)), (Range(11, 12, 13, 14, 15),List(slave013.lxw1234.com, slave015.lxw1234.com)))

    scala> var rdd = sc.makeRDD(collect)
    rdd: org.apache.spark.rdd.RDD[scala.collection.immutable.Range.Inclusive] = ParallelCollectionRDD[6] at makeRDD at <console>:23

    scala> rdd.partitions.size
    res33: Int = 2

    scala> rdd.preferredLocations(rdd.partitions(0))
    res34: Seq[String] = List(slave007.lxw1234.com, slave002.lxw1234.com)

    scala> rdd.preferredLocations(rdd.partitions(1))
    res35: Seq[String] = List(slave013.lxw1234.com, slave015.lxw1234.com)


Specifying preferred locations for partitions helps the scheduler optimize for data locality later on.
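The same behavior can be reproduced end to end in local mode; a minimal sketch, where the host names are placeholders for illustration rather than real cluster nodes:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Local SparkContext for a self-contained run; spark-shell provides `sc` already.
val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("makeRDD-demo"))

// Each element of the Seq is (partition data, preferred hosts).
// The host names below are placeholders, not real nodes.
val withHints = Seq(
  (1 to 10,  Seq("host-a.example.com", "host-b.example.com")),
  (11 to 15, Seq("host-c.example.com", "host-d.example.com"))
)

val rdd = sc.makeRDD(withHints)
// One partition is created per element of the input Seq.
println(rdd.partitions.length)
// The scheduler will try to place partition 0's task on host-a or host-b.
println(rdd.preferredLocations(rdd.partitions(0)).mkString(", "))

sc.stop()
```

The location hints are stored with each ParallelCollectionRDD partition, so preferredLocations returns them even when no such hosts exist; they only influence scheduling when the cluster actually has executors on those hosts.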

 

Creating an RDD from External Storage

  • textFile


 
 
    // create from an HDFS file
    scala> var rdd = sc.textFile("hdfs:///tmp/lxw1234/1.txt")
    rdd: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[26] at textFile at <console>:21

    scala> rdd.count
    res48: Long = 4

    // create from a local file
    scala> var rdd = sc.textFile("file:///etc/hadoop/conf/core-site.xml")
    rdd: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[28] at textFile at <console>:21

    scala> rdd.count
    res49: Long = 97


Note that a local file path used here must exist on both the Driver and the Executors.
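textFile also takes an optional second argument, a minimum partition count; Spark may still create more partitions depending on the input splits. A self-contained sketch that writes a small temp file first, so no pre-existing HDFS or local path is assumed:

```scala
import java.nio.file.Files
import org.apache.spark.{SparkConf, SparkContext}

// Local SparkContext for a self-contained run; spark-shell provides `sc` already.
val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("textFile-demo"))

// Write a small temp file so the example does not depend on any existing path.
val path = Files.createTempFile("textFile-demo", ".txt")
Files.write(path, "line1\nline2\nline3\nline4".getBytes("UTF-8"))

// The second argument is a *minimum* number of partitions, not an exact count.
val rdd = sc.textFile(path.toUri.toString, 2)
println(rdd.count())   // 4
println(rdd.first())   // line1

sc.stop()
```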

  • Creating from other HDFS file formats

hadoopFile

sequenceFile

objectFile

newAPIHadoopFile
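As a quick illustration of one of these, objectFile pairs with saveAsObjectFile: the RDD is written out as a SequenceFile of serialized objects and read back with the element type supplied explicitly. A minimal sketch using a temp directory:

```scala
import java.nio.file.Files
import org.apache.spark.{SparkConf, SparkContext}

// Local SparkContext for a self-contained run; spark-shell provides `sc` already.
val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("objectFile-demo"))

// saveAsObjectFile writes the RDD as a SequenceFile of serialized objects;
// sc.objectFile[T] reads it back (the element type must be given explicitly).
val dir = Files.createTempDirectory("objectFile-demo").toString + "/out"
sc.parallelize(Seq("a", "b", "c"), 2).saveAsObjectFile(dir)

val restored = sc.objectFile[String](dir)
println(restored.collect().sorted.mkString(","))   // a,b,c

sc.stop()
```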

  • Creating via the Hadoop APIs

hadoopRDD

newAPIHadoopRDD

For example, creating an RDD from HBase:

 
 
    scala> import org.apache.hadoop.hbase.{HBaseConfiguration, HTableDescriptor, TableName}
    import org.apache.hadoop.hbase.{HBaseConfiguration, HTableDescriptor, TableName}

    scala> import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat

    scala> import org.apache.hadoop.hbase.client.HBaseAdmin
    import org.apache.hadoop.hbase.client.HBaseAdmin

    scala> val conf = HBaseConfiguration.create()
    scala> conf.set(TableInputFormat.INPUT_TABLE, "lxw1234")
    scala> var hbaseRDD = sc.newAPIHadoopRDD(
         | conf,
         | classOf[org.apache.hadoop.hbase.mapreduce.TableInputFormat],
         | classOf[org.apache.hadoop.hbase.io.ImmutableBytesWritable],
         | classOf[org.apache.hadoop.hbase.client.Result])

    scala> hbaseRDD.count
    res52: Long = 1


When reposting, please credit: lxw's Big Data Field » Spark Operators: RDD Creation Operations