Spark Study Notes 6: Spark Streaming
Tags (space-separated): Spark
1. Overview
A simple example:
1. Install nc (netcat) and start it listening on a port:
nc -lk 9999
2. In another terminal, start the example application:
./bin/run-example streaming.NetworkWordCount localhost 9999
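For reference, the bundled NetworkWordCount example does roughly the following (a minimal sketch, not the exact source):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object NetworkWordCount {
  def main(args: Array[String]): Unit = {
    // Two local threads: one for the socket receiver, one for processing.
    val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(1))
    // Read lines from the nc listener started above.
    val lines = ssc.socketTextStream("localhost", 9999)
    val wordCounts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}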
2. Enterprise case study
Requirement:
Report, in real time, the site's traffic over the last two hours:
PV (page views), UV (unique visitors), and region,
recomputed once every 5 minutes.
At 10:00 the window covers 8:00 - 10:00 (24 five-minute batches)
At 10:05 the window covers 8:05 - 10:05 (24 five-minute batches)
StreamingContext(sc, Minutes(5))
DStream.window(Minutes(24 * 5), Minutes(5))  // window = 2 hours, slide = one 5-minute batch
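A sketch of this requirement in spark-shell, assuming access-log lines arrive on a socket and that parseUser is a hypothetical helper extracting a user id from a log line:

import org.apache.spark.streaming.{Minutes, StreamingContext}

// One batch every 5 minutes, matching the reporting interval.
val ssc = new StreamingContext(sc, Minutes(5))
val logs = ssc.socketTextStream("localhost", 9999)
// Keep the last 2 hours (24 batches), recomputed every batch (5 minutes).
val lastTwoHours = logs.window(Minutes(24 * 5), Minutes(5))
// PV: every log line is one page view.
val pv = lastTwoHours.count()
// UV: distinct users in the window (parseUser is hypothetical).
val uv = lastTwoHours.map(line => parseUser(line))
                     .transform(rdd => rdd.distinct())
                     .count()
pv.print()
uv.print()
ssc.start()
ssc.awaitTermination()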
Implementation (Maven dependency):
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.10</artifactId>
    <version>1.3.0</version>
</dependency>
How to run a Scala script inside spark-shell:
scala > :load /opt/cdh5.3.6/spark-1.3.0-bin-2.5.0-cdh5.3.6/HdfsWordCount.scala
3. How Spark Streaming works
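In short, Spark Streaming is a micro-batch system: the live input stream is chopped into small batches, one per batch interval, and each batch becomes an RDD. A DStream is simply that sequence of RDDs, so the familiar RDD operations are applied batch by batch.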
4. Using textFileStream
1. Prepare the data (upload into the directory the stream will monitor, /myspark below):
bin/hdfs dfs -put wordcount.txt /myspark
2. Start the Spark shell:
bin/spark-shell --master local[2]
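Note: with receiver-based sources (such as sockets) one thread is occupied by the receiver, so at least two local threads are needed to also process data. A file stream has no receiver, but local[2] is still a safe habit.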
3. Write the code:
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

// One batch every 30 seconds; spark-shell already provides sc.
val ssc = new StreamingContext(sc, Seconds(30))
// Watch an HDFS directory; only files created after the stream starts are read.
val lines = ssc.textFileStream("hdfs://study.com.cn:8020/myspark")
// Classic word count over each batch.
val words = lines.flatMap(_.split(","))
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)
wordCounts.print()
ssc.start()
ssc.awaitTermination()
4. Test the results: textFileStream only picks up files created in the monitored directory after the stream starts, so upload a fresh file once the context is running; the counts are printed every 30 seconds.
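For example (wordcount2.txt is a hypothetical second input file):
bin/hdfs dfs -put wordcount2.txt /myspark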
5. Development approaches used in industry:
1. Write the code in IDEA and package it into a jar to run (see the sketch at the end of this section).
2. Put the code into a script file and load it from spark-shell:
touch test.scala
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._
val ssc = new StreamingContext(sc, Seconds(30))
val lines = ssc.textFileStream("hdfs://study.com.cn:8020/myspark")
val words = lines.flatMap(_.split(","))
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)
wordCounts.print()
ssc.start()
ssc.awaitTermination()
scala > :load /opt/app/spark-1.3.0-bin-2.5.0/test/test.scala
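For approach 1, a minimal standalone version of the same job (a sketch; the object name and jar name are placeholders, not from the original notes):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object HdfsWordCount {
  def main(args: Array[String]): Unit = {
    // Outside spark-shell there is no ready-made sc, so build our own context.
    val conf = new SparkConf().setAppName("HdfsWordCount")
    val ssc = new StreamingContext(conf, Seconds(30))
    val lines = ssc.textFileStream("hdfs://study.com.cn:8020/myspark")
    val wordCounts = lines.flatMap(_.split(",")).map(word => (word, 1)).reduceByKey(_ + _)
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

Package it as a jar in IDEA and submit it:
bin/spark-submit --master local[2] --class HdfsWordCount hdfs-wordcount.jar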