Fast Data Processing with Spark

Time: 2017-03-28 06:52:47
【File attributes】:
File name: Fast Data Processing with Spark
File size: 8.14 MB
File format: PDF
Update time: 2017-03-28 06:52:47
High-speed distributed computing made easy with Spark

Overview
  • Implement Spark's interactive shell to prototype distributed applications
  • Deploy Spark jobs to various clusters such as Mesos, EC2, Chef, YARN, EMR, and so on
  • Use Shark's SQL query-like syntax with Spark

In Detail
Spark is a framework for writing fast, distributed programs. Spark solves similar problems as Hadoop MapReduce does, but with a fast in-memory approach and a clean functional-style API. With its ability to integrate with Hadoop, and its built-in tools for interactive query analysis (Shark), large-scale graph processing and analysis (Bagel), and real-time analysis (Spark Streaming), it can be used interactively to quickly process and query big data sets.

Fast Data Processing with Spark covers how to write distributed MapReduce-style programs with Spark. The book guides you through every step required to write effective distributed programs, from setting up your cluster and interactively exploring the API, to deploying your job to the cluster and tuning it for your purposes.

Fast Data Processing with Spark covers everything from setting up your Spark cluster in a variety of situations (stand-alone, EC2, and so on) to using the interactive shell to write distributed code interactively. From there, we move on to how to write and deploy distributed jobs in Java, Scala, and Python. We then examine how to use the interactive shell to quickly prototype distributed programs and explore the Spark API. We also look at how to use Hive with Spark to get a SQL-like query syntax with Shark, as well as how to manipulate resilient distributed datasets (RDDs).

What you will learn from this book
  • Prototype distributed applications with Spark's interactive shell
  • Learn different ways to interact with Spark's distributed representation of data (RDDs)
  • Load data from various data sources
  • Query Spark with a SQL-like query syntax
  • Integrate Shark queries with Spark programs
  • Effectively test your distributed software
  • Tune a Spark installation
  • Install and set up Spark on your cluster
  • Work effectively with large data sets

Approach
This book is a basic, step-by-step tutorial that helps readers take advantage of all that Spark has to offer.

Who this book is written for
Fast Data Processing with Spark is for software developers who want to learn how to write distributed programs with Spark. It will help developers who have faced problems that are too large to handle on a single computer. No previous experience with distributed programming is necessary. This book assumes knowledge of either Java, Scala, or Python.
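To make the RDD manipulation mentioned above concrete, the following is a rough sketch only, not code from the book: a classic MapReduce-style word count written against Spark's Scala RDD API. The file path, application name, and master URL are made-up placeholders; in Spark's interactive shell the SparkContext `sc` is already provided, so only the transformation lines would be typed.

    // A minimal word-count sketch against Spark's Scala RDD API.
    // Not taken from the book; the path, app name, and master URL are
    // illustrative assumptions.
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._   // pair-RDD implicits for older Spark versions

    object WordCountSketch {
      def main(args: Array[String]): Unit = {
        // "local[2]" runs Spark locally with two worker threads; on a real
        // cluster this would be a Mesos, YARN, or standalone master URL.
        val conf = new SparkConf().setAppName("WordCountSketch").setMaster("local[2]")
        val sc   = new SparkContext(conf)

        // textFile returns an RDD[String] with one element per line.
        val lines = sc.textFile("hdfs:///tmp/input.txt")   // hypothetical input path

        // MapReduce-style word count expressed as RDD transformations.
        val counts = lines
          .flatMap(_.split("\\s+"))
          .filter(_.nonEmpty)
          .map(word => (word, 1))
          .reduceByKey(_ + _)

        // collect() materializes the (word, count) pairs on the driver.
        counts.collect().foreach { case (word, n) => println(s"$word\t$n") }

        sc.stop()
      }
    }

Broadly speaking, the same transformation logic runs unchanged whether the master points at a local process or at a stand-alone, Mesos, or YARN cluster, which is the deployment workflow the book walks through.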

User comments

  • Not the full version, only a preview of part of the content; anyone who wants to download it should take note.
  • The content is quite clear. The English edition takes a bit more time to read, but it is easier to understand than the Chinese edition!
  • Quite clear. Thanks for sharing.
  • A classic resource, very good quality.
  • The first book about Spark; the content is a bit thin, suitable for beginners.
  • A good book, recommended.
  • A good introductory textbook on big data and Spark.
  • Not much content; the coverage is rather shallow.
  • Good, though unfortunately you need decent English to read it.
  • Quite good and usable. The quality is not especially high, but it reads fine.
  • Great material for learning Spark!
  • Simple and easy to understand; not bad.
  • The book's content is good, suitable for beginners.
  • A very good book with detailed, substantial content.
  • A fairly classic Spark resource, worth keeping.
  • There is not much material introducing Spark, and this one is good.
  • Good introductory material.
  • A very good introductory textbook! Thumbs up.
  • Good content, worth studying.
  • Very good, though unfortunately it is in English.
  • Excellent English-language material for learning Spark. Thanks for sharing.
  • A good introductory book; the Chinese edition has now been published as well.
  • Mainly an introductory book; not deep enough.
  • Good, quite concise.