Apache Spark with Scala – Hands-On with Big Data!

“Big data” analysis is a hot and highly valuable skill—and this course will teach you the hottest technology in big data: Apache Spark. Employers including Amazon, eBay, NASA JPL, and Yahoo all use Spark to quickly extract meaning from massive datasets across a fault-tolerant Hadoop cluster. You will learn those same techniques using your own Windows system right at home. It is easier than you think, and you will learn from an ex-engineer and senior manager from Amazon and IMDb.

In this course, you will learn the concepts of Spark’s Resilient Distributed Datasets (RDDs), DataFrames, and Datasets, and a crash course in the Scala programming language will get you up to speed. You will learn how to develop and run Spark jobs quickly using Scala, IntelliJ, and SBT; how to translate complex analysis problems into iterative or multi-stage Spark scripts; and how to scale up to larger datasets using Amazon’s Elastic MapReduce service while understanding how Hadoop YARN distributes Spark across computing clusters. We will also practice using other Spark technologies, such as Spark SQL, DataFrames, Datasets, Spark Streaming, machine learning, and GraphX.
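To give a flavor of the kind of Spark job written in the course, here is a minimal word-count sketch in Scala. The object name, input file, and `local[*]` master are illustrative assumptions, not taken from the course materials; on a real cluster the master would be supplied by YARN or EMR.

```scala
import org.apache.spark.sql.SparkSession

// Illustrative sketch only: a tiny Spark word-count job in Scala.
object WordCount {
  def main(args: Array[String]): Unit = {
    // Run locally on all cores; on EMR/YARN the master is set by the cluster.
    val spark = SparkSession.builder
      .appName("WordCount")
      .master("local[*]")
      .getOrCreate()

    // "book.txt" is a placeholder input path.
    val counts = spark.sparkContext
      .textFile("book.txt")
      .flatMap(line => line.split("\\W+"))   // split lines into words
      .filter(_.nonEmpty)
      .map(word => (word.toLowerCase, 1))     // pair each word with a count of 1
      .reduceByKey(_ + _)                     // sum counts per word across the cluster

    counts.take(10).foreach(println)
    spark.stop()
  }
}
```

Packaged with SBT and opened in IntelliJ, a job like this runs unchanged on a laptop or, by swapping the master setting, on an Elastic MapReduce cluster.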

By the end of this course, you will be running code that analyzes gigabytes worth of information—in the cloud—in a matter of minutes.

All the codes and supporting files for this course are available at https://github.com/PacktPublishing/Apache-Spark-with-Scala---Hands-On-w…-

Type
video
Category
publication date
2016-09-13
what you will learn

Learn the concepts of Spark’s RDDs, DataFrames, and Datasets
Get a crash course in the Scala programming language
Develop and run Spark jobs quickly using Scala, IntelliJ, and SBT
Translate complex analysis problems into iterative or multi-stage Spark scripts
Scale up to larger datasets using Amazon’s Elastic MapReduce service
Understand how Hadoop YARN distributes Spark across computing clusters

duration
535 minutes
key features
Understand the fundamentals of Scala and the Apache Spark ecosystem
Develop distributed code using the Scala programming language
Work through practical examples that help you develop real-world Big Data applications with Spark and Scala
approach
This course is very hands-on; you will spend most of your time following along with the instructor as we write, analyze, and run real code together—both on your own system and in the cloud using Amazon’s Elastic MapReduce service. Over eight hours of video content is included, with over 20 real examples of increasing complexity that you can build, run, and study yourself. Move through them at your own pace, on your own schedule.
audience
This course is designed for software engineers who want to expand their skills into the world of big data processing on a cluster. Some prior programming or scripting experience is required.
meta description
Get to grips with the fundamentals of Apache Spark for real-time Big Data processing
short description
This is a comprehensive and practical Apache Spark course. In this course, you will learn and master the art of framing data analysis problems as Spark problems through 20+ hands-on examples, and then scale them up to run on cloud computing services. Explore Spark 3, IntelliJ, Structured Streaming, and a stronger focus on the DataSet API.
subtitle
Dive right in with 20+ hands-on examples of analyzing large datasets with Apache Spark, on your desktop or on Hadoop!
keywords
Apache Spark, Spark SQL, Spark RDD (Resilient Distributed Datasets), Spark Streaming, Spark MLlib, Real-time processing, Big Data processing, Big Data, Spark Hadoop, GraphX, DataFrames, Datasets, Machine Learning
Product ISBN
9781787129849