Apache Spark 3 is an open-source distributed engine for querying and processing data. This course will provide you with a detailed understanding of PySpark and its stack. It is carefully designed to guide you through the process of data analytics using Python with Spark. The author uses an interactive approach to explain key concepts of PySpark, such as the Spark architecture, Spark execution, and transformations and actions using the structured API, and much more. You will be able to leverage the power of Python, Java, and SQL and put it to use in the Spark ecosystem.
You will start by getting a firm understanding of the Apache Spark architecture and how to set up a Python environment for Spark. You will then move on to techniques for collecting, cleaning, and visualizing data by creating dashboards in Databricks. You will learn how to use SQL to interact with DataFrames. The author also provides an in-depth review of RDDs and contrasts them with DataFrames.
Problem challenges are provided at intervals throughout the course so that you can get a firm grasp of the concepts taught.
The code bundle for this course is available here: https://github.com/PacktPublishing/Apache-Spark-3-for-Data-Engineering-…-
Learn Spark architecture, transformations, and actions using the structured API
Learn to set up your own local PySpark environment
Learn to interpret DAG (Directed Acyclic Graph) for Spark execution
Learn to interpret the Spark web UI
Learn the RDD (Resilient Distributed Datasets) API
Learn to visualize (graphs and dashboards) data on Databricks