Snowflake - Build and Architect Data Pipelines Using AWS

Snowflake is the next big thing, and it is evolving into a full-blown data ecosystem. With its scalability and efficiency in handling massive volumes of data, and the several new concepts it introduces, now is the right time to wrap your head around Snowflake and add it to your toolkit. This course not only covers the core features of Snowflake but also teaches you how to deploy Python/PySpark jobs in AWS Glue and Airflow that communicate with Snowflake, which is one of the most important aspects of building pipelines.

In this course, you will start with the basics of Snowflake and then work through its most crucial aspects in an efficient manner. You will write Python/Spark jobs in AWS Glue for data transformation and see real-time streaming using Kafka and Snowflake. You will interact with external functions and their use cases, and examine the security features in Snowflake. Finally, you will look at Snowpark and explore how it can be used for data pipelines and data science.

By the end of this course, you will have a solid understanding of Snowflake and Snowpark, and you will know how to build and architect data pipelines using AWS.

You need an active AWS account to complete the sections related to Python and PySpark. For the rest of the course, a free trial Snowflake account should suffice.

Type
video
publication date
2022-09-27
what you will learn

Learn about Snowflake and its basics before getting started with labs
Explore the crucial aspects of Snowflake in a practical manner
Write Python/Spark jobs in AWS Glue Jobs for data transformation
Execute real-time streaming using Kafka and Snowflake
Interact with external functions and use cases
Learn and explore the Snowpark library

duration
519
key features
Learn from an easy-to-understand, step-by-step course, divided into 85+ videos along with detailed resource files
Integrate real-time streaming data and data orchestration with Airflow and Snowflake
Highly practical explanations and lab exercises to help you get the most out of the course
approach
This course is comprehensive, easy to understand, and designed for intermediate to advanced learners with a basic understanding of programming and SQL.

You are encouraged to read the reference links and the official Snowflake documentation, which are attached as resource links. Lab exercises and practical demonstrations follow the theoretical information, making this a well-balanced course.
audience
This course is ideal for software engineers, aspiring data engineers, data analysts, and data scientists who want to excel in their careers in the IT domain. It is also well suited to programmers and database administrators with experience in writing SQL queries.

Prior experience with SQL, or at least some knowledge of writing queries, and familiarity with Python are a must. You should also have a basic understanding of cloud services such as AWS, along with an active AWS account.
meta description
Be a part of a comprehensive course on AWS and Snowflake integration. Leverage your learning with a power-packed, well-structured course delivered at a comfortable pace.
short description
The course helps you learn Snowflake from scratch and explore several of its important features. You will build automated pipelines with Snowflake and use the AWS cloud with Snowflake as a data warehouse.

You will also explore Snowpark for use in data pipelines.
subtitle
Data engineering and architecting pipelines using Snowflake and AWS cloud
keywords
Snowflake, Data pipelines AWS, SQL, Airflow, Architect Data pipelines, AWS account, Python, Spark, Data Scientists, Analysts, Data Engineers, Software Developers, SQL Programmers, DBAs, real-time streaming, Snowpark
Product ISBN
9781804615676