Course Overview
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, both essential for timely business intelligence and business-critical reporting.
Get hands-on practice implementing pipelines with Dataflow (for Apache Beam) and Serverless for Apache Spark (formerly Dataproc Serverless), and tackle key considerations for data quality, monitoring, and alerting to keep pipelines reliable and operationally sound. Basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
Who should attend
- Data Engineers
- Data Analysts
Prerequisites
- Basic proficiency with Data Warehousing and ETL/ELT concepts
- Basic proficiency in SQL
- Basic programming knowledge (Python recommended)
- Familiarity with gcloud CLI and the Google Cloud console
- Familiarity with core Google Cloud concepts and services
Course Objectives
- Determine whether batch data pipelines are the correct choice for your business use case.
- Design and build scalable batch data pipelines for high-volume ingestion and transformation.
- Implement data quality controls within batch pipelines to ensure data integrity.
- Orchestrate, manage, and monitor batch data pipeline workflows, implementing error handling and observability using logging and monitoring tools.
Outline: Building Batch Data Pipelines on Google Cloud (BBDP)
Module 1 - When to choose batch data pipelines
Description: You will examine the critical role of a data engineer in developing and maintaining batch data pipelines, understand their core components and lifecycle, and analyze common challenges in batch data processing. You'll also identify key Google Cloud services that address these challenges.
Topics:
- Batch data pipelines and their use cases
- Processing and common challenges
Activities:
- Quiz
Module 2 - Design and build batch data pipelines
Description: You will design scalable batch data pipelines for high-volume data ingestion and transformation. You'll also optimize batch jobs for high throughput and cost efficiency using various resource management and performance-tuning techniques. (An illustrative pipeline sketch follows this module's activities.)
Topics:
- Design batch pipelines
- Large-scale data transformations
- Dataflow and Serverless for Apache Spark
- Data connections and orchestration
- Execute an Apache Spark pipeline
- Optimize batch pipeline performance
Activities:
- Quiz
- Lab: Build a Simple Batch Data Pipeline with Serverless for Apache Spark
- Lab: Build a Simple Batch Data Pipeline with Dataflow Job Builder UI
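To ground these labs, here is a minimal sketch of the kind of batch pipeline this module covers, written with the Apache Beam Python SDK (the programming model behind Dataflow). The bucket paths, CSV layout, and field names are illustrative assumptions, not lab materials; by default the code runs locally, and supplying Dataflow pipeline options (runner, project, region) would execute the same pipeline as a Dataflow job.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical paths and schema -- substitute your own bucket and files.
INPUT = "gs://your-bucket/sales/*.csv"    # lines assumed: date,store_id,amount
OUTPUT = "gs://your-bucket/output/daily_totals"

def parse_row(line):
    # Split a CSV line into a (date, amount) key-value pair.
    date, _store_id, amount = line.split(",")
    return (date, float(amount))

with beam.Pipeline(options=PipelineOptions()) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText(INPUT, skip_header_lines=1)
        | "Parse" >> beam.Map(parse_row)
        | "SumPerDay" >> beam.CombinePerKey(sum)   # batch aggregation by key
        | "Format" >> beam.MapTuple(lambda date, total: f"{date},{total}")
        | "Write" >> beam.io.WriteToText(OUTPUT)
    )
```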
Module 3 - Control data quality in batch data pipelines
Description: You will develop data validation rules and cleansing logic to ensure data quality within batch pipelines. You'll also implement strategies for managing schema evolution and performing data deduplication in large datasets. (An illustrative PySpark sketch follows this module's activities.)
Topics:
- Batch data validation and cleansing
- Log and analyze errors
- Schema evolution for batch pipelines
- Data integrity and duplication
- Deduplication with Serverless for Apache Spark
- Deduplication with Dataflow
Activities:
- Quiz
- Lab: Validate Data Quality in a Batch Pipeline with Serverless for Apache Spark
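As a rough sketch of the validation and deduplication techniques this module names, the PySpark fragment below splits valid rows from error rows and keeps one record per key; Serverless for Apache Spark accepts standard PySpark batch jobs like this. The paths, column names (order_id, amount, updated_at), and rules are illustrative assumptions, not the lab's actual dataset.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("batch-data-quality-demo").getOrCreate()

# Hypothetical input -- substitute your own path and schema.
orders = spark.read.json("gs://your-bucket/orders/*.json")

# Validation: route rows that fail basic rules to an errors output
# so they can be logged and analyzed rather than silently dropped.
is_valid = F.col("order_id").isNotNull() & (F.col("amount") >= 0)
valid = orders.filter(is_valid)
errors = orders.filter(~is_valid)

# Deduplication: keep only the most recent record per order_id.
latest_first = Window.partitionBy("order_id").orderBy(F.col("updated_at").desc())
deduped = (
    valid.withColumn("rn", F.row_number().over(latest_first))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

deduped.write.mode("overwrite").parquet("gs://your-bucket/clean/orders/")
errors.write.mode("overwrite").parquet("gs://your-bucket/errors/orders/")
```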
Module 4 - Orchestrate and monitor batch data pipelines
Description: You will orchestrate complex batch data pipeline workflows for efficient scheduling and lineage tracking. You'll also implement robust error handling, monitoring, and observability for batch data pipelines. (An illustrative Airflow DAG sketch follows this module's activities.)
Topics:
- Orchestration for batch processing
- Cloud Composer
- Unified observability
- Alerts and troubleshooting
- Visual pipeline management
- Course summary
Activities:
- Quiz
- Lab: Building Batch Pipelines in Cloud Data Fusion
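For a sense of what orchestration with Cloud Composer looks like, here is a minimal sketch of an Apache Airflow DAG (Cloud Composer runs standard Airflow). The DAG name, schedule, and echo placeholder tasks are illustrative assumptions; on older Airflow releases the `schedule` argument is named `schedule_interval`, and a real pipeline would invoke Dataflow or Spark tasks rather than shell echoes.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Illustrative three-step batch workflow: extract -> transform -> load.
with DAG(
    dag_id="daily_batch_pipeline",   # hypothetical DAG name
    schedule="0 2 * * *",            # run once per day at 02:00
    start_date=datetime(2024, 1, 1),
    catchup=False,                   # do not backfill past runs
) as dag:
    extract = BashOperator(task_id="extract", bash_command="echo extract")
    transform = BashOperator(task_id="transform", bash_command="echo transform")
    load = BashOperator(task_id="load", bash_command="echo load")

    # Dependencies define the execution order of the batch workflow.
    extract >> transform >> load
```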