
ETL Data Pipeline

This project is an ETL (Extract, Transform, Load) pipeline built with Prefect for data orchestration. The pipeline extracts data, performs transformations, and loads the data into a destination database (e.g., Snowflake).

Table of Contents

  1. Project Overview
  2. Pipeline Structure
  3. Setup Instructions
  4. Prefect Deployment
  5. Running the Pipeline
  6. Scheduling the Pipeline
  7. Future Enhancements

Project Overview

This ETL pipeline demonstrates an end-to-end data workflow using Prefect, a modern workflow orchestration tool. The pipeline is designed to run both locally and in Prefect Cloud, making it easy to monitor, schedule, and manage.

Pipeline Structure

The ETL pipeline consists of three main tasks:

  1. Extract Data: Pulls data from a source (e.g., a simulated API response).
  2. Transform Data: Modifies the data as needed. In this example, it doubles the age attribute for each record.
  3. Load Data: Loads the transformed data into a destination (e.g., a database or data warehouse).

These tasks are orchestrated using Prefect, with the flow structure defined in data_pipeline.py.
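
For reference, here is a minimal sketch of what data_pipeline.py could look like with Prefect 2.x. The task bodies and record shape are illustrative assumptions, not the project's actual code:

    from prefect import flow, task

    @task
    def extract_data():
        # Simulated API response; each record is assumed to carry an "age" field
        return [{"name": "Alice", "age": 30}, {"name": "Bob", "age": 25}]

    @task
    def transform_data(records):
        # Double the age attribute for each record, as described above
        return [{**r, "age": r["age"] * 2} for r in records]

    @task
    def load_data(records):
        # Stand-in for a real load into Snowflake or another warehouse
        print(f"Loading {len(records)} records")

    @flow(name="etl_data_pipeline")
    def etl_data_pipeline():
        load_data(transform_data(extract_data()))

    if __name__ == "__main__":
        etl_data_pipeline()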

Setup Instructions

1. Clone the Repository

git clone https://github.com/vcovelli/data-pipeline-project.git
cd data-pipeline-project

2. Install Dependencies

Make sure to install Prefect and any other required libraries:

	pip install -r requirements.txt

Note: If requirements.txt doesn’t exist yet, create it with the necessary packages (e.g., prefect, snowflake-connector-python).
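
A minimal requirements.txt matching the tools mentioned in this README might contain just:

    prefect
    snowflake-connector-python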

3. Configure Environment Variables (if needed)

If your pipeline requires sensitive information (e.g., database credentials), store these in environment variables or Prefect Secrets.

For example:

	export DB_USER="your_user"
	export DB_PASSWORD="your_password"
	export DB_ACCOUNT="your_account"
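
As a sketch of how the load step could pick these up, assuming it connects to Snowflake via snowflake-connector-python (connection parameters beyond these three are omitted):

    import os
    import snowflake.connector

    # Read the credentials exported above from the environment
    conn = snowflake.connector.connect(
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        account=os.environ["DB_ACCOUNT"],
    )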

Prefect Deployment

The flow is deployed to Prefect Cloud, where it can be monitored, scheduled, and run.

Deployment YAML

The deployment configuration is located in deployments/etl_data_pipeline.yaml. This YAML file defines the following (a sketch appears after the list):

  • The flow to deploy (etl_data_pipeline)
  • Tags to help with organization in Prefect Cloud
  • An optional schedule for automated runs
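
For orientation, the file might resemble the format produced by prefect deployment build; the deployment name, entrypoint path, and work queue shown here are assumptions, so match them to your own file:

    name: etl-data-pipeline
    flow_name: etl_data_pipeline
    entrypoint: data_pipeline.py:etl_data_pipeline
    tags:
      - etl
    schedule: null
    work_queue_name: default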

Applying the Deployment

To apply the deployment (if using the CLI):

prefect deployment apply deployments/etl_data_pipeline.yaml

Or, create the deployment manually in Prefect Cloud.

Running the Pipeline

Local Run

To test the pipeline locally, run:

python3 data_pipeline.py

Prefect Cloud Run

  1. Go to your Prefect Cloud dashboard.
  2. Locate the etl_data_pipeline flow.
  3. Manually trigger a run (or use the CLI, as shown below) or set up a schedule for automated runs.
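
You can also trigger the deployment from the CLI with prefect deployment run, which takes the names in the form <flow-name>/<deployment-name> (the deployment name here assumes the YAML sketch above):

    prefect deployment run etl_data_pipeline/etl-data-pipeline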

Scheduling the Pipeline

To automate the ETL pipeline, you can configure a schedule in Prefect Cloud.

  1. In the Prefect Cloud dashboard, navigate to Deployments.
  2. Select etl_data_pipeline and add a schedule (e.g., daily at 2 AM).

Alternatively, define the schedule directly in the etl_data_pipeline.yaml file under the schedule field.
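
For example, a daily 2 AM cron schedule in the deployment YAML could look like this (the timezone is an assumption; omitting it defaults to UTC):

    schedule:
      cron: "0 2 * * *"
      timezone: "America/New_York"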

Future Enhancements

Potential improvements to the pipeline include:

  • Dynamic Parameters: Allow users to specify input parameters at runtime.
  • Error Handling and Retries: Implement robust error handling and retries for data extraction and loading (a sketch follows this list).
  • Integration with Other Data Sources: Connect to various data sources, such as APIs, cloud storage, or data lakes.
  • Additional Transformations: Expand the transformation logic to handle more complex data processing.
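
As a concrete starting point for the retries item above, Prefect tasks accept retry settings directly; a minimal sketch (the retry count and delay are illustrative):

    from prefect import task

    # Retry the extract step up to 3 times, waiting 60 seconds between attempts
    @task(retries=3, retry_delay_seconds=60)
    def extract_data():
        ...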
