112 changes: 47 additions & 65 deletions README.md
@@ -1,84 +1,66 @@
# Indicium Tech Code Challenge
# Data Engineering Challenge - TechIndicium

Code challenge for a Software Developer with a focus on data projects.
This repository contains scripts for a data engineering challenge proposed by TechIndicium. The goal of the challenge is to extract data from CSV and PostgreSQL sources and transfer it to a database. The process involves setting up the databases, running the extraction scripts, and organizing the extracted data into CSV files.

## Repository Structure
- extract_csv.py: script that downloads and processes a CSV file from GitHub (a minimal sketch follows below).
- extract_postgres.py: script that downloads a SQL file from GitHub, runs its commands against PostgreSQL, and extracts the table data to CSV.
- extract_and_transfer.py: script that transfers the data from the localdb database to the northwind database and saves the data to CSV files organized by date and table.
- setup.sh: script that automates the installation and configuration of the environment, including Docker and PostgreSQL.
- docker-compose.yml: file that configures the northwind database via Docker.
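
A minimal sketch of what extract_csv.py might look like; the raw GitHub URL and the output file name here are placeholders for illustration, not the repository's actual values:

```python
# Hypothetical sketch of extract_csv.py: download a CSV from GitHub and save it locally.
# CSV_URL and the output directory are placeholders, not the real values used in the repo.
from pathlib import Path

import requests

CSV_URL = "https://raw.githubusercontent.com/<owner>/<repo>/main/data/order_details.csv"  # placeholder
OUTPUT_DIR = Path("data/csv")


def download_csv(url: str = CSV_URL, output_dir: Path = OUTPUT_DIR) -> Path:
    """Download the CSV file and save it to the local output directory."""
    output_dir.mkdir(parents=True, exist_ok=True)
    response = requests.get(url, timeout=30)
    response.raise_for_status()  # fail loudly if the download did not succeed
    target = output_dir / "order_details.csv"
    target.write_bytes(response.content)
    return target


if __name__ == "__main__":
    print(f"Saved CSV to {download_csv()}")
```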

## Context
## Step by Step to Run the Challenge

At Indicium we have many projects where we develop the whole data pipeline for our clients, from extracting data from many data sources to loading it at its final destination, which can range from a data warehouse behind a Business Intelligence tool to an API for integrating with third-party systems.
### 1. Clone the Repository

```
git clone https://github.com/Syds674/code-challenge.git
cd code-challenge
```

As a software developer with a focus on data projects, your mission is to plan, develop, deploy, and maintain a data pipeline.
### 2. Run the setup.sh Script (optional, if the environment still needs to be configured)

```
chmod +x setup.sh
./setup.sh
```

The setup.sh script installs Docker, Docker Compose, PostgreSQL, and the required Python dependencies.

## The Challenge
### 3. Start the northwind Database with Docker

```
docker-compose up -d
```

We are going to provide two data sources: a PostgreSQL database and a CSV file.
The command above starts the containers in the background. To check that the containers are running correctly, run:

```
docker-compose ps
```

The CSV file represents details of orders from an ecommerce system.
This should print something like:

```
Name            Command                         State   Ports
-------------------------------------------------------------------------
northwind-db    docker-entrypoint.sh postgres   Up      0.0.0.0:5432->5432/tcp
```

The database provided is a sample database created by Microsoft for educational purposes, called Northwind. The only difference is that the **order_details** table does not exist in the database you are being provided with; that table is represented by the CSV file we provide.
### 4. Access the northwind Database

To access the northwind database you can use psql or a graphical client such as DBeaver or pgAdmin.

Schema of the original Northwind Database:
Using psql:

```
psql -h localhost -U postgres -d northwind
```

![image](https://user-images.githubusercontent.com/49417424/105997621-9666b980-608a-11eb-86fd-db6b44ece02a.png)
The default password for the postgres user is "postgres".

Your challenge is to build a pipeline that extracts the data every day from both sources and writes the data first to local disk, and second to a PostgreSQL database. For this challenge, the CSV file and the database will be static, but in any real-world project both data sources would be changing constantly.
Using a graphical client:

Configure the connection with the following information (a minimal Python connection sketch follows this list):
- Host: localhost
- Port: 5432
- Database: northwind
- User: postgres
- Password: postgres
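
A minimal connection sketch using the parameters above, assuming psycopg2-binary is installed:

```python
# Connect to the northwind database with the connection parameters listed above
# and print the public tables, just to confirm the database is reachable.
import psycopg2

conn = psycopg2.connect(
    host="localhost",
    port=5432,
    dbname="northwind",
    user="postgres",
    password="postgres",
)
with conn, conn.cursor() as cur:
    cur.execute(
        "SELECT table_name FROM information_schema.tables WHERE table_schema = 'public';"
    )
    for (table_name,) in cur.fetchall():
        print(table_name)
conn.close()
```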

It is important that all writing steps (writing data from the inputs to the local filesystem and writing data from the local filesystem to the PostgreSQL database) are isolated from each other; you should be able to run any step without executing the others.
### 5. Run the Data Extraction Scripts

Now that the database is configured and running, you can run the scripts to extract and transfer the data.

For the first step, where you write data to local disk, you should write one file for each table. This pipeline will run every day, so there should be a separation in the file paths you create for each source (CSV or Postgres), table, and execution day combination, e.g.:
To run the script that transfers the data between the localdb and northwind databases and saves the data to CSV files, execute:

```
/data/postgres/{table}/2024-01-01/file.format
/data/postgres/{table}/2024-01-02/file.format
/data/csv/2024-01-02/file.format
```
```
python3 Scripts/extract_and_transfer.py
```

You are free to choose the naming and the format of the file you are going to save.
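
A small sketch of how these date-partitioned paths could be built in Python; the base directory and the CSV format are illustrative choices, not requirements:

```python
# Build output paths of the form {base_dir}/{source}/[{table}/]{YYYY-MM-DD}/file.csv
from datetime import date
from pathlib import Path
from typing import Optional


def output_path(
    source: str,
    execution_date: date,
    table: Optional[str] = None,
    base_dir: Path = Path("data"),
) -> Path:
    """Return the partitioned path for one source/table/execution-day combination."""
    path = base_dir / source
    if table:  # Postgres extractions get one folder per table
        path = path / table
    return path / execution_date.isoformat() / "file.csv"


print(output_path("postgres", date(2024, 1, 1), table="orders"))  # data/postgres/orders/2024-01-01/file.csv
print(output_path("csv", date(2024, 1, 2)))                       # data/csv/2024-01-02/file.csv
```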
This script will (as sketched below):
- Extract the tables from the localdb database
- Transfer the data to the northwind database
- Save the extracted data to CSV files organized by date and table
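
A rough sketch of that kind of transfer logic using pandas and SQLAlchemy; the connection URLs, the source port, and the table list are assumptions for illustration, not the repository's actual code:

```python
# Hypothetical transfer sketch: read tables from a source database, save them as
# date-partitioned CSV files, and load them into the northwind database.
from datetime import date
from pathlib import Path

import pandas as pd
from sqlalchemy import create_engine

SOURCE_URL = "postgresql+psycopg2://postgres:postgres@localhost:5433/localdb"    # assumed source
TARGET_URL = "postgresql+psycopg2://postgres:postgres@localhost:5432/northwind"  # assumed target
TABLES = ["orders", "customers", "products"]  # illustrative subset of the source tables


def transfer_tables(run_date: date = date.today()) -> None:
    source = create_engine(SOURCE_URL)
    target = create_engine(TARGET_URL)
    for table in TABLES:
        df = pd.read_sql_table(table, source)                        # extract from localdb
        out_dir = Path("data") / "postgres" / table / run_date.isoformat()
        out_dir.mkdir(parents=True, exist_ok=True)
        df.to_csv(out_dir / "file.csv", index=False)                 # save CSV per table/date
        df.to_sql(table, target, if_exists="replace", index=False)   # load into northwind


if __name__ == "__main__":
    transfer_tables()
```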

At step 2, you should load the data you created on the local filesystem into the final database.
### 6. Stop the Docker Containers

To stop the Docker containers after the run, execute:

The final goal is to be able to run a query that shows the orders and their details. The orders are placed in a table called **orders** in the Postgres Northwind database. The details are placed in the CSV file provided, and each line has an **order_id** field pointing to the **orders** table.
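
A sketch of how that query could be run and its result exported as evidence, assuming the CSV was loaded into the target database as an order_details table with the standard Northwind column names:

```python
# Join orders with their details in the target northwind database and dump the
# result to a CSV file as evidence. Table and column names follow the standard
# Northwind schema and are assumptions about how the CSV was loaded.
import pandas as pd
from sqlalchemy import create_engine

QUERY = """
SELECT o.*, d.product_id, d.unit_price, d.quantity, d.discount
FROM orders AS o
JOIN order_details AS d ON d.order_id = o.order_id
ORDER BY o.order_id;
"""

engine = create_engine("postgresql+psycopg2://postgres:postgres@localhost:5432/northwind")
result = pd.read_sql(QUERY, engine)
result.to_csv("orders_with_details.csv", index=False)
print(f"Wrote {len(result)} rows of evidence to orders_with_details.csv")
```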

## Solution Diagram

As Indicium uses some standard tools, the challenge was designed to be done using some of these tools.

The following tools should be used to solve this challenge.

Scheduler:
- [Airflow](https://airflow.apache.org/docs/apache-airflow/stable/installation/index.html)

Data Loader:
- [Embulk](https://www.embulk.org) (Java Based)
**OR**
- [Meltano](https://docs.meltano.com/?_gl=1*1nu14zf*_gcl_au*MTg2OTE2NDQ4Mi4xNzA2MDM5OTAz) (Python Based)

Database:
- [PostgreSQL](https://www.postgresql.org/docs/15/index.html)

The solution should be based on the diagrams below:
![image](docs/diagrama_embulk_meltano.jpg)


### Requirements

- You **must** use the tools described above to complete the challenge.
- All tasks should be idempotent; you should be able to run the pipeline every day and, in this case where the data is static, the output should be the same.
- Step 2 depends on both tasks of step 1, so you should not be able to run step 2 for a day if the tasks from step 1 did not succeed.
- You should extract all the tables from the source database, it does not matter that you will not use most of them for the final step.
- You should be able to tell where the pipeline failed clearly, so you know from which step you should rerun the pipeline.
- You have to provide clear instructions on how to run the whole pipeline. The easier the better.
- You must provide evidence that the process has been completed successfully, i.e. you must provide a csv or json with the result of the query described above.
- You should assume that it will run for different days, every day.
- Your pipeline should be prepared to run for past days, meaning you should be able to pass an argument to the pipeline with a day from the past, and it should reprocess the data for that day. Since the data for this challenge is static, the only difference for each day of execution will be the output paths (see the DAG sketch after this list).
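
A sketch of how these requirements could be expressed as an Airflow DAG, assuming Airflow 2.x; the script names and the `--date` flag are assumptions for illustration, not the challenge's prescribed interface:

```python
# Hypothetical Airflow DAG: both step-1 extractions must succeed before step 2 runs,
# and every task receives the execution date ({{ ds }}) so past days can be reprocessed.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="indicium_challenge",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",  # runs every day; past days can be backfilled
    catchup=True,
) as dag:
    # Step 1: extract both sources to local disk, partitioned by execution date
    extract_postgres = BashOperator(
        task_id="extract_postgres",
        bash_command="python3 Scripts/extract_postgres.py --date {{ ds }}",
    )
    extract_csv = BashOperator(
        task_id="extract_csv",
        bash_command="python3 Scripts/extract_csv.py --date {{ ds }}",
    )
    # Step 2 only runs if BOTH step-1 tasks succeeded for that day
    load_to_northwind = BashOperator(
        task_id="load_to_northwind",
        bash_command="python3 Scripts/extract_and_transfer.py --date {{ ds }}",
    )
    [extract_postgres, extract_csv] >> load_to_northwind
```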

### Things that Matter

- Clean and organized code.
- Good decisions at each step (which database, which file format, ...) and good arguments to back those decisions up.
- The aim of the challenge is not only to assess technical knowledge in the area, but also the ability to search for information and use it to solve problems with tools that are not necessarily known to the candidate.
- Point and click tools are not allowed.


Thank you for participating!
```
docker-compose down
```