Senior Data Engineer – DBT

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer – DBT on a contract-to-hire basis; the pay rate is undisclosed. Required skills include DBT, Python, SQL, and AWS, along with 10+ years of experience in data engineering and model development.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 25, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
New York, United States
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Apache Airflow #Programming #Version Control #NumPy #AWS Glue #dbt (data build tool) #Data Storage #Storage #S3 (Amazon Simple Storage Service) #Data Analysis #SQL (Structured Query Language) #Data Integration #SQL Queries #Python #AWS (Amazon Web Services) #Athena #Cloud #AWS Lambda #Monitoring #IAM (Identity and Access Management) #Security #Macros #Data Warehouse #Documentation #Lambda (AWS Lambda) #Pandas #Data Lake #AWS S3 (Amazon Simple Storage Service) #Logging #Airflow #Data Engineering #Redshift #GIT #Data Processing
Role description

Title: Senior Data Engineer – DBT

Location: New York, NY (Remote)

Job Type: Contract to Hire

Requirements:

Must-have skills:

DBT

Python

SQL

AWS

10+ Years’ Experience

DBT Proficiency:

Model development:

Experience creating complex DBT models, including incremental models, snapshots, and documentation; ability to write and maintain DBT macros for reusable code

Testing and documentation:

Proficiency in implementing DBT tests for data validation and quality checks

Familiarity with generating and maintaining documentation using DBT's built-in features
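By way of illustration, here is a minimal sketch of driving both of these from Python using dbt's programmatic entry point (dbtRunner, available in dbt-core 1.5+); the "staging" selector is a hypothetical example:

```python
# Minimal sketch: run dbt tests and generate the built-in docs from Python.
# Assumes dbt-core >= 1.5 and an already-configured project/profile;
# the "staging" selector is hypothetical.
from dbt.cli.main import dbtRunner

runner = dbtRunner()

# Run schema and data tests for validation and quality checks
test_result = runner.invoke(["test", "--select", "staging"])
if not test_result.success:
    raise SystemExit("dbt tests failed")

# Generate the built-in documentation site (catalog and lineage)
runner.invoke(["docs", "generate"])
```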

Version control:

Experience managing DBT projects with Git, including implementing CI/CD processes from scratch
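As one possible shape for such a pipeline (a sketch, not a prescribed setup), a small Python script that a Git-triggered CI job could run, failing the build if any dbt step fails; the "ci" target name is hypothetical:

```python
# Hypothetical CI step: build and test the dbt project on every push.
# Any dbt failure propagates a nonzero exit code so the CI job fails.
import subprocess
import sys

steps = [
    ["dbt", "deps"],                     # install package dependencies
    ["dbt", "build", "--target", "ci"],  # run and test models ("ci" target is hypothetical)
]

for cmd in steps:
    result = subprocess.run(cmd)
    if result.returncode != 0:
        sys.exit(result.returncode)
```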

   • AWS Expertise:

o Data Storage Solutions:

▪ In-depth understanding of AWS S3 for data storage, including best practices for organization and security

▪ Experience with AWS Redshift for data warehousing and performance optimization
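For example, a minimal boto3 sketch of one common organization-and-security practice: date-partitioned key prefixes plus server-side encryption (bucket and file names are hypothetical):

```python
# Sketch: write a file to S3 under a date-partitioned key layout with
# server-side encryption at rest. Bucket and prefix names are hypothetical.
from datetime import date

import boto3

s3 = boto3.client("s3")

bucket = "example-data-lake"  # hypothetical bucket
key = f"raw/orders/dt={date.today():%Y-%m-%d}/orders.csv"  # partition-style prefix

with open("orders.csv", "rb") as f:
    s3.put_object(
        Bucket=bucket,
        Key=key,
        Body=f,
        ServerSideEncryption="aws:kms",  # encrypt the object at rest
    )
```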

o Data Integration:

▪ Familiarity with AWS Glue for ETL processes and orchestration (nice to have)

▪ Experience with AWS Lambda for serverless data processing tasks
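A minimal sketch of such a task, assuming a Lambda function subscribed to S3 upload events (the processing step is purely illustrative):

```python
# Sketch: an AWS Lambda handler that reads a newly uploaded S3 object
# and performs lightweight processing. Bucket and key come from the
# S3 event payload; the row count is an illustrative stand-in.
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        obj = s3.get_object(Bucket=bucket, Key=key)
        rows = obj["Body"].read().decode("utf-8").splitlines()
        # Hypothetical processing step: count rows and log the result
        print(json.dumps({"bucket": bucket, "key": key, "rows": len(rows)}))
```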

o Workflow Orchestration:

▪ Proficiency in using Apache Airflow on AWS to design, schedule, and monitor complex data flows

▪ Ability to integrate Airflow with AWS services and DBT models, such as triggering a DBT run or an EMR job, or reading from S3 and writing to Redshift
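A sketch of what such an integration might look like, assuming Airflow 2.x with the Amazon provider installed and dbt available on the workers; connection IDs, bucket, schema, and model names are hypothetical:

```python
# Sketch: an Airflow DAG that copies a file from S3 into Redshift and
# then triggers a dbt run. All names and connection IDs are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    load_raw = S3ToRedshiftOperator(
        task_id="load_raw_orders",
        s3_bucket="example-data-lake",
        s3_key="raw/orders/dt={{ ds }}/orders.csv",
        schema="raw",
        table="orders",
        copy_options=["CSV"],
        redshift_conn_id="redshift_default",
        aws_conn_id="aws_default",
    )

    run_dbt = BashOperator(
        task_id="run_dbt_models",
        bash_command="dbt run --select orders",  # hypothetical model selector
    )

    load_raw >> run_dbt  # the load lands before the dbt models rebuild
```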

o Data Lakes and Data Warehousing:

▪ Understanding of the architecture of data lakes vs. data warehouses and when to use each

▪ Experience with Amazon Athena for querying data directly in S3 using SQL
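For instance, a minimal boto3 sketch of running an Athena query over data in S3 and polling for the result (database, table, and output location are hypothetical):

```python
# Sketch: query data sitting in S3 with Amazon Athena via boto3,
# polling until the query finishes. All names are hypothetical.
import time

import boto3

athena = boto3.client("athena")

execution = athena.start_query_execution(
    QueryString="SELECT COUNT(*) FROM orders",   # hypothetical table
    QueryExecutionContext={"Database": "raw"},   # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows)
```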

o Monitoring and Logging:

▪ Familiarity with Amazon CloudWatch for monitoring pipelines and setting up alerts for workflow failures
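One way this is commonly wired up (a sketch; the metric, namespace, and SNS topic ARN are hypothetical) is an alarm on a custom failure metric that the pipeline publishes:

```python
# Sketch: a CloudWatch alarm that fires when a pipeline reports failures.
# Assumes the pipeline publishes a custom "FailedRuns" metric via
# put_metric_data; namespace, names, and the SNS ARN are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="orders-pipeline-failures",
    Namespace="DataPipelines",  # hypothetical custom namespace
    MetricName="FailedRuns",
    Statistic="Sum",
    Period=300,                 # evaluate in 5-minute windows
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:data-alerts"],
    TreatMissingData="notBreaching",
)
```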

o Cloud Security:

▪ Knowledge of AWS security best practices, including IAM roles, encryption, and DBT profile access configurations
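As a small illustration of the IAM side, a sketch of assuming a narrowly scoped role for pipeline access rather than embedding long-lived credentials (role ARN and session name are hypothetical):

```python
# Sketch: obtain short-lived, role-scoped credentials via STS instead of
# long-lived keys. Role ARN and session name are hypothetical.
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/dbt-pipeline-role",
    RoleSessionName="dbt-run",
)["Credentials"]

# Build a session limited to the permissions attached to that role
session = boto3.session.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3 = session.client("s3")
```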

Programming Skills:

   • Python:

o Proficiency in Pandas and NumPy for data analysis and manipulation (see the sketch after this list)

o Ability to write scripts for automating ETL processes and scheduling jobs using Airflow

o Experience creating custom DBT macros using Jinja and Python, allowing for reusable components within DBT models

o Knowledge of how to implement conditional logic in DBT through Python
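The sketch referenced above: a minimal Pandas/NumPy transform step inside an ETL job (input file, columns, and the business rule are hypothetical):

```python
# Sketch: a Pandas/NumPy transform step in an ETL job. The input file,
# column names, and the $1,000 threshold are hypothetical.
import numpy as np
import pandas as pd

df = pd.read_csv("orders.csv", parse_dates=["order_date"])

# Clean and derive columns
df["amount"] = df["amount"].fillna(0.0)
df["is_large_order"] = np.where(df["amount"] > 1000, True, False)

# Aggregate for a downstream model and write the result
daily = df.groupby(df["order_date"].dt.date)["amount"].sum().reset_index()
daily.to_csv("daily_orders.csv", index=False)
```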

   • SQL:

o Advanced SQL skills, including complex joins, window functions, CTEs, and subqueries

o Experience optimizing SQL queries for performance
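To make the expectation concrete, a self-contained sketch of the kind of CTE-plus-window-function query this calls for, run here against an in-memory SQLite database so it executes anywhere; table and column names are hypothetical:

```python
# Sketch: a CTE combined with a window function (running total per
# customer), executed against in-memory SQLite. Requires SQLite 3.25+
# for window-function support.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer_id INT, order_date TEXT, amount REAL);
    INSERT INTO orders VALUES
        (1, '2025-01-01', 120.0),
        (1, '2025-01-05', 80.0),
        (2, '2025-01-02', 200.0);
""")

query = """
WITH customer_orders AS (      -- CTE isolating the rows we need
    SELECT customer_id, order_date, amount
    FROM orders
)
SELECT
    customer_id,
    order_date,
    amount,
    SUM(amount) OVER (         -- window function: running total
        PARTITION BY customer_id
        ORDER BY order_date
    ) AS running_total
FROM customer_orders;
"""

for row in conn.execute(query):
    print(row)
```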