

Dagster Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Dagster Engineer; the contract length and pay rate are unspecified. Key skills required include 3+ years in data engineering, proficiency in Dagster and dbt, strong SQL, and experience with cloud data warehouses.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
August 19, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Cumberland, RI
Skills detailed
#Snowflake #Data Pipeline #Version Control #Data Quality #BigQuery #Data Engineering #Git #BI (Business Intelligence) #GCP (Google Cloud Platform) #Azure #AWS (Amazon Web Services) #Docker #Automated Testing #Redshift #dbt (data build tool) #Data Warehouse #Observability #Scala #Data Governance #Cloud #Data Modeling #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Analysis
Role description
Job Overview
We are looking for a skilled Dagster Software Engineer with strong experience in dbt to design, develop, and maintain modern data pipelines. You will work closely with data engineering and analytics teams to ensure scalable, reliable, and maintainable data workflows.
Key Responsibilities
• Develop and maintain data pipelines using Dagster for orchestration and dbt for transformations (see the sketch after this list).
• Design, implement, and optimize ELT processes to ensure high performance and scalability.
• Build and manage modular, testable, and reusable data models in dbt.
• Integrate data from multiple sources into data warehouses such as Snowflake, BigQuery, or Redshift.
• Implement best practices for version control, CI/CD, and automated testing in data workflows.
• Monitor and troubleshoot data pipeline performance, reliability, and data quality issues.
• Collaborate with data analysts, BI teams, and stakeholders to deliver business-critical data products.
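For orientation, here is a minimal sketch of how Dagster and dbt typically fit together via the dagster-dbt integration; the project directory and manifest path are placeholders for illustration, not details from this posting:

```python
from dagster import AssetExecutionContext, Definitions
from dagster_dbt import DbtCliResource, dbt_assets

# Load every model in the dbt project as a Dagster asset, using the
# manifest that dbt generates when it parses the project.
# "analytics" and the manifest path are hypothetical placeholders.
@dbt_assets(manifest="analytics/target/manifest.json")
def analytics_dbt_assets(context: AssetExecutionContext, dbt: DbtCliResource):
    # Run `dbt build` and stream per-model events back to Dagster,
    # so each dbt model materializes as its own asset.
    yield from dbt.cli(["build"], context=context).stream()

defs = Definitions(
    assets=[analytics_dbt_assets],
    resources={"dbt": DbtCliResource(project_dir="analytics")},
)
```

With this setup, Dagster owns scheduling, retries, and lineage, while dbt keeps owning the SQL transformations, which is the division of labor the responsibilities above describe.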
Required Skills & Qualifications
• 3+ years of experience in data engineering or a similar role.
• Hands-on experience with Dagster for workflow orchestration.
• Strong proficiency in dbt for data modeling and transformations.
• Proficiency with SQL and experience with modern cloud data warehouses (Snowflake, BigQuery, or Redshift).
• Experience with Python for building and customizing pipeline components.
• Familiarity with CI/CD pipelines, Git, and containerization (Docker); a sketch of automated testing for pipeline code follows this list.
• Strong understanding of data modeling principles, ELT/ETL patterns, and data governance best practices.
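As an illustration of automated testing in data workflows, here is a minimal pytest-style sketch that materializes a Dagster asset in-process and asserts on its output; the asset and its data are invented for the example:

```python
from dagster import asset, materialize

@asset
def cleaned_orders() -> list[dict]:
    # Hypothetical transformation: drop rows with missing amounts
    # and cast the rest to float.
    raw = [{"id": 1, "amount": "10.50"}, {"id": 2, "amount": None}]
    return [
        {"id": r["id"], "amount": float(r["amount"])}
        for r in raw
        if r["amount"] is not None
    ]

def test_cleaned_orders():
    # materialize() runs the asset in-process, which makes asset
    # logic unit-testable inside an ordinary CI pipeline.
    result = materialize([cleaned_orders])
    assert result.success
    rows = result.output_for_node("cleaned_orders")
    assert all(isinstance(r["amount"], float) for r in rows)
```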
Preferred Qualifications
• Experience with cloud platforms (AWS, GCP, or Azure).
• Familiarity with analytics engineering concepts and modern data stack tools.
• Knowledge of observability tools for data pipelines (e.g., Great Expectations, Monte Carlo); Dagster's built-in asset checks, sketched after this list, cover a lightweight version of the same idea.
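Along those lines, a minimal sketch of a data-quality check using Dagster's built-in asset checks (not Great Expectations or Monte Carlo themselves); the asset here is hypothetical, echoing the testing example above:

```python
from dagster import AssetCheckResult, asset, asset_check

@asset
def cleaned_orders() -> list[dict]:
    # Hypothetical asset, reusing the shape from the testing sketch.
    return [{"id": 1, "amount": 10.5}]

@asset_check(asset=cleaned_orders)
def amounts_are_positive(cleaned_orders: list[dict]) -> AssetCheckResult:
    # Flag any cleaned order with a non-positive amount; a failed
    # check is reported on the asset without failing the run.
    bad = [r for r in cleaned_orders if r["amount"] <= 0]
    return AssetCheckResult(
        passed=not bad,
        metadata={"bad_row_count": len(bad)},
    )
```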