

Jobs via Dice
DBT Experienced AWS Data Engineer with Databricks
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a DBT Experienced AWS Data Engineer with Databricks in Lebanon, NJ. Contract length is unspecified, with a focus on SQL, DBT, AWS, and Databricks. Experience in CI/CD, Git, and Agile environments is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Lebanon, NJ
-
🧠 - Skills detailed
#Documentation #Data Governance #Cloud #Jira #SQL Queries #Redshift #Deployment #Data Pipeline #Spark (Apache Spark) #Agile #Python #GIT #Delta Lake #Security #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Databases #Tableau #DevOps #Data Engineering #dbt (data build tool) #Databricks #PySpark #Apache Airflow #Data Science #AWS (Amazon Web Services) #Snowflake #Data Bricks #SQL Server #Version Control #S3 (Amazon Simple Storage Service) #SQL (Structured Query Language) #Data Quality #BI (Business Intelligence) #Automation #Data Manipulation #Data Lake #Airflow #Unix #Linux #Kafka (Apache Kafka)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, HAN IT Staffing Inc., is seeking the following. Apply via Dice today!
Role: DBT-experienced AWS Data Engineer with Databricks
Location: Lebanon, NJ
Contract position
Responsibilities
Design and Development of Data Pipelines:
Design, build, and optimize robust ETL/ELT pipelines using AWS services (S3, Glue, Lambda) and the Databricks platform (Spark, Delta Lake, DLT).
Ingest and process large volumes of structured and semi-structured data from various sources (APIs, databases, streaming platforms like Kafka or Kinesis) into a centralized data lake or lakehouse.
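As a rough illustration of this kind of ingestion step, the following is a minimal PySpark sketch that lands semi-structured JSON from S3 into a Delta Lake table; the bucket, path, and table names are hypothetical placeholders, not details from this posting.
```python
# Minimal PySpark sketch of a batch ingestion step: land semi-structured JSON
# from S3 into a Delta Lake ("bronze") table. All names below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw_ingest").getOrCreate()

# Read semi-structured source data (e.g. API extracts dropped into S3).
raw = (
    spark.read
    .option("multiLine", "true")
    .json("s3://example-raw-bucket/orders/2026/04/")  # hypothetical path
)

# Stamp ingestion metadata before landing in the bronze layer of the lakehouse.
bronze = raw.withColumn("_ingested_at", F.current_timestamp())

# Append into a Delta table registered in the metastore.
(
    bronze.write
    .format("delta")
    .mode("append")
    .saveAsTable("bronze.orders_raw")  # hypothetical table name
)
```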
Data Transformation and Modeling:
Develop and maintain data models (e.g., star/snowflake schemas, medallion architecture) optimized for analytics and BI tools using dbt (Data Build Tool).
Write complex and efficient SQL queries and Python/PySpark code for data manipulation, transformation, and validation within the Databricks environment.
Implement data quality checks, tests, and documentation as part of the dbt workflow, enforcing data governance and security standards.
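A short PySpark sketch of the transformation-plus-validation pattern described above (in dbt the equivalent checks would typically be expressed as schema tests); the table and column names are illustrative assumptions.
```python
# Illustrative PySpark transformation with a basic data-quality gate, mirroring
# the not-null / uniqueness checks a dbt workflow would enforce.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

orders = spark.table("bronze.orders_raw")  # hypothetical bronze table

# Derive a cleaned, typed silver-layer version of the data.
silver = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
)

# Data-quality check: fail the job rather than publish bad data.
null_keys = silver.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows have a null order_id; aborting load")

silver.write.format("delta").mode("overwrite").saveAsTable("silver.orders")
```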
Orchestration and Automation:
Orchestrate and monitor data workflows using Databricks Jobs or external tools like AWS MWAA (Managed Workflows for Apache Airflow).
Implement CI/CD pipelines and version control (Git) for all data engineering artifacts (code, configurations, dbt models) to ensure reliable and consistent deployments.
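For orientation, a sketch of an Airflow DAG of the kind that could run on AWS MWAA, triggering an existing Databricks job and then running dbt. It assumes the apache-airflow-providers-databricks package, a configured Databricks connection, and a recent Airflow 2.x release; the connection ID, job ID, and dbt paths are hypothetical.
```python
# Sketch of an Airflow DAG (e.g. on AWS MWAA) that triggers a Databricks job
# and then runs dbt. IDs and paths below are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="orders_daily_pipeline",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Kick off a pre-defined Databricks job that loads the bronze/silver layers.
    load_lakehouse = DatabricksRunNowOperator(
        task_id="load_lakehouse",
        databricks_conn_id="databricks_default",  # hypothetical connection ID
        job_id=12345,                             # hypothetical Databricks job ID
    )

    # Run dbt models and tests once the upstream load has finished.
    run_dbt = BashOperator(
        task_id="run_dbt",
        bash_command="dbt build --project-dir /usr/local/airflow/dbt --profiles-dir /usr/local/airflow/dbt",
    )

    load_lakehouse >> run_dbt
```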
Performance Optimization and Operations:
Monitor, troubleshoot, and resolve issues in production data pipelines and environments to ensure high performance, reliability, and cost-efficiency.
Tune Spark jobs and optimize Delta Lake features (Z-Order, partitioning) to handle growing data volumes and complexity.
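A brief sketch of the routine Delta Lake maintenance this refers to, run from a Databricks notebook or job; the table and column names are illustrative.
```python
# Routine Delta Lake maintenance on Databricks: compact small files and
# Z-order a large table on a commonly filtered column to improve data skipping.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact files and co-locate rows sharing customer_id for selective queries.
spark.sql("OPTIMIZE silver.orders ZORDER BY (customer_id)")

# Remove data files no longer referenced by the table (default retention applies).
spark.sql("VACUUM silver.orders")
```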
Collaboration and Support:
Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver actionable insights.
Provide expertise and guidance on data best practices, promoting a culture of data quality and governance.
Must Have skills
SQL; dbt Core and dbt Cloud; AWS (Redshift)
Databricks with AWS; SQL Server DB
Stonebranch scheduling tool
Should understand CI/CD and Git
Work in an Agile environment with JIRA
Other skills required / good to have:
Tableau experience
Harness DevOps
Proficient in Linux/Unix environments