

Databricks Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer with a contract length of "unknown," offering a pay rate of "unknown," and is remote. Key skills include Azure Data Factory, Databricks, and dimensional modeling. Requires 5–7 years of data engineering experience.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
December 4, 2025
Duration
Unknown
Location
Unknown
Contract
Unknown
Security
Unknown
Location detailed
Spring, TX
Skills detailed
#Python #Continuous Deployment #Data Quality #Version Control #Spark (Apache Spark) #Data Warehouse #SQL (Structured Query Language) #Microsoft Power BI #Scala #Synapse #dbt (data build tool) #Infrastructure as Code (IaC) #Data Pipeline #Visualization #Spark SQL #ADF (Azure Data Factory) #Dimensional Data Models #Azure #Delta Lake #Data Architecture #Azure DevOps #Azure Data Factory #Deployment #DevOps #ETL (Extract, Transform, Load) #GIT #Dimensional Modelling #Azure Synapse Analytics #Data Integration #Data Modeling #BI (Business Intelligence) #Data Engineering #Databricks #Automated Testing #SQL Server
Role description
• NO C2C
Job Summary:
We are looking for a highly motivated Data Engineer with a passion for data modeling, data architecture patterns, and modern engineering practices. In this role, you will be instrumental in designing and building scalable, reliable, and high-performance data solutions that power our analytics and business intelligence platforms.
You will work closely with senior data engineers and cross-functional teams to implement robust data models and pipelines using cutting-edge tools in the Azure ecosystem. This is a unique opportunity to shape the foundation of our data infrastructure and contribute to the evolution of our data products.
Duties/Responsibilities:
• Design & Implement Data Models: Develop and maintain dimensional data models and semantic layers that support enterprise reporting and analytics. Apply best practices in data warehousing and data architecture.
• Build Scalable Data Pipelines: Create modular, reusable, and pattern-driven ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and Databricks.
• Transform & Structure Data: Use Spark SQL, DBT, and Azure Synapse to transform raw data into structured, analytics-ready formats. Implement data quality checks and schema evolution strategies (see the sketch after this list).
• Leverage Azure Technologies: Utilize Azure services such as SQL Server, Synapse Analytics, Databricks, and Delta Lake to build and manage scalable data infrastructure and data pipelines.
• Enable CI/CD & DevOps: Integrate data workflows with Azure DevOps and Git for version control, automated testing, and continuous deployment.
• Support Reporting & Visualization: Collaborate with analysts and BI developers to ensure data models are optimized for tools like Power BI.
• Troubleshoot & Optimize: Monitor and resolve issues in data pipelines and models. Ensure high availability and performance of data systems.
• Stay Ahead of the Curve: Continuously learn and apply the latest in data engineering best practices, tools, and technologies.
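For illustration only, a minimal PySpark sketch of the transform-and-check pattern the list above describes, assuming a Databricks runtime with Delta Lake; the table and column names (raw_sales, customer_key, analytics.fact_orders) are hypothetical, not part of the role:

# Minimal sketch: transform raw data into an analytics-ready fact table,
# gate on a data quality check, and persist as Delta.
# All table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read raw data already landed in the metastore (e.g. by ADF or Auto Loader).
raw = spark.table("raw_sales")

# Shape the raw feed into a structured, analytics-ready fact table.
fact = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .withColumn("net_amount", F.col("gross_amount") - F.col("discount"))
       .select("order_id", "customer_key", "order_date", "net_amount")
)

# Simple data quality gate: fail the run if any business key is null.
null_keys = fact.filter(F.col("customer_key").isNull()).count()
if null_keys > 0:
    raise ValueError(f"{null_keys} rows have a null customer_key")

# Persist as a Delta table so downstream consumers (DBT, Power BI) can use it.
fact.write.format("delta").mode("overwrite").saveAsTable("analytics.fact_orders")

In practice, logic like this would typically live in a Databricks notebook or a DBT model and be orchestrated by Azure Data Factory or Synapse Pipelines, per the stack named above.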
Required Skills/Abilities:
• 5–7 years of experience in data engineering or related roles.
• Proven expertise in data modeling (especially dimensional modeling) and data warehouse design.
• Hands-on experience with Azure Data Factory, Synapse Analytics, Databricks, and DBT.
• Strong SQL and Spark SQL skills.
• Experience with CI/CD, DevOps, and version control (Git, Azure DevOps).
• Familiarity with Power BI or similar visualization tools.
• Solid understanding of data integration, data quality, and data transformation best practices.
• Strong problem-solving skills and a collaborative mindset.
Desired Skills/Abilities:
• Experience with Azure Synapse Analytics
• Experience with Infrastructure as Code
• Experience with Databricks, DLT, and Auto Loader (a minimal Auto Loader sketch follows this list)
• Experience with Unity Catalog
• Familiarity with Delta Lake core and Parquet files
• Experience with DBT (Data Build Tool)
• Working understanding of Spark Pools and/or Python
• This role requires extensive experience in data modeling, specifically dimensional modeling
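As a companion to the Databricks items above, here is a minimal sketch of the Auto Loader pattern, assuming a Databricks runtime; all paths, checkpoint locations, and table names are placeholders:

# Minimal Databricks Auto Loader sketch: incrementally ingest cloud files
# into a Delta table. Paths and table names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("cloudFiles")          # Auto Loader source
         .option("cloudFiles.format", "parquet")   # raw files are Parquet
         .option("cloudFiles.schemaLocation", "/mnt/schemas/orders")
         .load("/mnt/landing/orders/")
)

(
    stream.writeStream
          .option("checkpointLocation", "/mnt/checkpoints/orders")
          .trigger(availableNow=True)              # batch-style incremental run
          .toTable("bronze.orders")
)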
Education and Experience:
At least 5–7 years' related experience required.