Realign LLC

Databricks Engineer-5

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Databricks Engineer on a contract basis, offering a remote work location. Required skills include Python, SQL, PySpark, Azure SQL, and ETL/ELT design. Experience with Docker and Kubernetes is essential. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
Unknown
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
November 5, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
California
-
🧠 - Skills detailed
#ML (Machine Learning) #Schema Design #Data Ingestion #Automation #Databricks #Scala #SQL (Structured Query Language) #DevOps #Azure Data Platforms #Data Architecture #SQL Server #ETL (Extract, Transform, Load) #PySpark #Python #Cloud #Data Science #Data Quality #Azure SQL #Spark (Apache Spark) #Data Pipeline #Deployment #Monitoring #SQL Queries #Docker #Azure SQL Database #Azure #Data Engineering #Kubernetes #Databases #Data Modeling #Data Processing
Role description
Job Type: Contract
Job Category: IT
Job Title: Databricks Engineer
Location: Remote Contract

About the Role
We are looking for a skilled Databricks Engineer to join our cloud data engineering team. In this role, you will be responsible for building, optimizing, and maintaining data pipelines and processing systems on the Azure and Databricks ecosystem. The ideal candidate has strong hands-on experience with Python, SQL, PySpark, and Azure SQL environments, as well as a solid understanding of ETL/ELT design patterns, data modeling, and cloud performance optimization. You will collaborate with architects, analysts, and data scientists to ensure data is efficiently transformed, processed, and made available for analytics, reporting, and machine learning use cases.

Key Responsibilities
- Develop and maintain scalable, high-performance data pipelines using Databricks (Python, SQL, PySpark).
- Design and implement ETL/ELT workflows to move and transform data across Azure data platforms.
- Work with Azure SQL Server environments, including Managed Instances, Azure SQL Databases, and SQL Server VMs, to manage data ingestion and integration.
- Design data models and implement best practices for schema design, data quality, and lineage tracking.
- Use Docker and Azure Kubernetes Service (AKS) to automate, containerize, and deploy scalable data processing workloads.
- Optimize Databricks cluster configurations, SQL queries, and data transformation logic for cost and performance efficiency.
- Collaborate with data architects and business stakeholders to design and implement robust data architecture solutions.
- Support performance tuning, troubleshooting, and proactive monitoring of data pipelines.
- Contribute to CI/CD processes and data pipeline automation to enhance deployment efficiency.

Required Skills
DEVOPS ENGINEER
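The ETL/ELT workflows described above follow the standard extract, transform, load pattern. As a minimal, dependency-free sketch of that pattern (plain Python dicts and lists stand in for the PySpark DataFrames and Databricks tables the role actually uses; all names and the tax-derivation rule are hypothetical):

```python
# Minimal ETL sketch: extract raw rows, transform (type-cast, apply a
# data-quality rule, derive a column), then load into a target store.
# A Python list stands in for a target table; on Databricks these steps
# would be PySpark DataFrame transformations instead.

def extract(raw_rows):
    """Extract: parse raw CSV-like rows into id/amount records."""
    return [dict(zip(("id", "amount"), row.split(","))) for row in raw_rows]

def transform(records):
    """Transform: cast types, drop unparseable rows, derive a field."""
    out = []
    for r in records:
        try:
            amount = float(r["amount"])
        except (KeyError, ValueError):
            continue  # data-quality rule: skip rows that fail the cast
        out.append({"id": r["id"], "amount": amount,
                    "taxed": round(amount * 1.2, 2)})  # hypothetical derived column
    return out

def load(records, target):
    """Load: append records into the target store; return rows written."""
    target.extend(records)
    return len(records)

if __name__ == "__main__":
    table = []
    rows = ["a1,10.0", "a2,oops", "a3,5.5"]
    written = load(transform(extract(rows)), table)
    print(written, table)
```

The same three-stage shape carries over to PySpark: `extract` becomes `spark.read`, `transform` becomes chained DataFrame operations, and `load` becomes a `write` to a Delta table.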