

Qwiktechy LLC
Cloud Data Engineer – Azure Databricks & Data Factory
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Cloud Data Engineer with 12-15 years of experience, focusing on Azure Databricks and Data Factory. It is a remote position for 12+ months, requiring expertise in cloud data engineering, ETL pipelines, and data integration.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 2, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#DevOps #Scala #Azure Databricks #Data Governance #Databricks #Azure Data Factory #GIT #Azure #PySpark #Data Engineering #Spark (Apache Spark) #Data Quality #Documentation #Synapse #Migration #ETL (Extract, Transform, Load) #Data Lake #Databases #Kafka (Apache Kafka) #Data Pipeline #ADF (Azure Data Factory) #SQL (Structured Query Language) #Azure Stream Analytics #Metadata #Cloud
Role description
Job Title: Senior Cloud Data Engineer – Azure Databricks & Data Factory
Location: Remote
Duration: 12+ months
About the Role:
We’re looking for a Senior Cloud Data Engineer with 12-15 years of experience to lead the design and development of large-scale data pipelines using Azure Databricks and Azure Data Factory. If you have a deep understanding of cloud data engineering and a passion for optimizing complex data workflows, we want you on our team.
Key Responsibilities:
• Lead the design and development of scalable ETL pipelines using Azure Databricks and Azure Data Factory.
• Drive the migration of legacy ETL systems to cloud-based, Databricks-powered solutions.
• Integrate and manage data from APIs, relational databases, and cloud sources into Azure Data Lake and Synapse.
• Collaborate with cross-functional teams to define and implement data models that support strategic business goals.
• Ensure data quality, implement robust validation processes, and maintain detailed metadata and lineage documentation.
What We’re Looking For:
• 12-15 years of experience in data engineering, with significant expertise in cloud ecosystems (preferably Azure).
• Expertise in Azure Databricks, PySpark, Spark SQL, and Azure Data Factory.
• Strong background in data warehousing, SQL optimization, and performance tuning.
• Proven track record of integrating complex data sources (APIs, relational, cloud systems).
• Experience with DevOps, CI/CD, Git, and data governance best practices.
Bonus Points:
• Azure Data Engineer Associate certification.
• Experience with real-time streaming (e.g., Kafka, Azure Stream Analytics).
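To give candidates a concrete feel for the Databricks-side work described above, here is a minimal PySpark sketch of a validate-and-load step: read raw data landed in the data lake, apply a simple data-quality rule, and write validated rows to a curated zone. It is illustrative only; the paths, table names, and validation rule are hypothetical and not part of this posting.

# Illustrative sketch only -- paths, column names, and the quality rule are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_validation").getOrCreate()

# Read raw data landed in the lake (e.g., by an ADF copy activity).
raw = spark.read.format("delta").load("/mnt/datalake/raw/orders")

# Basic data-quality rule: keep rows with a non-null order id and a positive amount.
valid = raw.filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
rejected = raw.subtract(valid)

# Write validated rows to the curated zone; quarantine the rest for review.
valid.write.format("delta").mode("overwrite").save("/mnt/datalake/curated/orders")
rejected.write.format("delta").mode("append").save("/mnt/datalake/quarantine/orders")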