

Dexian
Azure Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This is a 6-month remote contract for an Azure Data Engineer; the pay rate is listed as "TBD." Candidates must have 5+ years of experience in data engineering, expertise in Azure services, and strong skills in SQL, Python, and Spark.
Country: United States
Currency: $ USD
Day rate: 440
Date: October 11, 2025
Duration: More than 6 months
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #Data Security #Disaster Recovery #Cloud #Database Architecture #Data Engineering #Azure SQL Database #Azure cloud #Computer Science #Azure Databricks #Azure #Documentation #Data Architecture #Azure Data Factory #Data Processing #Security #Python #Automation #Databricks #Scala #Spark (Apache Spark) #ADF (Azure Data Factory) #Compliance #Schema Design #Redis #Data Pipeline #SQL (Structured Query Language) #Azure SQL #ETL (Extract, Transform, Load) #Databases #Azure Cosmos DB #Monitoring #Data Governance
Role description
We are looking for candidates with strong technical expertise to fill this role. Below are the details of the position:
Industry:
Job Title: Azure Data Engineer
Mode of Job: Remote
Contract: 6 months
Job Summary:
We are seeking a highly skilled Azure Data Engineer / Architect with deep expertise in designing, implementing, and optimizing OLTP systems and cloud-based data architectures. The ideal candidate will have strong technical proficiency across Azure data services, distributed systems, and performance-driven database solutions that support large-scale, real-time applications.
Key Responsibilities:
• OLTP System Design: Architect, implement, and support high-performance OLTP (Online Transaction Processing) systems, ensuring scalability, consistency, and reliability.
• Azure Data Ecosystem:
  • Design and maintain data models using Azure Cosmos DB for distributed, low-latency workloads.
  • Manage Azure SQL databases, including schema design, query optimization, and data security.
  • Develop and orchestrate data pipelines in Azure Data Factory for ETL/ELT workflows.
  • Utilize Azure Databricks for large-scale data processing, analytics, and collaborative development using Spark (see the illustrative Spark sketch after this list).
  • Implement Redis-based caching layers to enhance application responsiveness and throughput (see the caching sketch after this list).
• Collaborate with product, application, and analytics teams to align database architecture with business and data needs.
• Establish best practices for data governance, performance tuning, and disaster recovery.
• Monitor, troubleshoot, and optimize system performance and cost across Azure services.
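As a rough illustration of the Spark-based processing this role involves on Azure Databricks, the minimal PySpark sketch below reads raw order events, filters them, and writes an aggregated result. The paths, dataset, and column names (orders, status, customer_id, amount) are hypothetical assumptions for illustration, not details from the posting.

# Minimal PySpark sketch of a Databricks-style ETL step (illustrative only).
# All paths and column names are hypothetical assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

# Extract: read raw order events from a hypothetical landing path.
raw_orders = spark.read.json("/mnt/raw/orders/")

# Transform: keep completed orders and aggregate revenue per customer.
daily_revenue = (
    raw_orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("customer_id")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Load: write the curated result for downstream analytics (hypothetical path).
daily_revenue.write.mode("overwrite").parquet("/mnt/curated/daily_revenue/")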
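The Redis caching layer mentioned above could follow a simple cache-aside pattern, sketched below in Python. The connection settings, key format, TTL, and the database lookup are illustrative assumptions rather than details from the posting.

# Minimal cache-aside sketch for a Redis caching layer (illustrative only).
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 300  # hypothetical five-minute expiry

def query_profile_from_db(customer_id: str) -> dict:
    # Placeholder for an Azure SQL or Cosmos DB lookup.
    return {"customer_id": customer_id, "tier": "standard"}

def get_customer_profile(customer_id: str) -> dict:
    """Return a customer profile, serving it from Redis when cached."""
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: skip the database
    profile = query_profile_from_db(customer_id)
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(profile))
    return profile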
Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• 5+ years of experience in data engineering or cloud database architecture, with proven hands-on expertise in OLTP systems.
• Advanced knowledge of Azure cloud services, including Cosmos DB, Azure SQL, Data Factory, Databricks, and Redis.
• Strong understanding of ETL processes, distributed systems, and data performance optimization.
• Proficiency in SQL, Python, and Spark for data processing and automation.
• Experience implementing security, compliance, and monitoring in cloud data environments.
• Excellent problem-solving, documentation, and communication skills.