

Azure Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer on a long-term remote contract, offering a competitive pay rate. Candidates should have 4+ years in data engineering, strong Azure/AWS experience, and expertise in ETL processes, big data technologies, and cloud security practices.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 8, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: San Francisco, CA
Skills detailed: #Terraform #Java #"ETL (Extract, Transform, Load)" #Synapse #Programming #Data Lake #Security #Scala #Data Modeling #Big Data #Cloud #Delta Lake #Kafka (Apache Kafka) #Azure ADLS (Azure Data Lake Storage) #AWS S3 (Amazon Simple Storage Service) #Python #Data Management #ADLS (Azure Data Lake Storage) #Azure Data Factory #Hadoop #Airflow #Data Pipeline #Kubernetes #ADF (Azure Data Factory) #Data Storage #S3 (Amazon Simple Storage Service) #DevOps #Azure #Apache Spark #Databricks #ML (Machine Learning) #Docker #Computer Science #Data Engineering #IAM (Identity and Access Management) #Storage #Deployment #Network Security #Infrastructure as Code (IaC) #Data Ingestion #Spark (Apache Spark) #Luigi #AWS (Amazon Web Services) #Data Processing #Data Governance #Metadata
Role description
Job Title: Azure Data Engineer
Location: Remote (San Francisco, CA preferred)
Contract: Long-term
Job Summary:
We are seeking a highly skilled Azure Data Engineer to join our data engineering team. The ideal candidate will have strong experience in designing, building, and maintaining scalable data pipelines and infrastructure on cloud platforms like Azure and AWS. You will be responsible for developing robust ETL processes, working with big data technologies such as Hadoop and Spark, and contributing to system design for real-time analytics and fault-tolerant architectures.
Key Responsibilities:
• Design, build, and maintain ETL/ELT pipelines using modern orchestration tools like Luigi, Azure Data Factory, or similar (a brief illustrative sketch follows this list).
• Develop scalable data lakes and data ingestion frameworks leveraging Azure Data Lake Storage, Databricks, or AWS S3.
• Work closely with cross-functional teams to support real-time data processing and streaming analytics solutions.
• Implement and optimize big data solutions using technologies like Hadoop, Apache Spark, and Delta Lake.
• Design and implement secure, scalable cloud infrastructure using Azure (preferred) and AWS services for data storage and processing.
• Ensure robust access control and security policies across cloud resources using RBAC, IAM, network security groups, etc.
• Participate in system architecture discussions focused on fault tolerance, high availability, and performance optimization.
• Automate infrastructure and deployments using CI/CD practices and Infrastructure as Code (IaC) tools (e.g., Terraform, ARM templates).
• Monitor and troubleshoot pipeline and infrastructure issues to ensure data reliability and integrity.
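To make the first responsibility concrete, here is a minimal sketch of a two-step extract/transform pipeline in Luigi, one of the orchestration tools named above. The task names, file paths, and run date are hypothetical, and a production pipeline would typically read from and write to cloud storage such as ADLS or S3 rather than local files.

import datetime
import json

import luigi


class ExtractOrders(luigi.Task):
    """Land one day of raw records in a staging area (hypothetical source and path)."""
    run_date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget(f"staging/orders_{self.run_date}.json")

    def run(self):
        # A real task would pull from an API, database, or an ADLS/S3 container.
        with self.output().open("w") as f:
            json.dump([{"order_id": 1, "amount": 42.0}], f)


class TransformOrders(luigi.Task):
    """Reshape the staged records; depends on ExtractOrders for the same date."""
    run_date = luigi.DateParameter()

    def requires(self):
        return ExtractOrders(run_date=self.run_date)

    def output(self):
        return luigi.LocalTarget(f"curated/orders_{self.run_date}.json")

    def run(self):
        with self.input().open("r") as src, self.output().open("w") as dst:
            rows = json.load(src)
            json.dump([{**r, "amount_cents": int(r["amount"] * 100)} for r in rows], dst)


if __name__ == "__main__":
    # local_scheduler is convenient for a smoke test; a deployment would use the central scheduler.
    luigi.build([TransformOrders(run_date=datetime.date(2025, 8, 8))], local_scheduler=True)

Azure Data Factory or Airflow could express the same dependency graph; the common thread is declaring each step's inputs and outputs so the orchestrator can retry or backfill individual tasks.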
Required Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
• 4+ years of experience in data engineering, with at least 2 years on Azure and/or AWS.
• Strong programming skills in Python, Scala, or Java for building ETL workflows.
• Hands-on experience with Luigi, Airflow, or other pipeline orchestration tools.
• Expertise in Azure Data Services (Data Lake, Synapse, Data Factory, Databricks, Event Hubs).
• Solid understanding of cloud access controls, identity management, and security best practices.
• Deep knowledge of big data technologies: Hadoop, Spark, Hive, and related ecosystems.
• Experience in real-time analytics and streaming data processing using Kafka, Spark Streaming, or similar (see the sketch after this list).
• Proven track record of contributing to the design of scalable, fault-tolerant architectures in cloud environments.
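As an illustration of the streaming qualification above, the sketch below uses Spark Structured Streaming to read a Kafka topic and append it to a Delta Lake table. The broker address, topic name, and paths are assumptions, and the job presumes the Kafka and Delta Lake connectors are available on the cluster (for example on Databricks).

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Read the raw event stream; the broker and topic names here are placeholders.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
)

# Kafka delivers keys and values as binary, so cast them before downstream parsing.
parsed = events.select(
    F.col("key").cast("string").alias("key"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# Append to a Delta table; the checkpoint keeps the stream fault tolerant across restarts.
query = (
    parsed.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders")
    .outputMode("append")
    .start("/mnt/delta/orders")
)
query.awaitTermination()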
Preferred Skills:
• Azure certifications (e.g., Azure Data Engineer Associate, Azure Solutions Architect).
• Experience with data modeling, data governance, and metadata management.
• Familiarity with containerization (Docker, Kubernetes) and DevOps practices.
• Exposure to machine learning pipelines or analytics platforms is a plus.