

Azure Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Engineer on a contract basis for 3 months, offering competitive pay. Candidates must have strong skills in Python, PySpark, SQL, and Azure Data Services, with experience in building scalable data pipelines and ETL processes.
Country
United States
Currency
$ USD
Day rate
600
Date discovered
September 11, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
New Jersey, United States
Skills detailed
#Big Data #Data Modeling #Kafka (Apache Kafka) #Python #Cloud #Data Pipeline #Storage #Data Lake #Azure cloud #Spark SQL #Data Transformations #Data Engineering #BI (Business Intelligence) #Spark (Apache Spark) #Data Governance #Agile #ETL (Extract, Transform, Load) #Data Science #Data Processing #Azure #PySpark #Security #Data Architecture #Scala #Databricks #SQL (Structured Query Language) #Complex Queries #Datasets #Delta Lake #Data Analysis #Programming #Data Security #Synapse #Compliance
Role description
Job Title: Azure Data Engineer
Location: Onsite – New Jersey, 3 days/week
Employment Type: Contract
US Citizens Only
Job Description:
We are seeking a skilled Data Engineer with strong expertise in building and optimizing scalable data pipelines and solutions using Python, PySpark, SQL, Databricks, and Azure. The candidate will be responsible for designing, implementing, and maintaining robust data architectures to support advanced analytics and business intelligence requirements.
Key Responsibilities:
• Design, develop, and optimize ETL/ELT data pipelines using PySpark and SQL.
• Work on Databricks to build scalable data solutions and manage big data workflows.
• Integrate, clean, and transform structured and unstructured datasets from multiple sources.
• Implement best practices for data modeling, storage, and performance tuning.
• Collaborate with data analysts, data scientists, and business stakeholders to deliver high-quality data solutions.
• Ensure data security, compliance, and governance standards on Azure cloud environments.
• Monitor and troubleshoot data pipelines to maintain high reliability and performance.
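The first two responsibilities revolve around ETL pipelines. As a rough illustration of the extract/transform/load shape such work takes, here is a minimal plain-Python sketch; on the job this would be PySpark DataFrames on Databricks, and the record fields and cleaning rules below are hypothetical.

```python
# Minimal sketch of an ETL pipeline's three stages in plain Python.
# Real pipelines for this role would use PySpark on Databricks; the
# field names and cleaning rules here are illustrative only.

def extract(rows):
    """Pull raw records from a source (here: an in-memory list)."""
    return list(rows)

def transform(records):
    """Clean and reshape: drop rows missing an id, normalize names,
    and cast the amount field to float."""
    cleaned = []
    for r in records:
        if not r.get("id"):
            continue  # discard malformed rows
        cleaned.append({
            "id": r["id"],
            "name": r.get("name", "").strip().lower(),
            "amount": float(r.get("amount", 0)),
        })
    return cleaned

def load(records, sink):
    """Write the transformed records to a destination (here: a list)."""
    sink.extend(records)
    return len(records)

# Run the three stages end to end on a small raw batch.
raw = [
    {"id": 1, "name": "  Alice ", "amount": "10.5"},
    {"id": None, "name": "bad row", "amount": "0"},
    {"id": 2, "name": "Bob", "amount": "3"},
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
# loaded -> 2 (the malformed row is dropped)
```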
Required Skills & Qualifications:
• Strong programming skills in Python and PySpark.
• Proficiency in SQL for complex queries, optimization, and data transformations.
• Hands-on experience with Databricks and distributed data processing.
• Experience working with Azure Data Services (Azure Data Lake, Synapse, Data Factory, etc.).
• Good understanding of data warehousing concepts, ETL pipelines, and data modeling.
• Strong problem-solving skills and ability to work in collaborative, agile teams.
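To make the SQL requirement concrete, the kind of join-plus-aggregation transformation the role calls for can be sketched against an in-memory SQLite database. The tables and columns are hypothetical; in this role the same query would typically run as Spark SQL or in Synapse.

```python
# Hedged sketch of a typical SQL data transformation (join + aggregate
# + filter), run against in-memory SQLite. Table and column names are
# hypothetical; on the job this would be Spark SQL or Synapse.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (id INTEGER, region TEXT);
    INSERT INTO customers VALUES (1, 'east'), (2, 'west');
    INSERT INTO orders VALUES (1, 1, 10.0), (2, 1, 5.0), (3, 2, 7.5);
""")

# Total order amount per region, keeping only regions above a threshold.
query = """
    SELECT c.region, SUM(o.amount) AS total
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    GROUP BY c.region
    HAVING total > 6
    ORDER BY total DESC
"""
rows = conn.execute(query).fetchall()
# rows -> [('east', 15.0), ('west', 7.5)]
```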
Preferred Qualifications:
• Experience with CI/CD pipelines for data solutions.
• Familiarity with Delta Lake, Kafka, or other real-time data streaming tools.
• Knowledge of data governance, security, and compliance frameworks.
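Of the preferred tools, Delta Lake is best known for its MERGE (upsert) semantics: update rows that match on a key, insert the ones that don't. A minimal sketch of that behavior, modeled with a plain dict keyed by record id (the field names are hypothetical; real code would use Delta's MERGE INTO on Databricks):

```python
# Hedged sketch of Delta Lake-style MERGE (upsert) semantics, modeled
# with a plain dict keyed by record id. Field names are hypothetical;
# real code would use MERGE INTO on a Delta table in Databricks.

def merge_upsert(target, updates):
    """Update rows whose id already exists; insert rows that are new."""
    for row in updates:
        target[row["id"]] = {k: v for k, v in row.items() if k != "id"}
    return target

table = {1: {"name": "alice"}, 2: {"name": "bob"}}
batch = [{"id": 2, "name": "bobby"}, {"id": 3, "name": "carol"}]
merge_upsert(table, batch)
# table -> {1: {'name': 'alice'}, 2: {'name': 'bobby'}, 3: {'name': 'carol'}}
```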