BNETAL

Azure Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer on a contract basis, paying $60.00 - $65.00 per hour. It requires 6+ years of data engineering experience; expertise in Azure Data Factory, Azure Databricks, PySpark, and SQL; and an Azure Certification. Remote work.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
520
🗓️ - Date
October 10, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Remote
🧠 - Skills detailed
#Azure #Data Transformations #Spark (Apache Spark) #Scala #Agile #Talend #Migration #Cloud #ADF (Azure Data Factory) #ETL (Extract, Transform, Load) #Project Management #DevOps #Databricks #Java #Unit Testing #Documentation #Data Quality #Data Lake #PySpark #Data Analysis #Azure Databricks #Azure Data Factory #Quality Assurance #Data Engineering #Data Pipeline #Data Processing #Delta Lake #Data Migration #SQL (Structured Query Language)
Role description
We are looking for a seasoned Azure Data Engineer with strong expertise in Azure Data Factory, Azure Databricks, and PySpark who also brings hands-on experience in data migration and validation. The ideal candidate will have a proven track record of migrating Java-based data processing jobs to Azure Databricks using PySpark and of validating data between source and target systems using SQL and PySpark.

Key Responsibilities:
• Design, develop, and maintain scalable data pipelines using Azure Data Factory and Azure Databricks.
• Migrate existing Java-based ETL jobs to PySpark on Azure Databricks.
• Implement standard data quality checks using SQL and PySpark (e.g., null checks, duplicates, referential integrity, record counts); a minimal sketch of such checks appears at the end of this listing.
• Write efficient, optimized PySpark code for large-scale data processing.
• Collaborate with data analysts, architects, and business stakeholders to understand data requirements.
• Perform unit testing and validation of data transformations and ensure data quality.
• Develop and maintain automated validation frameworks for continuous data quality assurance.
• Monitor and troubleshoot data pipelines and workflows in production environments.
• Document technical solutions, migration strategies, and validation procedures.

Required Skills & Qualifications:
• Minimum 6 years of experience in data engineering or related roles.
• Strong hands-on experience with Azure Data Factory, Azure Databricks, and PySpark.
• Proven experience migrating Java-based data jobs to PySpark on Azure Databricks.
• Solid understanding of SQL and experience in data validation and testing.
• Familiarity with standard data quality checks and validation frameworks.
• Experience working in cloud-based environments, preferably Azure.
• Strong problem-solving skills and the ability to work independently or in a team.
• Excellent communication and documentation skills.

Preferred Qualifications:
• Experience with CI/CD pipelines and DevOps practices in data engineering.
• Experience working with Talend Studio.
• Knowledge of Delta Lake, Spark optimization techniques, and data lake architectures.
• Exposure to Agile methodologies and project management tools.

Job Type: Contract
Pay: $60.00 - $65.00 per hour
Education: Bachelor's (Preferred)
Experience:
• Azure Data Factory: 3 years (Required)
• Azure Databricks: 3 years (Required)
• PySpark: 5 years (Required)
• SQL: 5 years (Required)
• Java: 3 years (Required)
Language: Fluent English (Required)
License/Certification: Azure Certification (Required)
Work Location: Remote
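For context on the validation work described above, here is a minimal PySpark sketch of the standard data quality checks the responsibilities name (record counts, null checks, duplicate detection, and a key-level source/target comparison). The table names (source_db.orders, target_db.orders) and key column (order_id) are illustrative assumptions, not details from this posting.

```python
# Minimal sketch of source-vs-target validation checks in PySpark.
# Table names ("source_db.orders", "target_db.orders") and the key
# column ("order_id") are illustrative assumptions, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("migration-validation").getOrCreate()

source = spark.table("source_db.orders")  # hypothetical pre-migration table
target = spark.table("target_db.orders")  # hypothetical migrated table on Databricks

# 1. Record counts: totals should match after migration.
src_count, tgt_count = source.count(), target.count()
print(f"record counts: source={src_count}, target={tgt_count}, match={src_count == tgt_count}")

# 2. Null checks: no nulls allowed in the (assumed) key column.
null_keys = target.filter(F.col("order_id").isNull()).count()
print(f"null keys in target: {null_keys}")

# 3. Duplicate detection: each key should appear exactly once.
dupes = target.groupBy("order_id").count().filter(F.col("count") > 1)
print(f"duplicate keys in target: {dupes.count()}")

# 4. Key-level diff: keys present on only one side of the migration.
missing_in_target = source.select("order_id").subtract(target.select("order_id"))
extra_in_target = target.select("order_id").subtract(source.select("order_id"))
print(f"missing in target: {missing_in_target.count()}, extra in target: {extra_in_target.count()}")
```

In practice, checks like these are typically parameterized per table and folded into the automated validation framework the role calls for, with results written to an audit table or log rather than printed.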