

Simplify Recruiting
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 5-10 years of experience, specializing in Microsoft Fabric and PySpark. It is a remote contract position focused on Azure Synapse Analytics, CI/CD processes, and advanced SQL, and it calls for Microsoft certification.
Country: United States
Currency: $ USD
Day rate: 520
Date: December 3, 2025
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #Data Engineering #Spark SQL #Azure Data Platforms #Delta Lake #ETL (Extract, Transform, Load) #PostgreSQL #Programming #Azure DevOps #Azure #Azure Data Factory #Databricks #Deployment #GitHub #Data Quality #Data Pipeline #Python #Azure Synapse Analytics #Synapse #Dataflow #Spark (Apache Spark) #ADLS (Azure Data Lake Storage) #SQL (Structured Query Language) #Version Control #Database Systems #ADF (Azure Data Factory) #Azure ADLS (Azure Data Lake Storage) #PySpark #Data Lake #Azure SQL #Storage #Data Processing #Databases #DevOps #Cloud
Role description
Title: Data Engineer
Location: USA (Remote)
Experience range: 5 to 10 years
Data Engineer - Microsoft Fabric & PySpark (Synapse) Specialist
Required Technical Skills:
Microsoft Fabric: Strong hands-on experience with core components like Fabric Lakehouse, Notebooks (PySpark), Data Pipelines, and Dataflows.
Programming: Expert proficiency in Python and specifically PySpark/Spark SQL for large-scale data processing and complex transformations.
Cloud Data Services: Practical experience with Azure Synapse Analytics (Data Pipelines, Spark Pools) and Azure Data Lake Storage (ADLS Gen2).
Databases & SQL: Advanced SQL skills and experience working with various database systems (e.g., Azure SQL DB, PostgreSQL).
DevOps & CI/CD: Proven experience implementing CI/CD processes for data solutions using GitHub Actions/Pipelines or Azure DevOps Pipelines for automated deployment and testing.
Data Formats: Deep understanding of modern data formats such as Delta Lake, Parquet, and CSV (a short PySpark sketch follows this list).
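
To make the stack above concrete, here is a minimal PySpark sketch of the kind of workflow these requirements describe: reading raw CSV files from ADLS Gen2, applying transformations, and writing a Delta Lake table. It assumes a Fabric or Synapse notebook where a `spark` session is already provided; the storage path, column names, and table name are hypothetical placeholders, not part of this role's actual environment.

```python
from pyspark.sql import functions as F

# Hypothetical ADLS Gen2 path; replace with a real storage account and container.
raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/2025/*.csv"

# In Fabric/Synapse notebooks a SparkSession named `spark` is already provided.
orders = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv(raw_path)
)

# Example transformations: type casting, a derived column, basic filtering.
cleaned = (
    orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("net_amount", F.col("amount") - F.col("discount"))
    .filter(F.col("net_amount") >= 0)
)

# Persist as a Delta table (the default table format in a Fabric Lakehouse).
cleaned.write.format("delta").mode("overwrite").saveAsTable("sales_orders_clean")
```
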
Experience
Total Experience: 4-6 years in Data Engineering, Software Engineering, or a related field.
Relevant Experience: 2-3 years specifically focused on cloud data engineering using Spark/PySpark and Microsoft/Azure data platforms, including 1-2 completed projects.
Screening criteria:
- Evidence of work with GitHub or Azure DevOps for CI/CD, pipeline deployment, or version control.
- Explicit mention of Microsoft Fabric, Lakehouse, and PySpark experience. Look for keywords like pyspark.sql, Delta Lake, notebooks.
- Direct mention of Azure Synapse Analytics (especially Spark Pools), Azure Data Factory (ADF), or Azure Data Lake Storage (ADLS Gen2), plus UDFs in PySpark and implementing data quality/validation checks (see the sketch after this list).
- Microsoft Certified: Azure Data Engineer Associate (DP-203) or relevant Databricks/Spark certifications.
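
For reference, here is a minimal sketch of two of the screening keywords above: a PySpark UDF and a simple data quality/validation check. The sample data, column names, and app name are all hypothetical, and a plain Python UDF is shown for illustration even though built-in functions are usually preferred for performance.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType

spark = SparkSession.builder.appName("dq-check-sketch").getOrCreate()

# Hypothetical sample data standing in for a real source table.
df = spark.createDataFrame(
    [("A-001", "us"), ("A-002", None), ("A-003", "gb")],
    ["order_id", "country_code"],
)

# A PySpark UDF; built-in functions (e.g. F.upper) are preferred when
# available, since Python UDFs bypass Spark's optimizer.
@F.udf(returnType=StringType())
def normalize_country(code):
    return code.upper() if code else "UNKNOWN"

df = df.withColumn("country_code", normalize_country("country_code"))

# Simple data quality checks: required keys must be non-null and unique.
null_count = df.filter(F.col("order_id").isNull()).count()
dup_count = df.count() - df.dropDuplicates(["order_id"]).count()
if null_count or dup_count:
    raise ValueError(
        f"Data quality check failed: {null_count} null ids, {dup_count} duplicates"
    )
print("Data quality checks passed.")
```
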