

FUSTIS LLC
Azure Data Engineer with Fabric & Synapse
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Engineer with Fabric & Synapse, offering a remote contract for US citizens at $70-$78/hr. Requires 5-7 years in Data Engineering, expertise in Azure platforms, SQL, and data pipeline orchestration.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
624
🗓️ - Date
April 22, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Synapse #BI (Business Intelligence) #Data Ingestion #ETL (Extract, Transform, Load) #Monitoring #PySpark #SQL (Structured Query Language) #Scala #Delta Lake #Data Modeling #Azure #Data Pipeline #Data Engineering #Data Governance #Spark SQL #ADLS (Azure Data Lake Storage) #Databricks #Security #Debugging #Semantic Models #Data Processing #Data Architecture #Microsoft Power BI #Deployment #Snowflake #Data Quality #Spark (Apache Spark) #Version Control #JSON (JavaScript Object Notation)
Role description
Job Role: Azure Data Engineer
Location: Remote
Eligibility: US Citizens Only
Pay Rate: $70–$78/hr on W2
Job Description:
The Data Engineer is responsible for the design, development, and operational support of data solutions built on the Microsoft Fabric platform. This role focuses on implementing scalable data architectures that leverage lakehouse design principles, enabling efficient ingestion, transformation, and serving of data for analytical consumption.
The position requires applying established data engineering patterns within the Fabric ecosystem, including the use of data pipelines, Spark-based processing, SQL endpoints, and semantic modeling. The Data Engineer will ensure that data assets are reliable, performant, and structured to support reporting, analytics, and downstream applications.
Key Responsibilities
• Design and implement data ingestion and transformation pipelines using Fabric Data Factory and related services
• Develop and manage Lakehouse architectures, including ingestion into Delta tables
• Develop and maintain notebooks (PySpark / Spark SQL) for data transformation and enrichment
• Create and maintain semantic models to support reporting and analytics use cases
• Implement data quality checks, validation, and monitoring for ingestion pipelines
• Collaborate with analysts, Power BI developers, and stakeholders to deliver data solutions aligned with business needs
• Optimize performance and cost across data workloads
• Contribute to evolving best practices, standards, and patterns for Microsoft Fabric adoption
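To give a concrete sense of the data-quality checks and pipeline monitoring mentioned above, here is a minimal plain-Python sketch (Spark and Fabric APIs are omitted so the example stays self-contained; the field names `order_id` and `amount` are hypothetical):

```python
# Illustrative row-level quality check for an ingestion pipeline.
# Field names ("order_id", "amount") are hypothetical examples.

def validate_row(row: dict) -> list[str]:
    """Return a list of quality issues found in one ingested record."""
    issues = []
    if not row.get("order_id"):
        issues.append("missing order_id")
    amount = row.get("amount")
    if not isinstance(amount, (int, float)):
        issues.append("amount is not numeric")
    elif amount < 0:
        issues.append("amount is negative")
    return issues


def partition_rows(rows: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split rows into valid records and rejects with reasons, for monitoring."""
    valid, rejected = [], []
    for row in rows:
        issues = validate_row(row)
        if issues:
            rejected.append({"row": row, "issues": issues})
        else:
            valid.append(row)
    return valid, rejected
```

In a real Fabric pipeline the same pattern would typically run as a PySpark filter or expectation step, with rejected rows routed to a quarantine table for alerting.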
Required Qualifications
• 5–7 years of experience in Data Engineering or related roles
• Hands-on experience with modern data platforms (e.g., Azure Synapse, Databricks, Snowflake, or similar)
• Understanding of lakehouse architecture and distributed data processing concepts
• Proficiency in SQL and experience with data modeling concepts (star schema, semantic layers)
• Experience building and orchestrating data pipelines (ETL/ELT)
• Familiarity with Spark (PySpark or Spark SQL) and notebook-based development
• Experience working with structured and semi-structured data (JSON, Parquet, etc.)
Preferred Qualifications
• Exposure to Microsoft Fabric or its core components (Lakehouse, Data Factory, Synapse Data Engineering, Power BI)
• Experience with Power BI semantic models and integration with data platforms
• Knowledge of Delta Lake and medallion architecture (bronze/silver/gold layers)
• Familiarity with CI/CD practices in data engineering (e.g., deployment pipelines, version control)
• Experience working in Azure ecosystem (ADLS, Azure Functions, etc.)
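The medallion (bronze/silver/gold) layering listed above can be sketched in plain Python to show the intent of each layer (Delta Lake and Spark are omitted for brevity; table shapes and field names are illustrative):

```python
import json

# Illustrative medallion flow: raw JSON strings (bronze) are parsed and
# cleaned (silver), then aggregated for reporting (gold).
# All field names here are hypothetical.

def to_silver(bronze_records: list[str]) -> list[dict]:
    """Parse raw JSON payloads and drop malformed or incomplete records."""
    silver = []
    for raw in bronze_records:
        try:
            rec = json.loads(raw)
        except json.JSONDecodeError:
            continue  # malformed payloads stay in bronze for reprocessing
        if "region" in rec and "sales" in rec:
            silver.append({"region": rec["region"], "sales": float(rec["sales"])})
    return silver


def to_gold(silver_records: list[dict]) -> dict:
    """Aggregate cleaned records into a reporting-friendly summary table."""
    totals: dict = {}
    for rec in silver_records:
        totals[rec["region"]] = totals.get(rec["region"], 0.0) + rec["sales"]
    return totals
```

In Fabric, each layer would normally be a Delta table in the Lakehouse, with the silver and gold steps implemented as PySpark or Spark SQL notebooks orchestrated by a Data Factory pipeline.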
Key Competencies
• Solid problem-solving and debugging skills in distributed data environments
• Ability to work independently in a contractor capacity with minimal supervision
• Strong communication skills to collaborate with both technical and non-technical stakeholders
• Pragmatic approach to adopting new technologies while leveraging established patterns
Nice to Have
• Knowledge of data governance, lineage, and security practices
• Prior experience in migrating workloads to new platforms



