Foxworth Analytics

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a contract Data Engineer position focused on designing Azure-based analytics platforms and automation frameworks. Key skills include SQL, Python, Spark, and Azure DevOps. The pay rate is $65–$90 per hour, with US remote work.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
720
-
🗓️ - Date
May 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#ADF (Azure Data Factory) #Azure Data Factory #BI (Business Intelligence) #Azure #Scala #Data Framework #Consulting #Semantic Models #Data Engineering #Azure DevOps #ETL (Extract, Transform, Load) #Data Pipeline #Datasets #SQL (Structured Query Language) #Deployment #Spark (Apache Spark) #DevOps #Libraries #Python #PySpark #Security #Automation
Role description
Azure Data Engineer
Type: Contract
Location: US Remote

About the Role
We are seeking a Data Engineer with strong experience designing modern analytics platforms and automation frameworks in the Microsoft technology stack. This role focuses on building a scalable medallion-based data platform, implementing enterprise ingestion frameworks, and establishing a DevOps-driven deployment architecture for analytics solutions. The ideal candidate has hands-on experience designing medallion architectures and objects, building reusable pipeline functions and parameterized frameworks, and preparing models for semantic consumption.

Key Responsibilities
• Implement the medallion framework within Fabric Lakehouse environments
• Contribute to scalable Bronze ingestion pipelines and notebooks for a variety of native data frameworks
• Contribute to the DevOps system that maintains parameterized data pipelines and reusable ingestion frameworks using Azure Data Factory or Fabric Data Pipelines
• Create and maintain shared transformation libraries and functions used across pipelines and notebooks
• Develop Spark/PySpark notebooks for Silver layer transformations
• Apply Azure DevOps architecture for notebooks and semantic models
• Establish environment parameterization and deployment automation across Dev/Test/Prod environments
• Contribute to BI semantic models and datasets optimized for performance and scalability
• Collaborate with stakeholders to define data models and governance standards
• Ensure solutions follow best practices for security, performance, and maintainability

Qualifications
• Experience designing Fabric Lakehouse environments using OneLake
• Experience implementing DevOps-driven data platform architecture
• Familiarity with Fabric Data Pipelines, Warehouse, and Spark optimization
• Experience implementing Azure DevOps CI/CD pipelines
• Experience designing parameterized pipelines and reusable framework components
• Strong proficiency in SQL, Python, and Spark-based modeling

Preferred Skills
• Experience preparing BI semantic models for enterprise reporting
• Exposure to advanced governance practices
• Consulting experience, particularly in customer advisory and deployments
• Experience collaborating with product teams and familiarity with SDLC practices

The anticipated hourly rate range for this position is $65–$90 per hour. Actual compensation will be determined based on a variety of factors, including but not limited to relevant experience, education, skills, certifications, geographic location, and business needs, in accordance with applicable federal, state, and local laws. Please note that we are not accepting third-party vendors, staffing agencies, or C2C arrangements for this role.
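To give candidates a feel for the "reusable pipeline functions and parameterized frameworks" this role describes, here is a minimal sketch in plain Python. It is not the employer's actual framework: in practice these would be PySpark notebooks operating on DataFrames, but plain dicts stand in for rows here so the idea stays self-contained. All names (`SilverPipeline`, `rename_columns`, `cast_to_float`) are illustrative assumptions, not terms from the posting.

```python
from dataclasses import dataclass, field
from typing import Callable

Row = dict
Transform = Callable[[Row], Row]

@dataclass
class SilverPipeline:
    """Composable Bronze -> Silver transformation pipeline (illustrative)."""
    steps: list = field(default_factory=list)

    def add(self, step: Transform) -> "SilverPipeline":
        # Each step is a reusable, parameterized function shared across pipelines.
        self.steps.append(step)
        return self

    def run(self, bronze_rows: list) -> list:
        # Apply every registered step to each Bronze row, yielding Silver rows.
        silver = []
        for row in bronze_rows:
            for step in self.steps:
                row = step(row)
            silver.append(row)
        return silver

# Parameterized step factories: configuration in, transform function out.
def rename_columns(mapping: dict) -> Transform:
    return lambda row: {mapping.get(k, k): v for k, v in row.items()}

def cast_to_float(columns: list) -> Transform:
    def step(row):
        return {k: (float(v) if k in columns else v) for k, v in row.items()}
    return step

# Usage: raw Bronze records become typed, consistently named Silver records.
bronze = [{"CustID": "1", "Amt": "19.99"}, {"CustID": "2", "Amt": "5.00"}]
pipeline = (SilverPipeline()
            .add(rename_columns({"CustID": "customer_id", "Amt": "amount"}))
            .add(cast_to_float(["amount"])))
silver = pipeline.run(bronze)
print(silver)  # [{'customer_id': '1', 'amount': 19.99}, {'customer_id': '2', 'amount': 5.0}]
```

The same factory pattern is what makes a framework "parameterized": environment-specific values (source names, column mappings, target paths) are passed in as configuration rather than hard-coded, which is also what enables the Dev/Test/Prod deployment automation mentioned above.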