

Brooksource
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer contract position focused on Microsoft SQL and Azure technologies, offered at a day rate of $440. Key skills include SQL, Azure services, ETL/ELT pipelines, and data modeling. The role is fully remote for the duration of the contract.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
440
-
🗓️ - Date
January 7, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Washington, United States
-
🧠 - Skills detailed
#Data Quality #Synapse #Logging #Data Lake #ADF (Azure Data Factory) #Microsoft SQL #Schema Design #Data Management #Documentation #Monitoring #SQL (Structured Query Language) #Complex Queries #Microsoft SQL Server #ETL (Extract, Transform, Load) #Azure Data Factory #MS SQL (Microsoft SQL Server) #Scala #Data Modeling #GIT #Indexing #Automation #DevOps #SQL Queries #Data Ingestion #Azure SQL #Security #Data Engineering #Microsoft Power BI #BI (Business Intelligence) #Azure #SQL Server #SSAS (SQL Server Analysis Services) #Deployment #Metadata #Debugging #Datasets #Data Pipeline #Storage
Role description
Data Engineer (Microsoft SQL & Azure Technologies)
Location: Remote
Type: Contract
Role Overview
The Data Engineer will design, build, and optimize data pipelines and data platforms leveraging Microsoft SQL Server and Azure data services. This role focuses on integrating, transforming, and consolidating data from diverse sources into scalable, secure, and high‑performance data solutions suitable for analytics and operational workloads.
Key Responsibilities
• Design, build, and maintain ETL/ELT pipelines using Azure Data Factory, Synapse Pipelines, and SQL‑based transformations.
• Develop and optimize SQL queries, stored procedures, indexing strategies, and performance‑tuning processes.
• Build secure, compliant, and production‑grade data ingestion and transformation frameworks across structured and unstructured sources.
• Develop and maintain data models, data lake zones (Raw/Bronze, Refined/Silver, Curated/Gold), and schemas aligned to business needs.
• Implement robust data quality, validation, and metadata management practices.
• Work with stakeholders to understand data requirements and deliver analytics‑ready datasets.
• Monitor, troubleshoot, and optimize data pipelines ensuring high availability and reliability.
• Collaborate with engineering, BI, and analytics teams to support downstream consumption (Power BI, Synapse SQL, etc.).
• Implement Azure best practices for security, governance, monitoring, logging, and cost optimization.
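To make the pipeline and data-lake responsibilities above concrete, here is a minimal T-SQL sketch of the kind of Bronze-to-Silver load and run logging the role describes. All object names (etl.usp_LoadCustomerSilver, bronze.Customer, silver.Customer, etl.PipelineRunLog) are hypothetical illustrations, not part of this posting.

    -- Hypothetical sketch only: object and column names are illustrative, not from the posting.
    CREATE OR ALTER PROCEDURE etl.usp_LoadCustomerSilver
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @RowsLoaded INT;

        -- Basic data-quality gate: require a business key and a plausible date.
        INSERT INTO silver.Customer (CustomerKey, CustomerName, SignupDate, LoadedAtUtc)
        SELECT b.CustomerKey,
               TRIM(b.CustomerName),
               b.SignupDate,
               SYSUTCDATETIME()
        FROM bronze.Customer AS b
        WHERE b.CustomerKey IS NOT NULL
          AND b.SignupDate <= CAST(SYSUTCDATETIME() AS DATE);

        SET @RowsLoaded = @@ROWCOUNT;

        -- Lightweight run logging so pipeline monitoring can track load health.
        INSERT INTO etl.PipelineRunLog (PipelineName, RowsLoaded, RunAtUtc)
        VALUES (N'LoadCustomerSilver', @RowsLoaded, SYSUTCDATETIME());
    END;

In practice a procedure like this would typically be invoked from an Azure Data Factory or Synapse pipeline activity, with the log table feeding the monitoring and alerting called out above.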
Core Technical Skills
• Strong experience with SQL (writing complex queries, stored procedures, performance tuning).
• Knowledge of Azure services including Azure Synapse, Azure SQL, Azure Storage, App Services, and Azure Functions.
• Experience with SSAS and OLAP/analytical modeling technologies.
• Understanding of data modeling, schema design, and metadata-driven application development.
• Familiarity with CI/CD pipelines, Git, DevOps automation, and test frameworks.
• Skilled in debugging, troubleshooting, and diagnosing distributed systems issues.
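As a further hedged sketch of the SQL performance-tuning skills listed above (the dbo.Orders table, its columns, and the index name are assumptions, not from the posting), tuning on SQL Server often comes down to shaping an index to a query's predicates and sort order:

    -- Hypothetical example: a covering index aligned to a frequent reporting query.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerKey_OrderDate
        ON dbo.Orders (CustomerKey, OrderDate DESC)
        INCLUDE (OrderTotal);

    -- The query the index is shaped for: a seek on CustomerKey, an ordered scan on
    -- OrderDate, and OrderTotal served from the included column (no key lookups).
    DECLARE @CustomerKey INT = 42;
    SELECT TOP (10) OrderDate, OrderTotal
    FROM dbo.Orders
    WHERE CustomerKey = @CustomerKey
    ORDER BY OrderDate DESC;

Comparing the actual execution plan (and SET STATISTICS IO output) before and after adding the index is the usual way to confirm the tuning paid off.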
Delivery Excellence
• Demonstrates accountability across design, development, testing, documentation, and deployment.
• Strong understanding of engineering fundamentals: code quality, secure coding, release readiness, and support.
• Ability to independently drive features end-to-end with minimal supervision.






