Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a long-term hybrid contract in Durham, NC, offering a competitive pay rate. Key requirements include 5 years of data pipeline experience with Azure Synapse, strong SQL proficiency, and ETL/ELT expertise.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
September 27, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Durham, NC
🧠 - Skills detailed
#Databricks #Visualization #"ETL (Extract, Transform, Load)" #Azure Stream Analytics #AWS (Amazon Web Services) #Azure SQL Database #ADF (Azure Data Factory) #Snowflake #Datasets #Azure SQL #Data Architecture #Cloud #Storage #Scala #SQL (Structured Query Language) #Data Pipeline #Scripting #Python #Azure #Azure Data Factory #BI (Business Intelligence) #Data Engineering #Data Lake #Azure Synapse Analytics #Programming #Microsoft Azure #Synapse #Monitoring #Data Modeling #Microsoft Power BI #Data Processing
Role description
DataStaff, Inc. is seeking a Senior Data Engineer for a long-term contract opportunity with one of our direct clients located in Durham, NC. This position is hybrid.

Responsibilities:
• Lead the development and optimization of data pipelines, analytics solutions, and cloud-based data platforms.
• Design and implement end-to-end data engineering solutions using Microsoft Azure Synapse Analytics to meet child welfare business needs.
• Design, develop, and maintain data models in Synapse to support reporting and analytics.
• Develop ETL solutions using Azure Synapse Analytics pipelines and dedicated SQL pools for extraction, transformation, and aggregation from Salesforce objects.
• Analyze source systems and design the ETL data load.
• Create and configure Azure Synapse Analytics workspaces.
• Create pipelines in Azure Synapse using linked services, datasets, and pipelines to extract, transform, and load data from sources such as Salesforce.
• Perform monitoring, troubleshooting, and maintenance of the data architecture and pipelines.
• Collaborate with data engineers, analysts, and stakeholders to gather requirements, design visualizations, and provide training on self-service BI tools.

Required Skills:
• 5 years - Experience designing and optimizing scalable data pipelines using modern platforms such as Azure Synapse, Snowflake, or Databricks workflows
• 3 years - Programming languages: strong proficiency in Python or Scala for data processing
• 7 years - SQL mastery: advanced query writing, performance tuning, and optimization
• 4 years - Experience with data modeling and warehousing (star schema, snowflake schema)
• 5 years - Experience with ETL/ELT processes

Desired Skills:
• 4 years - Experience with the Azure data platform (Azure Data Factory, Azure Data Lake, and Azure SQL Database)
• 4 years - Familiarity with Event Hubs, Azure Stream Analytics, and Azure Functions
• 1 year - Experience with programming and scripting (SQL, T-SQL, and data query optimization)
• 5 years - Experience with cloud platforms (e.g., AWS and Azure) and associated services for data processing and storage
• 1 year - Experience with Power BI

This position is available as a corp-to-corp or W2 position with a competitive benefits package. DataStaff offers medical, dental, and vision coverage options as well as paid vacation, sick, and holiday leave. As many of our opportunities are long-term, we also have a 401k program available for employees after 6 months.