Keylent Inc

Lead Azure Data Engineer - W2 - No C2C

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Azure Data Engineer in Raleigh, NC, with a contract length of 1+ years, offered on a W2 basis only (no C2C). Key skills include Azure Data Factory, Apache Spark, and advanced SQL. Experience in ETL/ELT design is required.
🌎 - Country
United States
-
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
January 10, 2026
πŸ•’ - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 only
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Apache Spark #NoSQL #Batch #Complex Queries #Data Modeling #Kafka (Apache Kafka) #Data Ingestion #Cloud #ADF (Azure Data Factory) #Databricks #Python #Datasets #Azure SQL #Data Lake #Data Security #Security #Data Pipeline #Storage #Metadata #Compliance #Synapse #API (Application Programming Interface) #Azure Data Factory #Data Management #Azure #Data Engineering #Apache Kafka #Azure Cosmos DB #Spark (Apache Spark) #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Governance #AWS (Amazon Web Services) #Data Architecture #CRM (Customer Relationship Management) #Scala #Azure Synapse Analytics #Azure Databricks
Role description
Lead Azure Data Engineer

Main Skill: Data Software Engineering
Skill Spec: DSE · Python · Azure · Databricks
Employment Type: W2 only
Location: Raleigh, NC
Work Mode: Full-time, onsite, 5 days a week
Project Duration: 1 year+
Start Date: January–February 2026
Travel: None

---

# About the Role

We’re looking for a senior-level Azure Data Engineer to join a high-impact data and CRM platform team supporting a leading financial institution. This role is hands-on and onsite in Raleigh, NC, and is ideal for someone who enjoys building scalable data solutions, working with modern Azure services, and collaborating closely with business and technical teams.

If you enjoy solving complex data problems, designing reliable pipelines, and working in an environment that values clarity, ownership, and quality, this role will suit you well.

---

# What You’ll Do

• Design, build, and maintain scalable data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse Analytics
• Develop ETL/ELT solutions for batch and streaming data ingestion
• Work with large datasets, optimizing storage and performance across Azure Data Lake, Azure SQL, and Azure Cosmos DB
• Implement API-based and streaming ingestion pipelines (low-latency processing)
• Monitor, troubleshoot, and optimize data workflows to ensure availability and performance
• Apply data security best practices, including access control, encryption, and auditing
• Automate pipelines and workflows using CI/CD and infrastructure-as-code principles
• Partner with engineering, analytics, and business teams to support data-driven decisions
• Document data architectures, pipelines, and processes to meet compliance and governance standards
• Support data governance efforts such as metadata management, lineage, and cataloging

---

### Must-Have Skills

• Strong experience with Apache Spark
• Hands-on expertise with:
  • Azure Data Factory
  • Azure SQL
  • Azure Synapse Analytics
• Solid background in ETL/ELT design and implementation
• Advanced SQL skills, including writing and optimizing complex queries
• Experience with data modeling and large-scale data environments

---

### Nice-to-Have Skills

• Apache Kafka
• CI/CD pipelines
• Python
• Experience with Azure Databricks
• Familiarity with NoSQL solutions (Azure Cosmos DB)
• Exposure to both Azure and AWS cloud infrastructure

---

### Interview Process

1. General Interview
2. Technical Interview
3. Project Interview
4. Client Interview