

TALENDICA
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a W2 contract for "X" months, offering a pay rate of "$X". Key skills include Azure Data Engineering, Scala, Spark, and SQL. Requires 5+ years in big data platforms and CI/CD processes.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Deployment #Azure #Big Data #Data Engineering #Spark (Apache Spark) #Azure Data Platforms #Python #C# #ADLS (Azure Data Lake Storage) #Programming #Java #SQL (Structured Query Language) #Compliance #Security #Monitoring #Debugging #SQL Queries #Scala
Role description
ONLY W2
Must-haves:
Azure Data Engineering
Scala
Spark
Job Description:
Top 3 Must-Have Hard Skillsets
1. Big Data Platform Experience - Minimum 5 years
Cosmos (MapReducer), Azure data stack (HDI, ADLS Gen2), Spark/Scala.
2. SQL + High-Level Programming Language - Minimum 5 years
SQL queries plus experience in C#, Scala, Java, or Python (flexible, not a deal breaker); a representative Spark/Scala + SQL sketch follows this list.
3. CI/CD, Debugging & Troubleshooting - Minimum 5 years
Strong debugging, monitoring, and engineering lifecycle skills.
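To make the first two skill items concrete, here is a minimal Spark/Scala sketch of that kind of work: reading Parquet from an ADLS Gen2 path and aggregating it with plain SQL. It is illustrative only; the storage account, container, paths, and column names are hypothetical placeholders, not details of this role.

// Minimal illustrative sketch only (not from the listing): the kind of Spark/Scala + SQL
// work skill items 1 and 2 describe. Storage account, container, paths, and column
// names are hypothetical placeholders.
import org.apache.spark.sql.SparkSession

object DailyEventRollup {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-event-rollup")
      .getOrCreate()

    // Read Parquet data from a (hypothetical) ADLS Gen2 path.
    val events = spark.read.parquet(
      "abfss://raw@examplestorage.dfs.core.windows.net/events/")

    // Register a temp view so the aggregation can be expressed in plain SQL.
    events.createOrReplaceTempView("events")
    val daily = spark.sql(
      """SELECT event_date, event_type, COUNT(*) AS event_count
        |FROM events
        |GROUP BY event_date, event_type""".stripMargin)

    // Write the rollup to a curated zone.
    daily.write.mode("overwrite").parquet(
      "abfss://curated@examplestorage.dfs.core.windows.net/rollups/daily_events/")

    spark.stop()
  }
}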
Typical Day
You will work with one of the engineers and the engineering manager and will be responsible for feature design, development, validation, CI/CD, bug fixes, and troubleshooting and debugging issues.
Follow engineering practices and deployment rules.
A SAW device is required.
Key Projects
• Security work: fixing code security issues, updating identities/secrets, ensuring secure endpoints.
• Governance: data partitioning, physical access restrictions, compliance-related updates (see the partitioning sketch after this list).
• Feature innovation and continuous platform enhancements.
• Maintaining and improving pipelines across Cosmos (MapReducer) and Azure data platforms (HDI, ADLS Gen2, Spark).
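As a concrete illustration of the governance bullet above, here is a minimal, hypothetical Spark/Scala sketch of partitioning data on write so that access restrictions and retention can be scoped per partition directory. Paths and column names are placeholders, not details of this role.

// Minimal illustrative sketch only (not from the listing): partitioned output of the kind
// the governance bullet mentions. Paths and column names are hypothetical placeholders.
import org.apache.spark.sql.SparkSession

object PartitionedWrite {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("partitioned-write")
      .getOrCreate()

    val events = spark.read.parquet(
      "abfss://raw@examplestorage.dfs.core.windows.net/events/")

    // Partitioning by region and date keeps each slice in its own directory,
    // so access controls and retention policies can be applied per partition.
    events.write
      .mode("overwrite")
      .partitionBy("region", "event_date")
      .parquet("abfss://governed@examplestorage.dfs.core.windows.net/events_partitioned/")

    spark.stop()
  }
}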
Typical Task Breakdown & Operating Rhythm
• 15–20% security-related fixes (code vulnerabilities, endpoint issues, secret rotation).
• Governance and compliance work (data partitioning, access control).
• Writing, modifying, and debugging code and services.
• CI/CD, monitoring, and standard engineering lifecycle practices.
• Mostly heads-down execution work, following a task list and completing high-priority deliverables quickly.
• Collaborating with FTEs who will help onboard and accelerate ramp-up.
• Using SAW device access for AME domain work (pre-prod/prod as allowed).
Ideal Background
Distributed system design & implementation, big data platform knowledge, CI/CD processes, debugging & troubleshooting skills.
Disqualifiers
• Long ramp-up timeline or lack of experience with large-scale data platforms.
• No experience in high-reliability data engineering environments.
• Inability to work independently with heavy execution demands.
• Lack of any programming language + SQL exposure (the manager expects at least a foundational high-level language).
Best vs. Average Candidate
• Best:
o Has 5–7 years in large-scale data engineering (Cosmos, Azure, Spark).
o Strong SQL + one high-level language (C#, Scala, Java, or Python).
o Prior Microsoft experience (S360, security expectations, internal processes).
o Very fast ramp-up and can immediately execute tasks independently.