DeWinter Group

Sr. Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer on a 7-month W2 contract, paying $95–105/hr. Located hybrid in Westbrook, Maine, or fully remote for strong candidates, it requires 3+ years of AWS experience, Python proficiency, and expertise in building production data systems.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
840
-
🗓️ - Date
April 2, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Westbrook, ME
-
🧠 - Skills detailed
#Data Lake #Data Modeling #Scala #Observability #Cloud #ETL (Extract, Transform, Load) #API (Application Programming Interface) #AWS (Amazon Web Services) #Automation #Data Engineering #Data Ingestion #REST (Representational State Transfer) #Data Pipeline #Storage #REST API #Snowflake #Terraform #Python #Automated Testing #SQL (Structured Query Language) #Data Transformations #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #Databricks #DevOps #IAM (Identity and Access Management) #Lambda (AWS Lambda)
Role description
Title: Sr. Data Engineer
Job Type: W2 Contract
Contract Length: 7 months
Pay Range: $95–105/hr
Start Date: ASAP
Location: Hybrid preferred in Westbrook, Maine, with fully remote work within the United States available for exceptionally strong candidates.

About The Opportunity
Our client, a leader in enterprise diagnostic data and technology, is looking for a skilled Sr. Data Engineer to join their team for a 7-month engagement. This project involves developing and maintaining the data systems and infrastructure that deliver diagnostic results to our customers in near real time. This is a high-impact role that requires a self-motivated professional who can hit the ground running and deliver results quickly.

Key Responsibilities & Deliverables
This role is focused on the successful completion of specific tasks and deliverables. Your responsibilities will include:
• Streaming Data Pipelines: Build and maintain streaming ingestion, API-based data workflows, and distributed data transformations using AWS services (Kinesis, Lambda, Glue/Spark, EMR, S3).
• API Development: Develop and maintain REST APIs for data ingestion and consumption.
• Data Modeling: Develop and optimize data lake and operational data models for near-real-time use cases, supporting downstream loading into Snowflake.
• DevOps/Reliability: Contribute to the platform's CI/CD, automated testing, and release workflows, working to improve reliability, automation, and observability.
• On-Call Support: Participate in an on-call rotation (one week of duty every 6 weeks) to support the team's data systems and services.

Required Skills & Experience
We are looking for someone with a proven track record of successful contract engagements. The ideal candidate will have:
• 3+ years of hands-on experience building production data systems or data services.
• Deep expertise in AWS for building applications, including services like Lambda, S3, API Gateway, IAM, Glue/Spark, and Kinesis. This is not a SQL/ELT or Databricks-notebook-driven role; you need to be a subject matter expert in building production systems.
• Solid proficiency in Python (and ideally Scala).
• Demonstrated ability to debug and reason about distributed data systems (streaming pipelines, Spark on EMR/Glue, APIs, queues, storage layers).
• Practical experience with infrastructure-as-code (Terraform or CloudFormation).
• Strong communication skills to provide clear and concise status updates to the project team.
• W2 only (no C2C or 1099 contractors).