Stott and May

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer focused on building large-scale data pipelines, requiring strong experience with Kafka, Spark, Python or Java, and SQL. It offers a six-month initial contract with a strong likelihood of extension, $70-100 per hour, and is fully remote within the EST time zone.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
800
-
🗓️ - Date
February 27, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Chicago, IL
-
🧠 - Skills detailed
#DevOps #Data Quality #Java #GCP (Google Cloud Platform) #Spark (Apache Spark) #Terraform #Python #Azure #Cloud #Kafka (Apache Kafka) #ML (Machine Learning) #Data Engineering #Data Processing #SQL (Structured Query Language) #Data Pipeline #Batch
Role description
We're working with a digital client hiring a Senior Data Engineer to build and scale large, distributed data pipelines. This is a pure data engineering requirement, not a DevOps or platform engineering role: the focus is pipeline design, streaming architecture, and high-volume data processing in production. You'll join an established team working on real-time and batch data systems at scale.

Six-month initial contract with a strong extension view. $70-100 per hour depending on experience, 40 hours per week. W2 or personal LLC only; no C2C. Fully remote, EST time zone.

What you'll be doing
• Building large-scale batch and streaming data pipelines
• Working heavily with Kafka or similar event-streaming platforms
• Developing distributed processing with Spark or Flink
• Optimizing throughput, reliability, and data quality
• Collaborating with data, analytics, and platform teams

What we're looking for
• Strong experience as a Data Engineer on large distributed systems
• Kafka in production environments
• Spark or similar distributed processing frameworks
• Strong Python or Java; solid SQL
• Cloud experience preferred, but engineers from GCP, Azure, or on-prem backgrounds are welcome

Nice to have
• Flink
• Terraform exposure
• Experience supporting downstream analytics or ML

High-scale data engineering. Modern cloud stack. Extensions likely.