

Stott and May
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer focused on building large-scale data pipelines, requiring strong experience with Kafka, Spark, Python or Java, and SQL. It offers a six-month contract, $70-100 per hour, and is fully remote within the EST time zone.
Country: United States
Currency: $ USD
Day rate: 800
Date: February 27, 2026
Duration: More than 6 months
Location: Remote
Contract: W2 Contractor
Security: Unknown
Location detailed: Chicago, IL
Skills detailed: #DevOps #Data Quality #Java #GCP (Google Cloud Platform) #Spark (Apache Spark) #Terraform #Python #Azure #Cloud #Kafka (Apache Kafka) #ML (Machine Learning) #Data Engineering #Data Processing #SQL (Structured Query Language) #Data Pipeline #Batch
Role description
We're working with a digital client hiring a Senior Data Engineer to build and scale large, distributed data pipelines.
This is a pure Data Engineering requirement. It is not a DevOps or platform engineering role. The focus is pipeline design, streaming architecture, and high-volume data processing in production.
You'll join an established team working on real-time and batch data systems at scale.
Six-month initial contract with a strong view to extension.
$70-100 per hour depending on experience, 40 hours per week.
W2 or personal LLC only. No C2C.
Fully remote, EST time zone.
What you'll be doing
• Building large-scale batch and streaming data pipelines
• Working heavily with Kafka or similar event-streaming platforms
• Developing distributed processing with Spark or Flink
• Optimizing throughput, reliability, and data quality
• Collaborating with data, analytics, and platform teams
What we're looking for
• Strong experience as a Data Engineer on large distributed systems
• Kafka in production environments
• Spark or similar distributed processing frameworks
• Strong Python or Java, solid SQL
• Cloud experience preferred, but open to engineers from GCP, Azure, or on-prem backgrounds
Nice to have
• Flink
• Terraform exposure
• Experience supporting downstream analytics or ML
High-scale data engineering. Modern cloud stack. Extensions likely.