

aKUBE
Senior Data Engineer - Product Performance Data -1573
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer focusing on Product Performance Data, lasting 10 months, with a pay rate up to $96/hr. Key skills include Apache Spark with Scala, advanced SQL, and experience with clickstream data.
Country: United States
Currency: $ USD
Day rate: $768
Date: February 11, 2026
Duration: More than 6 months
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: New York City Metropolitan Area
Skills detailed: #A/B Testing #Python #SQL (Structured Query Language) #Scala #Spark (Apache Spark) #Airflow #Data Quality #Databricks #Libraries #Datasets #Data Engineering #Apache Spark
Role description
City: Bristol, CT or NYC
Onsite/Hybrid/Remote: Hybrid (4 days a week onsite)
Duration: 10 months
Rate Range: Up to $96/hr on W2, depending on experience (no C2C, 1099, or subcontract)
Work Authorization: GC, USC, All valid EADs except OPT, CPT, H1B
Must Have:
β’ Apache Spark with Scala as the primary production language
β’ Advanced SQL on large datasets (complex joins, window functions, performance tuning)
β’ Hands-on experience with clickstream or user browse event data at scale
β’ Databricks pipeline development with end-to-end ownership
β’ Airflow for orchestration of SLA-driven production pipelines
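As a purely illustrative sketch of the window-function style logic the requirements above call for (not part of the posting itself), the following plain-Scala snippet deduplicates hypothetical clickstream events to the latest event per user. In a production Spark pipeline this would typically be a `ROW_NUMBER() OVER (PARTITION BY userId ORDER BY ts DESC)` filter on a DataFrame; plain Scala collections stand in for Spark here, and all field names are made up for the example:

```scala
// Hypothetical clickstream event shape (fields invented for illustration).
case class ClickEvent(userId: String, page: String, ts: Long)

object LatestClick {
  // Keep only the most recent event per user, mirroring a
  // ROW_NUMBER() OVER (PARTITION BY userId ORDER BY ts DESC) = 1 filter.
  def latestPerUser(events: Seq[ClickEvent]): Map[String, ClickEvent] =
    events.groupBy(_.userId).map { case (user, evs) =>
      user -> evs.maxBy(_.ts)
    }
}
```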
Responsibilities:
β’ Build and maintain large-scale Spark pipelines processing clickstream and telemetry data.
β’ Develop shared Scala and Python libraries to standardize business logic across pipelines.
β’ Own Databricks workloads orchestrated via Airflow with strict uptime SLAs.
β’ Deliver analytics-ready datasets supporting product performance and reporting.
β’ Define and document standards for pipeline design, partitioning, and data quality.
Qualifications:
β’ 5+ years of hands-on data engineering experience.
β’ Strong production experience with Spark, Scala, SQL, and Databricks.
β’ Experience supporting high-volume, event-based data platforms.
• Bachelor's degree or equivalent experience.
Nice to Have:
β’ Experience with media, streaming, or consumer-facing product analytics data
β’ Familiarity with telemetry or quality-of-service datasets
β’ Experience supporting experimentation or A/B testing data