

Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Bentonville, AR, on a 6-month W2 contract. Key skills include Scala, GCP, Spark, and Kafka. Requires 8+ years of IT experience and 4+ years with GCP.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
408
🗓️ - Date
February 28, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Bentonville, AR
🧠 - Skills detailed
#NoSQL #Databricks #Data Engineering #AI (Artificial Intelligence) #Big Data #Data Architecture #Spark (Apache Spark) #ETL (Extract, Transform, Load) #PySpark #Data Quality #Cloud #Databases #Scala #Monitoring #SQL Queries #Redshift #Airflow #Docker #SQL (Structured Query Language) #Snowflake #BigQuery #Data Lake #Kafka (Apache Kafka) #Hadoop #GCP (Google Cloud Platform) #Python #Kubernetes
Role description
Job Title: Lead II – Software Engineering (Senior Data Engineer)
Location: Bentonville, AR (Onsite – 5 Days)
Duration: 6 Months
Notes: W2 only | Video pre-screen required | Scala + GCP mandatory.
Primary Responsibilities:
• Design and develop ETL/ELT pipelines using Spark (Scala/PySpark); a minimal sketch follows this list.
• Build real-time streaming solutions using Kafka or Pub/Sub.
• Develop scalable data architectures (Data Lakes, BigQuery, Dataproc).
• Optimize Spark jobs and SQL queries for performance.
• Implement data quality, monitoring, and alerting frameworks.
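For illustration only (not part of the posting), here is a minimal sketch of the kind of batch ETL pipeline described above, written in Scala against the plain Spark API. The GCS bucket paths, column names, and aggregation logic are hypothetical placeholders.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-etl").getOrCreate()

    // Extract: raw JSON landed in a (hypothetical) GCS data-lake bucket.
    val raw = spark.read.json("gs://example-lake/raw/orders/")

    // Transform: drop bad records, derive a date column, aggregate per day.
    val daily = raw
      .filter(col("order_id").isNotNull)
      .withColumn("order_date", to_date(col("created_at")))
      .groupBy("order_date")
      .agg(sum("amount").as("total_amount"), count("*").as("order_count"))

    // Load: write partitioned Parquet to the curated zone of the lake.
    daily.write.mode("overwrite").partitionBy("order_date")
      .parquet("gs://example-lake/curated/orders_daily/")

    spark.stop()
  }
}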
Required:
• 8+ years total IT experience.
• 4+ years recent GCP experience.
• Strong Scala, Spark, PySpark, Python, SQL.
• Big Data tools: Hadoop, BigQuery, Dataproc, Pub/Sub, Vertex AI, Cloud Functions.
• Experience with streaming technologies (Kafka); see the sketch after this list.
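Because Kafka streaming is called out as required, a comparable hedged sketch of a Spark Structured Streaming consumer in Scala follows. The broker address, topic name, and checkpoint path are invented for illustration, and the job needs the spark-sql-kafka connector on the classpath.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("orders-stream").getOrCreate()

    // Source: read the topic as an unbounded DataFrame; Kafka delivers
    // key/value as binary, so cast the payload to a string.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker-1:9092")
      .option("subscribe", "orders")
      .load()
      .selectExpr("CAST(value AS STRING) AS payload", "timestamp")

    // Transform: one-minute event counts, tolerating 5 minutes of late data.
    val counts = events
      .withWatermark("timestamp", "5 minutes")
      .groupBy(window(col("timestamp"), "1 minute"))
      .count()

    // Sink: console output for the sketch; a production job would target
    // BigQuery, GCS, or another durable store instead.
    counts.writeStream
      .outputMode("update")
      .format("console")
      .option("checkpointLocation", "/tmp/checkpoints/orders-stream/")
      .start()
      .awaitTermination()
  }
}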
Preferred:
• Airflow, Databricks, Docker, Kubernetes.
• Snowflake / Redshift / NoSQL databases.






