

Saransh Inc
Data Engineer with Spark Streaming & GCP - Only W2
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 8+ years of experience in Python, SQL, and GCP, focusing on Apache Spark and streaming technologies. It is a W2 contract position based in Bentonville, AR, requiring expertise in real-time data pipelines.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Bentonville, AR
-
🧠 - Skills detailed
#Data Lake #Kubernetes #Spark SQL #Kafka (Apache Kafka) #BigQuery #Apache Spark #Redshift #Data Pipeline #Batch #ETL (Extract, Transform, Load) #Data Warehouse #NoSQL #Snowflake #Scala #GCP (Google Cloud Platform) #Programming #ML (Machine Learning) #Apache Kafka #Databricks #Docker #SQL (Structured Query Language) #Databases #SQL Queries #Data Architecture #Data Processing #Data Ingestion #Java #Big Data #Spark (Apache Spark) #Data Engineering #PySpark #Python #Azure #Cloud #Monitoring #Airflow #Data Accuracy #Data Quality
Role description
Role: Data Engineer (Python, Scala, SQL)
Location: Bentonville, AR (Onsite from Day 1)
Job Type: W2 Contract
Note: Only W2 - No C2C (or) Third-party candidates
Mandatory Areas
• 8+ years of experience in Python and SQL, ideally with Scala/Java
• Big Data: Expertise in Apache Spark (Spark SQL, DataFrames, Streaming).
• 4+ Years in GCP
Description
We are seeking a Data Engineer with strong Spark and streaming skills who builds real-time, scalable data pipelines using tools such as Spark, Kafka, and GCP cloud services to ingest, transform, and deliver data for analytics and ML; a minimal pipeline sketch follows the responsibilities list below.
Responsibilities
• Design, develop, and maintain ETL/ELT data pipelines for batch and real-time data ingestion, transformation, and loading using Spark (PySpark/Scala) and streaming technologies (Kafka, Flink).
• Build and optimize scalable data architectures, including data lakes, data warehouses (BigQuery), and streaming platforms.
• Performance Tuning: Optimize Spark jobs, SQL queries, and data processing workflows for speed, efficiency, and cost-effectiveness.
• Data Quality: Implement data quality checks, monitoring, and alerting systems to ensure data accuracy and consistency.
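The sketch below is purely illustrative of the kind of pipeline described in this role, not a prescribed design: a PySpark Structured Streaming job that reads events from Kafka, applies a basic data-quality filter, and appends the result to BigQuery. The broker address, topic, event schema, table name, and GCS paths are assumptions, and the BigQuery sink assumes the spark-bigquery-connector is available on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, DoubleType

spark = (
    SparkSession.builder
    .appName("events-streaming-pipeline")  # illustrative app name
    .getOrCreate()
)

# Assumed event schema; the real schema would come from the upstream data contract.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_ts", TimestampType()),
    StructField("amount", DoubleType()),
])

# Ingest: read the raw event stream from Kafka (broker and topic are assumptions).
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
)

# Transform: parse the JSON payload into typed columns.
parsed = (
    raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Data-quality gate: drop records missing a key or event timestamp.
clean = parsed.filter(F.col("event_id").isNotNull() & F.col("event_ts").isNotNull())

# Deliver: append the cleaned stream to BigQuery (table and buckets are assumptions;
# requires the spark-bigquery-connector).
query = (
    clean.writeStream
    .format("bigquery")
    .option("table", "analytics.events")
    .option("checkpointLocation", "gs://example-bucket/checkpoints/events")
    .option("temporaryGcsBucket", "example-bucket")
    .outputMode("append")
    .start()
)

query.awaitTermination()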
Required Skills & Qualifications
• Programming: Strong proficiency in Python and SQL, ideally with Scala/Java.
• Big Data: Expertise in Apache Spark (Spark SQL, DataFrames, Streaming).
• Streaming: Experience with message queues such as Apache Kafka or Google Pub/Sub.
• Cloud: Familiarity with GCP and/or Azure data services.
• Databases: Knowledge of data warehousing (Snowflake, Redshift) and NoSQL databases.
• Tools: Experience with Airflow, Databricks, Docker, and Kubernetes is a plus (see the orchestration sketch below).
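As a companion to the streaming sketch above, here is a minimal Airflow sketch, assuming Airflow 2.4+ and the apache-spark provider, of how the batch leg of such a pipeline might be scheduled. The DAG id, schedule, job path, and Spark configuration are assumptions, not part of the posting's stack.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_batch_ingest",          # assumed DAG id
    start_date=datetime(2026, 1, 1),
    schedule="@daily",                    # run the batch load once per day
    catchup=False,
) as dag:
    run_batch = SparkSubmitOperator(
        task_id="spark_batch_job",
        application="/opt/jobs/batch_job.py",   # assumed job entry point
        conn_id="spark_default",
        conf={"spark.executor.memory": "4g"},   # illustrative tuning knob
    )
```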






