AllianceIT Inc

Senior Big Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Big Data Engineer based in Bentonville, AR, with a contract of unspecified duration, paying $50/hr on W2. Key skills include Scala, Spark, Python, and GCP data tools. Requires 8+ years of IT experience, including 4+ years in GCP.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
$50/hr on W2 (hourly; no day rate stated)
-
🗓️ - Date
February 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Bentonville, AR
-
🧠 - Skills detailed
#Data Lake #Kubernetes #Spark SQL #Kafka (Apache Kafka) #BigQuery #Apache Spark #Redshift #Data Pipeline #Batch #ETL (Extract, Transform, Load) #Data Warehouse #NoSQL #AI (Artificial Intelligence) #Snowflake #Scala #GCP (Google Cloud Platform) #Hadoop #Programming #Apache Kafka #Databricks #Docker #SQL (Structured Query Language) #Databases #SQL Queries #Data Architecture #Data Processing #Data Ingestion #Data Modeling #Big Data #Java #Spark (Apache Spark) #Data Engineering #PySpark #Python #Azure #Cloud #Monitoring #Airflow #Data Accuracy #Data Quality
Role description
Position: Senior Big Data Engineer
Location: Bentonville, AR (onsite)
Contract: W2, $50/hr

Must-have skills:
• Scala, Spark, Python, SQL, Big Data, Hadoop
• GCP data tools: BigQuery, Dataproc, Vertex AI, Pub/Sub, Cloud Functions
• PySpark, Spark SQL, and data modeling

Responsibilities:
As a Senior Data Engineer, you will:
• Design, develop, and maintain ETL/ELT data pipelines for batch and real-time data ingestion, transformation, and loading using Spark (PySpark/Scala) and streaming technologies (Kafka, Flink).
• Build and optimize scalable data architectures, including data lakes, data warehouses (BigQuery), and streaming platforms.
• Performance tuning: optimize Spark jobs, SQL queries, and data processing workflows for speed, efficiency, and cost-effectiveness.
• Data quality: implement data quality checks, monitoring, and alerting systems to ensure data accuracy and consistency.

Required skills & qualifications:
• Programming: strong proficiency in Python and SQL; ideally Scala/Java as well.
• Big Data: expertise in Apache Spark (Spark SQL, DataFrames, Streaming).
• Streaming: experience with messaging queues such as Apache Kafka or Pub/Sub.
• Cloud: familiarity with GCP and Azure data services.
• Databases: knowledge of data warehousing (Snowflake, Redshift) and NoSQL databases.
• Tools: experience with Airflow, Databricks, Docker, and Kubernetes is a plus.

Experience level:
• Total IT experience: minimum 8 years
• GCP: 4+ years of recent GCP experience
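For context only: the pipeline and data quality responsibilities above correspond to work along the lines of the minimal Scala Spark sketch below. The bucket paths, column names, and 95% retention threshold are hypothetical illustrations, not details from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object OrdersBatchPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("orders-batch-pipeline")
      .getOrCreate()

    // Hypothetical source path; a real pipeline would read from the team's
    // actual data lake, Pub/Sub subscription, or BigQuery dataset.
    val raw = spark.read
      .option("header", "true")
      .csv("gs://example-bucket/raw/orders/")

    // Basic transformation: type the amount column and keep only valid rows.
    val cleaned = raw
      .withColumn("amount", col("amount").cast("double"))
      .filter(col("order_id").isNotNull && col("amount") > 0)

    // Simple data quality gate: fail the job if too many rows were dropped.
    val total = raw.count()
    val kept = cleaned.count()
    if (total > 0 && kept.toDouble / total < 0.95) {
      throw new RuntimeException(s"Data quality check failed: kept $kept of $total rows")
    }

    // Hypothetical curated zone; write the validated output as Parquet.
    cleaned.write.mode("overwrite").parquet("gs://example-bucket/curated/orders/")
    spark.stop()
  }
}
```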