

Neutrino Advisory, an Inc 5000 Company
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 8+ years of IT experience, specializing in Spark, Scala, and GCP. Contract length is unspecified, with a competitive pay rate. Key skills include ETL/ELT development, data architecture, and streaming technologies.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 19, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Bentonville, AR
-
🧠 - Skills detailed
#Scala #ML (Machine Learning) #Data Architecture #Data Processing #Apache Kafka #Data Quality #Azure #Data Engineering #Java #Kafka (Apache Kafka) #NoSQL #Data Warehouse #Python #Data Pipeline #Cloud #Airflow #Programming #Kubernetes #Data Accuracy #PySpark #Data Lake #Databases #Apache Spark #Batch #BigQuery #Data Ingestion #Redshift #Spark SQL #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Snowflake #Docker #GCP (Google Cloud Platform) #SQL (Structured Query Language) #SQL Queries #Monitoring #Big Data #Databricks
Role description
We are seeking a Data Engineer with Spark, Scala, and streaming skills who builds real-time, scalable data pipelines using tools like Spark, Kafka, and cloud services (GCP) to ingest, transform, and deliver data for analytics and ML.
As a Senior Data Engineer, you will:
• Design, develop, and maintain ETL/ELT data pipelines for batch and real-time data ingestion, transformation, and loading using Spark (PySpark/Scala) and streaming technologies (Kafka, Flink); a sketch of such a pipeline follows this list.
• Build and optimize scalable data architectures, including data lakes, data warehouses (BigQuery), and streaming platforms.
• Performance Tuning: Optimize Spark jobs, SQL queries, and data processing workflows for speed, efficiency, and cost-effectiveness.
• Data Quality: Implement data quality checks, monitoring, and alerting systems to ensure data accuracy and consistency.
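For context on what this day-to-day work looks like, here is a minimal sketch of such a streaming pipeline in Scala with Spark Structured Streaming. It is illustrative only: the broker address, topic name, event schema, and output paths are placeholders, not details from this posting, and a production job would more likely write to BigQuery via the spark-bigquery connector than to local Parquet.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.from_json
import org.apache.spark.sql.types.StructType

object EventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-pipeline")
      .getOrCreate()
    import spark.implicits._

    // Expected shape of each Kafka message (hypothetical schema).
    val schema = new StructType()
      .add("user_id", "string")
      .add("event_type", "string")
      .add("ts", "timestamp")

    // Ingest: read raw events from a Kafka topic (placeholder broker/topic).
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "events")
      .load()

    // Transform: parse the JSON payload and apply a basic data quality
    // gate, dropping rows with missing keys or timestamps.
    val events = raw
      .selectExpr("CAST(value AS STRING) AS json")
      .select(from_json($"json", schema).as("e"))
      .select("e.*")
      .filter($"user_id".isNotNull && $"ts".isNotNull)

    // Deliver: append to the data lake; the checkpoint directory gives
    // the stream recoverability across restarts.
    events.writeStream
      .format("parquet")
      .option("path", "/data/lake/events")
      .option("checkpointLocation", "/data/checkpoints/events")
      .start()
      .awaitTermination()
  }
}
```

In a real deployment, the quality gate would typically be extended with metrics and alerting, and the job tuned (partitioning, shuffle settings) per the performance responsibilities above.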
Required Skills & Qualifications:
• At least 8 years of IT experience
• 4+ years of recent GCP experience
• Programming: Strong proficiency in Python and SQL; ideally also Scala/Java.
• Big Data: Expertise in Apache Spark (Spark SQL, DataFrames, Streaming).
• Streaming: Experience with message queues such as Apache Kafka or Pub/Sub.
• Cloud: Familiarity with GCP, Azure data services.
• Databases: Knowledge of data warehousing (Snowflake, Redshift) and NoSQL databases.
• Tools: Experience with Airflow, Databricks, Docker, and Kubernetes is a plus.
We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.






