

Hire Tech Services
Lead Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Engineer with a contract length of "unknown" and a day rate of $400. It requires local St. Louis residency; expertise in Scala, PySpark, Databricks, and AWS; and AI agent knowledge for data ingestion and streaming platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date
April 11, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
St Louis, MO
-
🧠 - Skills detailed
#Impala #AI (Artificial Intelligence) #Data Engineering #Monitoring #Python #NiFi (Apache NiFi) #Data Ingestion #Databricks #Java #PySpark #Hadoop #AWS (Amazon Web Services) #Cloudera #Spark (Apache Spark) #Scala #Kafka (Apache Kafka) #Splunk #Cloud #SQL (Structured Query Language) #Big Data
Role description
Data Engineering Services – St. Louis, MO (Hybrid, 3 Days Onsite)
Must-haves: Scala, PySpark, Databricks, and AWS
Note: Candidates must be local to St. Louis, and the final interview round is conducted on-site.
Role Overview:
Team Focus: Supports critical data ingestion, streaming, and model monitoring platforms that enable real-time decisioning and fraud risk capabilities. The team contributes to enhancements and modernization. Current priorities, in order, are model monitoring, modernization of applications and legacy systems, and leveraging AI tools.
High-level overview: An engineer experienced in Scala, Python, or Java, plus Big Data and AWS/Databricks. AI agent knowledge is needed to build agents within the team that automate processes.
Core skills:
• Data Engineering / Streaming Platforms (Kafka, real-time pipelines in Scala/Python, Apache NiFi)
• Big Data Platforms (Spark, Hadoop/Ozone, Hive or Impala, AWS, Databricks, Cloudera Manager)
• Production Support & Platform Reliability (Splunk for monitoring, troubleshooting, performance tuning)
• Strong SQL
• AI agent knowledge
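As a rough illustration of the "Strong SQL" expectation for fraud-risk work, the sketch below runs an aggregation-and-threshold query of the kind such platforms use. It is a minimal example only: the `transactions` table, column names, and $500 threshold are all hypothetical, not this team's actual schema (SQLite stands in for the Hive/Impala/Databricks engines named above).

```python
import sqlite3

# Hypothetical schema for illustration only -- not the team's real data model.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE transactions (account_id TEXT, amount REAL, ts TEXT);
INSERT INTO transactions VALUES
  ('a1', 120.0, '2026-04-11T09:00'),
  ('a1', 950.0, '2026-04-11T09:05'),
  ('a2',  40.0, '2026-04-11T09:10');
""")

# Flag accounts whose total spend exceeds an assumed $500 threshold.
rows = conn.execute("""
SELECT account_id, SUM(amount) AS total
FROM transactions
GROUP BY account_id
HAVING total > 500
ORDER BY total DESC
""").fetchall()
print(rows)  # [('a1', 1070.0)]
```

In production the same GROUP BY / HAVING pattern would typically run over a windowed Kafka stream or a Databricks table rather than an in-memory database.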





