Data Pipeline Engineer (Enable AI/ML)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Pipeline Engineer (Enable AI/ML) in Dallas, TX (Hybrid) on a contract basis. It requires 3–8 years of data pipeline experience in Financial Services or Big Tech and expertise in AWS, Spark, Kafka, ETL, SQL, and Python.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
September 26, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Dallas, TX
🧠 - Skills detailed
#Cloud #Data Pipeline #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Data Processing #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #ML (Machine Learning) #AI (Artificial Intelligence) #Python #Spark (Apache Spark) #Strategy #Data Strategy #Data Modeling #Migration
Role description
Job Title: Data Pipeline Engineer
Location: Dallas, TX (Hybrid); local candidates only
Job Type: Contract

Responsibilities:
Drive a unified cloud/data strategy: centralize data into functional models (real-time, analytical, and operational), modernize platforms, and enable AI/ML.

Mandatory Requirements:
- 3–8 years of experience building and optimizing data pipelines within Financial Services or Big Tech environments
- Hands-on expertise with AWS, Spark, Kafka, and ETL frameworks
- Strong proficiency in data modeling and experience with lakehouse migration projects
- Proven track record of consolidating fragmented data sources into centralized platforms
- Advanced skills in SQL and Python

Nice to Have:
- Experience with real-time data processing in regulated environments
- Familiarity with AI/ML data enablement use cases
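
For context on the stack named in the requirements (Kafka, Spark, Python, lakehouse storage on AWS), the sketch below shows one minimal form such a pipeline could take: a Spark Structured Streaming job that reads JSON events from a Kafka topic, applies a light transform, and writes partitioned Parquet to S3. The broker address, topic name, schema, and bucket paths are placeholders for illustration only, not details from the posting.

```python
# Minimal illustrative sketch (all names/paths are hypothetical placeholders):
# Kafka -> Spark Structured Streaming -> partitioned Parquet on S3.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Hypothetical schema for the JSON payloads on the topic
schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

# Read the raw event stream from Kafka (placeholder broker and topic)
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
)

# Parse the JSON value column and derive a partitioning date
events = (
    raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
    .withColumn("ingest_date", F.to_date("event_time"))
)

# Write to a lake path as date-partitioned Parquet with checkpointing
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/lake/transactions/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/transactions/")
    .partitionBy("ingest_date")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```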