MPower Plus

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer contract position in Tampa, FL; Plano, TX; or Jersey City, NJ, with a focus on building scalable data pipelines. Key skills include Big Data, Cloud technologies, DevOps, and experience with Hadoop and microservices.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 1, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Tampa, FL
-
🧠 - Skills detailed
#Kudu #Big Data #Data Pipeline #NoSQL #Pig #ETL (Extract, Transform, Load) #Hadoop #RDBMS (Relational Database Management System) #Logstash #Data Science #Docker #Spark (Apache Spark) #Deployment #HDFS (Hadoop Distributed File System) #Kafka (Apache Kafka) #Databases #Kubernetes #Scala #Data Engineering #REST API #Microservices #Automation #Elasticsearch #ML (Machine Learning) #Cloud #Data Modeling #DevOps #REST (Representational State Transfer) #Impala
Role description
Job Title: Data Engineer
Location: Tampa, FL | Plano, TX | Jersey City, NJ
Job Type: Contract

Job Summary
We are seeking a Data Engineer to design and build scalable data pipelines that ingest, cleanse, and standardize data for analytics and machine learning use cases. The role involves collaborating with Data Scientists and Data Modelers to operationalize ML models and deploy outputs via microservices, dashboards, reports, and automated notifications.

Key Responsibilities
• Build and maintain end-to-end data pipelines for ingestion, transformation, and standardization
• Deploy and operationalize machine learning models into production environments
• Enable model output delivery via microservices, dashboards, reporting, or email automation
• Ensure performance, scalability, and reliability of data workflows
• Collaborate with cross-functional teams, including Data Science and Data Modeling teams

Required Skills & Experience
• Big Data and Cloud technologies for building data pipelines
• Strong DevOps experience (CI/CD, source control, deployments)
• Containerization tools: Docker, Kubernetes
• Experience with tools such as Druid, Elasticsearch, and Logstash
• Strong knowledge of the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)
• Solid understanding of distributed systems, data structures, and algorithms
• Experience with microservices architecture
• Hands-on experience with REST APIs and authentication mechanisms
• Knowledge of RDBMS and NoSQL databases