Hydrogen Group

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer III in Houston, TX, on-site for a 5-month contract at $104-109/hr. Requires 5+ years in data engineering, expertise in Python, SQL, Apache Airflow, Kubernetes, and machine learning concepts.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
872
-
🗓️ - Date
January 6, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
1099 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Houston, TX
-
🧠 - Skills detailed
#Datasets #Python #NumPy #Deployment #Scala #Data Processing #Data Pipeline #ML (Machine Learning) #Data Analysis #AI (Artificial Intelligence) #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Kubernetes #Leadership #API (Application Programming Interface) #Pytest #Data Quality #Data Science #Data Engineering #Airflow #Pandas #Apache Airflow #Git
Role description
Data Engineer III
Houston, TX (on-site)
Duration: 5-month contract
Pay: $104-109/hr

We are seeking a Data Engineer to join a growing Data Engineering and Advanced Analytics team. In this role, you will work closely with data scientists and business stakeholders to solve complex, real-world problems using machine learning, data science, and artificial intelligence. This position plays a key role in establishing and advancing data engineering best practices across the organization. The ideal candidate brings strong technical expertise, a collaborative mindset, and the ability to provide technical leadership while building scalable, production-ready data solutions.

Must-Have Skills
• 5+ years of experience in data engineering or related roles
• Python (Pandas, NumPy, Pytest, Scikit-Learn)
• SQL
• Apache Airflow
• Kubernetes
• CI/CD pipelines
• Git
• Test-Driven Development (TDD)
• API development
• Working knowledge of machine learning concepts and workflows

Key Responsibilities
• Design, build, test, and maintain scalable data pipeline architectures
• Develop and support data-intensive applications and APIs
• Automate manual data processes to improve efficiency, scalability, and reliability
• Transform raw data into structured, actionable datasets through robust algorithms
• Operationalize analytical, statistical, and machine learning models in production environments
• Collaborate with data analysts and data scientists to enable automated data processing and deployment
• Implement data quality checks to ensure accuracy, consistency, and reliability across datasets
• Manage multiple analytics initiatives independently while supporting diverse business needs
...