Hydrogen Group

Data Engineer II

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer II in Juno Beach, FL, on a 12-month contract at $40-45/hr. Requires 3+ years of data engineering experience, proficiency in SQL and Python, and familiarity with cloud platforms. Experience with enterprise systems preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
360
-
🗓️ - Date
April 14, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Juno Beach, FL
-
🧠 - Skills detailed
#Data Integration #Data Lineage #XML (eXtensible Markup Language) #ETL (Extract, Transform, Load) #SAP #Data Quality #Data Ingestion #Classification #Documentation #AWS (Amazon Web Services) #Datasets #Azure #AI (Artificial Intelligence) #Data Architecture #Migration #Data Engineering #Cloud #GCP (Google Cloud Platform) #Data Modeling #Data Access #REST (Representational State Transfer) #BigQuery #SQL (Structured Query Language) #ML (Machine Learning) #JSON (JavaScript Object Notation) #Automation #Semantic Models #Storage #Python #Data Pipeline #Apache Beam #Data Mart #Compliance #dbt (data build tool) #Monitoring #Data Integrity #Dataflow #Scala #Data Migration #Batch
Role description
Data Engineer II
Juno Beach, FL (100% on-site)
Duration: 12-month contract (W2, not eligible for C2C)
Pay: $40-45/hr

The Data Engineer is a core member of the HR IT and Corporate Services technology organization, responsible for designing, building, and maintaining the data infrastructure that powers analytics, automation, and AI-enabled capabilities across the enterprise. This role operates at the intersection of HR and Corporate Services systems and a modern cloud data platform, ensuring that data from operational systems is clean, structured, governed, and reliable for downstream reporting, self-service analytics, and intelligent applications.

Key Responsibilities

Data Pipeline Development & Maintenance
• Design, build, and maintain scalable data pipelines that ingest and transform data from enterprise source systems (e.g., HR, finance, procurement, and service management platforms) into a cloud data platform (e.g., BigQuery, Cloud Storage, Pub/Sub, or equivalent)
• Ensure pipelines are reliable, observable, and resilient, with automated monitoring, alerting, and recovery mechanisms
• Support both batch and near-real-time data ingestion patterns

Semantic Layer & Data Modeling
• Define and maintain consistent business definitions for core enterprise entities such as employee, position, organizational unit, cost center, job classification, and related constructs
• Develop and maintain dimensional models and curated data marts for analytics, reporting, and AI consumption
• Resolve inconsistencies in data definitions across multiple source systems

AI & Automation Enablement
• Structure and prepare datasets to support AI use cases, including Retrieval-Augmented Generation (RAG) and LLM grounding
• Collaborate with architects and AI teams to define stable, well-governed data contracts between data pipelines and AI models
• Enable automation and AI-driven workflows by delivering clean, contextual, and well-structured data for enterprise agents and digital assistants

Data Quality & Governance
• Implement data quality rules, validation frameworks, and monitoring across the data ecosystem
• Proactively identify and resolve data integrity issues before they impact reporting or AI outputs
• Support compliance, auditability, and data lineage documentation requirements
• Partner with business data owners to enforce data standards and governance practices

Analytics & Self-Service Enablement
• Build curated datasets and semantic models that enable self-service analytics for business stakeholders
• Translate business requirements into reusable, scalable data products
• Collaborate with analytics and reporting teams to ensure consistent and trusted data access

Platform & Integration Support
• Work with integration teams to understand and manage data contracts across system boundaries
• Contribute to data architecture decisions within broader transformation and modernization initiatives
• Support data migration and system transformation efforts, including updates to pipelines and data models as systems evolve

Required Skills & Experience

Technical
• 3+ years of experience in data engineering, data integration, or related roles
• Strong proficiency in SQL and Python for data transformation and pipeline development
• Experience with cloud data platforms (Google Cloud Platform preferred; Azure or AWS acceptable)
• Familiarity with data pipeline and transformation tools (e.g., Dataflow, Apache Beam, dbt, or similar)
• Understanding of APIs and data formats such as REST, JSON, XML, and flat files
• Knowledge of data warehousing concepts, dimensional modeling, and semantic layer design

Domain
• Experience working with enterprise systems (e.g., HR, finance, procurement, or service management platforms) preferred
• Ability to translate business requirements into technical data solutions

Mindset
• Treats data as a product, focusing on usability, reliability, and long-term maintainability
• Comfortable working with complex and inconsistent source systems
• Proactive approach to identifying and resolving data quality issues

Preferred Skills
• Experience with integration platforms or middleware (e.g., SAP CPI or equivalent)
• Familiarity with dbt for transformation and semantic modeling
• Exposure to AI/ML data preparation, including RAG and LLM grounding concepts
• Experience in regulated environments where auditability and compliance are important
• Cloud data engineering certification (or working toward one), such as Google Cloud Professional Data Engineer