Abacus Service Corporation

Data Engineer — HRIT

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer — HRIT in Juno Beach, FL, on a 12-month W2 contract (rate undisclosed). Requires 3+ years in data engineering, proficiency in SQL and Python, and experience with Google Cloud Platform. HR system exposure preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 14, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Juno Beach, FL
-
🧠 - Skills detailed
#Data Integration #XML (eXtensible Markup Language) #ETL (Extract, Transform, Load) #REST API #SAP #Data Quality #AWS (Amazon Web Services) #Azure #AI (Artificial Intelligence) #Data Engineering #Cloud #GCP (Google Cloud Platform) #REST (Representational State Transfer) #BigQuery #SQL (Structured Query Language) #ML (Machine Learning) #JSON (JavaScript Object Notation) #Automation #Python #Data Pipeline #Apache Beam #dbt (data build tool) #Dataflow
Role description
Job Title: Data Engineer — HRIT
Location: Juno Beach, FL (on-site)
Duration: 12-month contract
Eligibility: Local candidates only; W2

Description
• The Data Engineer is a foundational role within the HR IT and Corporate Services IT organization, responsible for building and sustaining the data infrastructure that powers analytics, automation, and AI-driven capabilities across the enterprise. This role sits at the intersection of HR/Corporate Services systems and the Google Cloud Platform data stack, ensuring that data flowing from operational platforms is clean, structured, governed, and ready to support intelligent applications.
• As the organization pursues its AI activation agenda, including Gemini Enterprise, Vertex AI, and the HR Services 2027 initiative, the Data Engineer ensures the semantic layer above core systems remains coherent and trustworthy. Without this foundation, automation becomes brittle and AI outputs become unreliable.

Required Skills & Experience
• 3+ years of experience in data engineering, data integration, or a closely related role
• Proficiency in SQL and Python for data transformation and pipeline development
• Experience with cloud data platforms; Google Cloud Platform (BigQuery) preferred, Azure or AWS considered
• Familiarity with data pipeline frameworks (e.g., Apache Beam, dbt, Dataflow, or equivalent)
• Working knowledge of REST APIs and data exchange patterns (JSON, XML, flat file)
• Understanding of dimensional modeling, data warehousing concepts, and semantic layer design

Domain
• Exposure to HR or enterprise business systems (SAP SuccessFactors, ServiceNow, SAP S/4HANA, or similar) strongly preferred
• Ability to work with business stakeholders to translate data needs into technical requirements

Mindset
• Treats data as a product: thinks about consumers, reliability, and usability, not just pipeline execution
• Comfortable operating in an environment where source systems are complex and definitions are inconsistent
• Proactive about data quality: finds breaks before users do

Preferred Skills
• Experience with SAP CPI or other middleware/integration platforms
• Familiarity with dbt for data transformation and semantic modeling
• Exposure to LLM grounding, RAG patterns, or AI/ML data preparation
• Experience supporting regulated industries (energy, finance, healthcare) where auditability matters
• Google Cloud Professional Data Engineer certification (or in progress)