Mondo

Operational Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Operational Data Engineer on a 6-month contract, hybrid in Egg Harbor, NJ, paying $65-70/hour. Requires 5+ years in data engineering, strong Azure Databricks and SQL skills, and experience with CI/CD practices.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
560
🗓️ - Date
February 26, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Egg Harbor, NJ
🧠 - Skills detailed
#Data Engineering #Data Ingestion #Storage #ADLS (Azure Data Lake Storage) #Airflow #Automation #Cloud #Documentation #Batch #DevOps #Azure Data Factory #Data Pipeline #Python #ADF (Azure Data Factory) #Strategy #Azure Databricks #Data Quality #GitHub #AI (Artificial Intelligence) #Observability #Data Lineage #Azure ADLS (Azure Data Lake Storage) #Azure SQL #SQL (Structured Query Language) #Deployment #Data Architecture #Data Lake #Microsoft Power BI #Leadership #ChatGPT #Databricks #Computer Science #Monitoring #Visualization #Azure #Scala #Vault #Power Automate #ML (Machine Learning) #ETL (Extract, Transform, Load) #Logging #BI (Business Intelligence) #Data Modeling #Datasets
Role description
Job Title: Operational Data Engineer
Location Type: Hybrid (Egg Harbor, NJ)
Start Date: ASAP (beginning of March)
Duration: 6-month contract (option to extend)
Compensation Range: $65-70/hour W2

Responsibilities

As an Operational Data Engineer, you will play a key role in designing, building, and operating highly reliable operational data platforms that support business-critical systems and near-real-time workflows. This role emphasizes data availability, resiliency, observability, and cross-system integration, while providing technical leadership within the Operational Data team.

Key responsibilities include:
• Lead the design, development, and support of operational data pipelines serving systems such as eCommerce, OMS, WMS, 3PL, integrations, and other operational platforms
• Architect and implement incremental, CDC-based, event-driven, and batch data pipelines to support downstream consumers including Databricks, operational dashboards, analytics platforms, and business applications
• Partner closely with Integration teams, BI/Data Engineering, QA, Platform, and business stakeholders to define operational data contracts, SLAs, and reliability expectations
• Own and evolve the data consumption layer that enables multiple applications and platforms to consume consistent and trusted operational data
• Proactively monitor pipelines and jobs, perform deep root-cause analysis of failures, performance degradation, and data quality issues, and drive long-term remediation
• Define and implement data quality, reconciliation, validation, and control frameworks for operational datasets
• Lead CI/CD strategy and implementation for operational data workflows, including environment promotion, rollback, and deployment automation
• Create and maintain comprehensive operational documentation, including architecture diagrams, data lineage, runbooks, and support playbooks
• Establish and promote best practices around performance tuning, scalability, resiliency, observability, and error handling
• Mentor and guide junior engineers, providing technical direction, design reviews, and best-practice coaching
• Participate in or lead production support and on-call rotations for business-critical operational data workloads
• Collaborate on initiatives involving seasonal readiness, peak-load preparation, and operational hardening

Required:
• Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience
• 5+ years of experience in data engineering, operational data platforms, integrations, or production data systems
• Strong hands-on experience with Azure Databricks, Azure Data Factory, Azure Data Lake Storage (ADLS), Azure SQL / SQL Database, and Azure Key Vault
• Advanced proficiency in SQL and Python, including performance tuning and troubleshooting in production environments
• Strong experience with SQL database platforms in cloud environments (Azure SQL, managed SQL services, performance optimization)
• Strong experience with incremental loading, CDC patterns, operational data modeling, and large-scale data ingestion
• Solid understanding of cloud-native data architectures and distributed systems
• Experience with DevOps and CI/CD practices, including source control, automated deployments, and environment management
• Hands-on experience with job orchestration and scheduling tools (e.g., Airflow, VisualCron, or equivalent)
• Strong communication skills with the ability to collaborate across technical and business teams
• Proven ability to operate effectively in a high-availability, operationally critical environment

Nice to Have:
• Experience building or supporting Power Automate workflows for operational automation and integrations
• Experience defining and maintaining data lineage across operational and analytical data platforms
• Familiarity with event-driven architectures, messaging systems, or streaming platforms
• Experience with monitoring, logging, and alerting frameworks for data platforms
• Exposure to Power BI or other visualization tools for operational insights
• Experience using AI-assisted development tools such as Microsoft Copilot, GitHub Copilot, Databricks Assistant, or ChatGPT
• Basic understanding of AI/ML concepts and how operational data supports ML workflows

Benefits:
• This role is eligible to enroll in both Mondo's health insurance plan and retirement plan. Mondo defers to the applicable State or local law for paid sick leave eligibility.
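For candidates unfamiliar with the incremental-loading and watermark patterns the role calls for, here is a minimal sketch of watermark-based incremental extraction in Python. It uses an in-memory SQLite database as a stand-in for Azure SQL; the `orders` table, column names, and dates are illustrative assumptions, not details from this posting.

```python
import sqlite3

def incremental_extract(conn, last_watermark):
    """Fetch only rows changed since the last successful load.

    The watermark is the max change timestamp seen so far; persisting it
    between runs is what makes each load incremental rather than full.
    """
    rows = conn.execute(
        "SELECT id, status, updated_at FROM orders "
        "WHERE updated_at > ? ORDER BY updated_at",
        (last_watermark,),
    ).fetchall()
    # Advance the watermark only if new rows were seen.
    new_watermark = rows[-1][2] if rows else last_watermark
    return rows, new_watermark

# Hypothetical source table standing in for an operational system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, updated_at TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "shipped", "2026-02-01"),
     (2, "packed", "2026-02-10"),
     (3, "new", "2026-02-20")],
)

# Only rows updated after the stored watermark are pulled.
rows, wm = incremental_extract(conn, "2026-02-05")
```

In production this same idea is typically expressed as a Databricks/ADF pipeline reading CDC feeds, with the watermark stored durably (e.g., in a control table) so reruns are idempotent.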