Reqroute, Inc

Data Engineering

⭐ - Featured Role
This role is for a Data Engineer in Columbus, Ohio, with a 6-month contract at a pay rate of "X". Requires 5+ years of experience in data engineering; proficiency in Python, SQL, and Apache Spark; and knowledge of data governance frameworks. Hybrid work model.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
April 22, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Columbus, OH
-
🧠 - Skills detailed
#Data Layers #Alation #Batch #Monitoring #Data Vault #ETL (Extract, Transform, Load) #PySpark #SQL (Structured Query Language) #Scala #Delta Lake #Cloud #Data Pipeline #Azure #Vault #Data Engineering #Data Governance #Apache Spark #ADF (Azure Data Factory) #GDPR (General Data Protection Regulation) #Terraform #Automated Testing #Data Catalog #Agile #Python #GIT #AI (Artificial Intelligence) #ML (Machine Learning) #Observability #Dimensional Modelling #Data Processing #Compliance #Data Lake #Requirements Gathering #Azure Data Factory #Deployment #DataOps #Apache Iceberg #Data Quality #Snowflake #Spark (Apache Spark) #Version Control #Datasets
Role description
Data Engineer – Data Engineering
Location: Columbus, Ohio (On-site / Hybrid)

Role Summary
We are seeking a skilled Data Engineer with 5+ years of hands-on experience building and maintaining robust data pipelines, data lakes, and analytical platforms. This is an individual contributor role based in Ohio, engaging directly with business stakeholders across lululemon’s Ohio-based operations. The successful candidate will independently own end-to-end data engineering deliverables, translating business requirements into scalable, production-grade data solutions that power analytics and AI/ML workloads.

Key Responsibilities
• Design, build, and maintain scalable ETL/ELT pipelines using Python, Apache Spark, and Azure Data Factory to ingest data from diverse retail and operational sources into a centralised data lake (Microsoft Fabric / OneLake)
• Engage directly with Ohio-based business teams (supply chain, store operations, finance, and merchandising) to gather data requirements, understand domain logic, and translate business needs into well-defined data models and pipeline specifications
• Independently own the full data engineering lifecycle for assigned domains, from requirements gathering and data modelling through to pipeline deployment, monitoring, and ongoing optimisation
• Build and manage Bronze, Silver, and Gold data layers in the lakehouse architecture, applying data quality checks, schema validation, and partitioning strategies to ensure reliable, performant datasets for downstream analytics and ML teams
• Participate actively in agile ceremonies (sprint planning, stand-ups, retrospectives), self-manage delivery against sprint commitments, and proactively surface risks or blockers without requiring escalation
• Implement and enforce data quality frameworks, lineage tracking, and cataloguing standards using Microsoft Purview, ensuring datasets meet governance and compliance requirements (GDPR, CCPA)
• Support and contribute to Global Fulfillment and supply chain data initiatives, acting as the primary data engineering liaison for Ohio-based operational teams and ensuring timely delivery of data products that enable real-time decision-making
• Stay current with emerging data engineering tools, patterns (e.g. data mesh, streaming architectures), and Microsoft Fabric capabilities; apply relevant advancements to continuously improve the data platform

Qualifications
• 5+ years of hands-on experience as a Data Engineer, with a proven track record of independently delivering production-grade data pipelines and data products in a cloud-based environment
• Proficiency in Python and SQL for data transformation, with hands-on experience using Apache Spark (PySpark) for large-scale batch and streaming data processing
• Solid understanding of data modelling concepts (dimensional modelling, star/snowflake schema, data vault) and experience building lakehouse architectures with Delta Lake or Apache Iceberg
• Demonstrated ability to work directly with non-technical business stakeholders: gathering requirements, explaining data concepts in plain language, and iterating quickly on feedback to deliver business value
• Experience with version control (Git), CI/CD pipelines, and DataOps practices, including automated testing of data pipelines and Infrastructure-as-Code (Terraform or Bicep)
• Familiarity with data governance frameworks, data cataloguing (Microsoft Purview or Apache Atlas), and implementing data quality rules and observability monitoring within pipelines
• Strong analytical mindset, attention to detail, and a self-starter attitude, comfortable driving work forward independently in a fast-paced retail technology environment with minimal day-to-day supervision