Enzo Tech Group

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a lucrative contract-to-hire basis, 100% remote. Requires 5+ years in Data Engineering, expertise in AWS Glue, Spark, and experience with regulated healthcare datasets. Pay rate is competitive.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Data Quality #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Spark (Apache Spark) #Data Engineering #Terraform #Data Lake #Datasets #Amazon Redshift #PySpark #Scala #ETL (Extract, Transform, Load) #AWS Glue #SQL (Structured Query Language) #Redshift #Data Catalog #Python
Role description
100% Remote | Lucrative Contract-To-Hire

If you've built real, production-grade AWS Glue pipelines, not just experimented with them, this is the kind of role that lets you own them properly. We're partnering with a growing Healthcare / PBM (Pharmacy Benefit Management) organization investing heavily in its AWS data platform. Glue is the backbone. This isn't maintenance work; it's platform-level engineering handling large-scale pharmacy and claims data in a secure, HIPAA-aware environment.

What You'll Be Doing:
• Architect and optimize AWS Glue ETL pipelines (Jobs, Crawlers, Workflows, Data Catalog)
• Build and maintain scalable S3 data lake structures
• Design and tune Redshift warehouse environments
• Implement efficient partitioning and performance optimization strategies
• Handle large-scale pharmacy and claims datasets
• Drive data quality, governance, and reliability standards
• Partner with analytics, clinical, and product stakeholders to deliver trusted data assets
• Influence architecture decisions and long-term platform direction

Core Tech Stack:
• AWS Glue (Jobs, Crawlers, Workflows, DynamicFrames)
• PySpark / Spark
• Amazon S3
• Amazon Redshift
• Python & SQL
• Infrastructure-as-Code exposure (CDK or Terraform a strong plus)

What We're Looking For:
• 5+ years in Data Engineering
• Proven experience building production-level Glue pipelines
• Strong Spark/PySpark capability
• Experience working with large, regulated datasets
• Background in Healthcare, PBM, payer, or insurance data environments preferred
• Comfortable owning architecture, not just executing tickets