Enzo Tech Group

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include strong Python and SQL expertise, experience with AWS, and building ETL pipelines. Familiarity with streaming data and regulated environments is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Scala #Snowflake #SQL (Structured Query Language) #Datasets #AWS (Amazon Web Services) #Airflow #Athena #Cloud #ML (Machine Learning) #AWS S3 (Amazon Simple Storage Service) #Logging #Databases #dbt (data build tool) #Redshift #Lambda (AWS Lambda) #Data Quality #Data Pipeline #Automation #Kafka (Apache Kafka) #Monitoring #Data Processing #Data Engineering #S3 (Amazon Simple Storage Service) #Python #ETL (Extract, Transform, Load)
Role description
A fast-moving, product-focused team is looking for a Data Engineer who knows how to move, shape, and scale data properly. This role sits close to engineering, analytics, and product, meaning your work directly powers decision-making and data-driven products. You'll be responsible for building robust ETL pipelines, working deep within AWS, and using Python to turn raw data into reliable datasets that teams across the business can trust.
What You'll Be Doing:
• Design, build, and maintain scalable ETL / ELT pipelines
• Ingest data from multiple sources including APIs, databases, files, and streaming platforms
• Optimize pipelines for performance, reliability, and cost efficiency
• Work closely with stakeholders to translate business needs into scalable data solutions
• Ensure data quality, monitoring, and logging are embedded within pipelines
• Support analytics, reporting, and downstream data products
Core Tech Stack:
• AWS: S3, Glue, Lambda, Redshift, Athena, EMR (or similar)
• Python: Data processing, orchestration, automation
• ETL / ELT: Glue, Airflow, CDK, dbt, or custom frameworks
• Databases: SQL-based systems (Redshift, Postgres, Snowflake)
What We're Looking For:
• Strong hands-on experience working as a Data Engineer
• Solid Python development for data pipelines and automation
• Real-world experience building and maintaining ETL pipelines
• Strong SQL skills and data modelling fundamentals
• Comfortable working in a cloud-first AWS environment
Nice to Have:
• Experience working with streaming data (Kafka or Kinesis)
• Familiarity with CI/CD for data pipelines
• Exposure to analytics or machine learning workloads
• Experience working in regulated environments (e.g., healthcare or fintech)
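For a sense of the kind of work described above, here is a minimal, hypothetical Python sketch of an extract-transform-load step with data-quality checks and logging embedded, as the listing asks for. All function and field names are illustrative assumptions, not part of the role or any real codebase; in practice the extract and load steps would talk to sources like S3 or Redshift rather than in-memory lists.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl_sketch")


def extract(rows):
    """Extract step: yield raw records (stand-in for an API, file, or S3 read)."""
    for row in rows:
        yield row


def transform(record):
    """Normalize a raw record; return None for rows failing quality checks."""
    if record.get("user_id") is None:
        # Data-quality check embedded in the pipeline, with logging.
        log.warning("dropping record with missing user_id: %r", record)
        return None
    return {
        "user_id": int(record["user_id"]),
        "amount_usd": round(float(record.get("amount", 0)), 2),
    }


def load(records, sink):
    """Load step: append cleaned records to a sink (stand-in for a warehouse table)."""
    count = 0
    for rec in records:
        sink.append(rec)
        count += 1
    log.info("loaded %d records", count)
    return count


def run_pipeline(raw, sink):
    """Wire the three stages together; bad rows are filtered, good rows loaded."""
    cleaned = (t for t in (transform(r) for r in extract(raw)) if t is not None)
    return load(cleaned, sink)
```

In a production setting the same shape would typically be expressed as Airflow tasks or Glue jobs, with the quality checks and metrics shipped to a monitoring system.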