Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position on a 6-month contract, paying up to $70/hr W2. It requires 4-5 years of AWS data engineering experience, advanced Python skills, and expertise in lakehouse architectures and Apache Iceberg. The role is remote, with candidates based in Minneapolis, MN or Denver, CO.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
560
🗓️ - Date discovered
September 20, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Greater Minneapolis-St. Paul Area
🧠 - Skills detailed
#Slowly Changing Dimensions #Cloud #ETL (Extract, Transform, Load) #DataOps #Documentation #SQL (Structured Query Language) #S3 (Amazon Simple Storage Service) #Scala #Python #Data Catalog #AWS (Amazon Web Services) #Data Layers #Data Engineering #Terraform #Apache Iceberg #Delta Lake #Argo #Data Quality #Tableau #Athena #Data Pipeline
Role description
6-month contract, W2 only. Up to $70/hr W2, DOQ (depending on qualifications). Remote: Minneapolis, MN or Denver, CO.

We are seeking Senior Data Engineers with deep expertise in building scalable, cloud-native data platforms on AWS. This is a hands-on engineering role focused on designing and implementing modern lakehouse architectures using AWS managed services, open table formats (Apache Iceberg), and compute running in our EKS / Argo Workflows environments.

Advanced Python Engineering Skills
• Strong proficiency in Python for data engineering tasks.
• Experience with modular, testable code and production-grade pipelines (see the pipeline sketch below).
• Not looking for SQL-heavy DBAs or analysts; this is a software engineering role.

AWS Lakehouse Architecture Expertise
• Proven experience designing and implementing lakehouse architectures on AWS.
• Experience with key AWS services: S3, Glue, Athena, Glue Data Catalog, Lake Formation, QuickSight, CloudWatch, etc.
• Experience with AWS QuickSight (preferred), Tableau, or Cognos.

ETL Pipeline Development
• Bonus: experience with EKS-based orchestration using EMR on EKS or Argo Workflows.

Open Table Formats
• Deep understanding of Apache Iceberg (preferred), Delta Lake, or Apache Hudi.
• Experience implementing time-travel, schema evolution, and partitioning strategies (see the Iceberg sketch below).

Medallion Architecture Implementation
• Experience designing and implementing Bronze, Silver, and Gold data layers (see the medallion sketch below).
• Understanding of ingestion, transformation, and curation best practices.

Slowly Changing Dimensions (SCD Type 2)
• Solid grasp of SCD2 semantics and implementation strategies in distributed data systems (see the merge sketch below).

General
• Strong communication and documentation skills.
• Ability to work independently and collaborate with cross-functional teams, including tech leads, architects, and product managers.

Degree or Certification required?
• None

Years of Experience?
• 4-5 years within data engineering / AWS

Nice to Haves?
• Experience with DataOps practices and CI/CD for data pipelines.
• Familiarity with Terraform or CloudFormation for infrastructure-as-code.
• Exposure to data quality frameworks like Deequ or Great Expectations (see the data-quality sketch below).
• Undergraduate degree
• Iceberg on AWS
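To make the "modular, testable code" bar concrete, here is a minimal sketch: a pure transformation function plus a pytest-style unit test that runs with no cluster or AWS access. All names (`Order`, `normalize_orders`) are hypothetical illustrations, not part of the posting.

```python
# Hypothetical example: a pure, unit-testable transformation step.
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class Order:
    order_id: str
    amount_cents: int
    order_date: date


def normalize_orders(raw_rows: list[dict]) -> list[Order]:
    """Parse raw ingest rows into typed records, dropping malformed ones."""
    orders = []
    for row in raw_rows:
        try:
            orders.append(Order(
                order_id=str(row["order_id"]),
                amount_cents=round(float(row["amount"]) * 100),
                order_date=date.fromisoformat(row["order_date"]),
            ))
        except (KeyError, ValueError):
            continue  # in a real pipeline: quarantine and log the bad row
    return orders


def test_normalize_orders_drops_malformed_rows():
    rows = [
        {"order_id": 1, "amount": "19.99", "order_date": "2025-09-20"},
        {"order_id": 2, "amount": "not-a-number", "order_date": "2025-09-20"},
    ]
    result = normalize_orders(rows)
    assert [o.order_id for o in result] == ["1"]
    assert result[0].amount_cents == 1999
```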
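The Iceberg features named above (time-travel, schema evolution, partition evolution) look roughly like this in Spark SQL. A sketch only: it assumes a `SparkSession` (`spark`) with the Iceberg runtime and SQL extensions enabled, and a Glue-backed catalog registered as `glue_catalog`; the table names and timestamp are hypothetical.

```python
# Assumes "spark" is a SparkSession configured with the Iceberg runtime,
# Iceberg's SQL extensions, and a catalog named glue_catalog (hypothetical).

# Time-travel: query the table as of an earlier point in time.
spark.sql("""
    SELECT * FROM glue_catalog.sales.orders
    TIMESTAMP AS OF '2025-09-01 00:00:00'
""")

# Schema evolution: add a column without rewriting existing data files.
spark.sql("ALTER TABLE glue_catalog.sales.orders ADD COLUMN channel STRING")

# Partition evolution: change the partition spec going forward;
# previously written files keep the old spec.
spark.sql("""
    ALTER TABLE glue_catalog.sales.orders
    ADD PARTITION FIELD days(order_ts)
""")
```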
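A compressed Bronze → Silver → Gold (medallion) sketch under the same assumptions; every path and table name is hypothetical, and the Bronze table is assumed to already exist.

```python
from pyspark.sql import functions as F

# Bronze: land the raw feed as-is, tagged with ingest metadata.
raw = spark.read.json("s3://example-bucket/landing/orders/")  # hypothetical path
(raw.withColumn("_ingested_at", F.current_timestamp())
    .writeTo("glue_catalog.bronze.orders").append())

# Silver: deduplicate, conform types, and filter out bad records.
bronze = spark.table("glue_catalog.bronze.orders")
silver = (bronze
          .dropDuplicates(["order_id"])
          .withColumn("order_ts", F.to_timestamp("order_ts"))
          .filter(F.col("amount") >= 0))
silver.writeTo("glue_catalog.silver.orders").createOrReplace()

# Gold: business-level aggregate for analytics / BI tools.
gold = (silver.groupBy(F.to_date("order_ts").alias("order_date"))
              .agg(F.sum("amount").alias("daily_revenue")))
gold.writeTo("glue_catalog.gold.daily_revenue").createOrReplace()
```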
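One common SCD Type 2 strategy on an open table format is a two-step merge: close out the current row when tracked attributes change, then insert the new version as current. A hedged sketch, again assuming Iceberg's Spark SQL extensions; the `dim_customer` / `stg_customer` tables and their `attr_hash`, `valid_from` / `valid_to`, and `is_current` columns are hypothetical.

```python
# Step 1: close out current dimension rows whose attributes changed.
spark.sql("""
    MERGE INTO glue_catalog.silver.dim_customer AS t
    USING glue_catalog.staging.stg_customer AS s
    ON t.customer_id = s.customer_id AND t.is_current = true
    WHEN MATCHED AND t.attr_hash <> s.attr_hash
      THEN UPDATE SET t.is_current = false, t.valid_to = s.extracted_at
""")

# Step 2: insert a fresh "current" row for new and changed customers
# (after step 1, neither group has a row with is_current = true).
spark.sql("""
    INSERT INTO glue_catalog.silver.dim_customer
    SELECT s.customer_id, s.name, s.segment, s.attr_hash,
           s.extracted_at AS valid_from,
           TIMESTAMP '9999-12-31 00:00:00' AS valid_to,
           true AS is_current
    FROM glue_catalog.staging.stg_customer s
    LEFT JOIN glue_catalog.silver.dim_customer t
      ON t.customer_id = s.customer_id AND t.is_current = true
    WHERE t.customer_id IS NULL
""")
```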
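Deequ and Great Expectations APIs vary by version, so rather than pin one, here is a library-free sketch of the kind of completeness and uniqueness checks those frameworks codify, used to gate a write; the table and column names are hypothetical.

```python
from pyspark.sql import functions as F


def run_quality_checks(df):
    """Return a dict of check name -> pass/fail, in the spirit of
    Deequ / Great Expectations completeness and uniqueness checks."""
    total = df.count()
    return {
        "order_id_complete": df.filter(F.col("order_id").isNull()).count() == 0,
        "order_id_unique": df.select("order_id").distinct().count() == total,
        "amount_non_negative": df.filter(F.col("amount") < 0).count() == 0,
    }


results = run_quality_checks(spark.table("glue_catalog.silver.orders"))
failed = [name for name, ok in results.items() if not ok]
if failed:
    raise ValueError(f"Quality checks failed: {failed}")  # fail the pipeline run
```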