
AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer with 7+ years of data engineering experience and 3+ years of hands-on experience with AWS, Spark, ETL, and DevOps. Contract length and pay rate are unspecified; the detailed location is Houston, TX.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 25, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Houston, TX
-
🧠 - Skills detailed
#Data Lake #Athena #Cloud #Data Pipeline #Databricks #Libraries #Scripting #Spark (Apache Spark) #Data Engineering #Agile #DevOps #Python #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Scrum
Role description
• • NO C2C • •

Data Engineer – AWS / Spark / Python

About the Role

We’re looking for an AWS Data Engineer to join an agile team building and optimizing data pipelines that support large‑scale energy and power data platforms. This role blends data engineering with strong software engineering practices.

What You’ll Do

• Build and optimize ETL pipelines for power and electricity data.
• Develop and tune Spark jobs using Databricks.
• Work in AWS using tools such as Glue Catalogs and Athena.
• Write production‑quality Python code (packages, shared libraries, SDKs) in a team‑based software engineering environment.
• Support key initiatives, including MDP ITRON integration and CI/CD pipeline refactoring.
• Participate as an active member of a Scrum team, contributing to design, development, and technical QA.

What You’ll Bring

• 7+ years of experience in data engineering, systems implementation, or technical architecture.
• 3+ years of hands‑on experience with:
   • AWS cloud architecture
   • Spark / Databricks
   • ETL pipelines and data lake or warehouse architectures
   • DevOps and CI/CD pipelines
• Strong Python software development experience (beyond scripting).
• Experience working in Agile/Scrum environments.