

Wave Talent
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (AWS) on a ~12-month contract, remote (US-based). Pay rate is $80–$100/hr. Key skills include AWS data lakes, large-scale datasets, and AI tools. Healthcare experience and HIPAA knowledge are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
800
-
🗓️ - Date
February 20, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Florida, United States
-
🧠 - Skills detailed
#Datasets #AWS (Amazon Web Services) #ETL (Extract, Transform, Load) #Scala #Data Engineering #Data Lake #S3 (Amazon Simple Storage Service) #AI (Artificial Intelligence) #Cloud
Role description
🚀 Senior Data Engineer (AWS)
📌 Contract Details
📍 Location: Remote (US-Based) — Tampa Bay (St. Pete/Tampa)
🕒 Hours: 30–40 hrs/week once ramped
⏳ Length: ~12-month contract (long-term engagement)
💰 Pay Rate: $80–$100/hr depending on seniority
We’re hiring a senior-level, AI-forward Data Engineer to help architect and scale a large healthcare data lake in AWS.
This is not an old-school ETL role. We’re looking for a modern engineer who actively leverages AI tools in their daily workflow to accelerate development, pipeline design, and complex data transformation work.
🏗️ What You’ll Be Building
• 📥 Ingest large healthcare data dumps into AWS (S3)
• 🧱 Architect and scale a centralized data lake
• 🔁 Build and maintain pipelines for:
  • 🕵️‍♂️ De-identification + expert determination workflows
  • 🥇 Creating a unified “gold copy” across datasets
• 📦 Enable downstream commercial data products
• 📊 Operate in very large-scale environments (up to 2 PB)
• 🧠 Work with complex, text-heavy healthcare datasets (clinical + patient text fields)
You’ll start with 3 datasets and help scale to 350+ datasets.
🔍 What We’re Looking For
✅ Must-Have
• Strong Data Engineering background in modern cloud architectures
• Deep experience building data lakes in AWS (S3 required)
• Comfortable working with large-scale datasets (TB → PB scale)
• Proven experience designing scalable ingestion + transformation pipelines
• AI-forward workflow: actively using tools such as Cursor, Copilot, or Claude in day-to-day development
⭐ Nice-to-Have
• Healthcare, health-tech, or HR systems experience
• Experience working with HIPAA/PHI datasets
• Background in de-identification or sensitive data handling
🌟 Who Will Thrive Here?
• 🛠️ Engineers who want to build new systems, not maintain legacy pipelines
• ⚡ AI-native developers who use modern tools to move faster
• 🧩 Data engineers comfortable with ambiguity and scale
• 🧠 Builders who want to shape architecture early
If you’re excited about scaling a healthcare data lake from 3 to 350+ datasets and building foundational infrastructure for commercial data products — let’s connect. 🤝
