

FUSTIS LLC
AWS Data Engineer (Healthcare) - W2 Only
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer (Healthcare) on a 6-month contract, paying $70-$75 per hour on W2. Remote work is available, with a focus on Snowflake, dbt, SQL, and Python. Requires 6-8 years of data pipeline experience, preferably in healthcare.
Country: United States
Currency: $ USD
Day rate: 600
Date: January 31, 2026
Duration: More than 6 months
Location: Remote
Contract: W2 Contractor
Security: Unknown
Location detailed: United States
Skills detailed: #dbt (data build tool) #Cloud #Data Quality #Kubernetes #BI (Business Intelligence) #Data Engineering #S3 (Amazon Simple Storage Service) #Tableau #Airflow #Scala #GitHub #Docker #Python #Data Modeling #AWS (Amazon Web Services) #Databases #IAM (Identity and Access Management) #Jenkins #Kafka (Apache Kafka) #Terraform #Snowflake #Data Lake #Spark (Apache Spark) #Data Pipeline #Matillion #Monitoring #R #Redshift #Looker #Complex Queries #Data Architecture #Data Warehouse #ETL (Extract, Transform, Load) #Apache Airflow #SQL (Structured Query Language) #AI (Artificial Intelligence) #RDS (Amazon Relational Database Service) #Data Integrity
Role description
Job Role: Data Engineer
Location: Remote (NYC, NY)
Work Auth: USC/GC only
Pay rate: $70-$75 per hour, W2 max
Duration: 6 months
Interview Process/# of Rounds: 2 rounds via Teams video
NOTE:
This is a general operational engineering position rather than one tied to a specific project.
Hands-on from day one with Snowflake, dbt, SQL, and Python; all four are non-negotiable (system-design or architect-level experience is not necessary). A minimal hands-on sketch follows this note.
A roadmap is already in place to follow; healthcare industry experience (e.g., understanding insurance claims data) is a big plus, since it lets a candidate ramp up quickly and boosts the chances of an offer.
Should understand end-to-end data pipeline structuring and be able to wear multiple hats.
ET or CT preferred, but open to PT; most of the team is in ET/CT.
The interview process (2nd round) is lighter on system design.
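To make the hands-on expectation concrete, the sketch below shows the kind of day-to-day Snowflake work involved, using the snowflake-connector-python package. The account, credentials, table, and column names are placeholders for illustration, not details from this posting.

import snowflake.connector  # pip install snowflake-connector-python

# Placeholder connection details -- supplied by the client, not part of this posting.
conn = snowflake.connector.connect(
    account="your_account",
    user="your_user",
    password="your_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="CLAIMS",
)

# Example of the kind of ad hoc SQL a hands-on engineer runs daily:
# row counts per load date for a hypothetical claims staging table.
cur = conn.cursor()
cur.execute(
    """
    SELECT load_date, COUNT(*) AS row_count
    FROM stg_insurance_claims
    GROUP BY load_date
    ORDER BY load_date DESC
    LIMIT 7
    """
)
for load_date, row_count in cur.fetchall():
    print(load_date, row_count)

cur.close()
conn.close()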
What You'll Do:
• Implement a data architecture and infrastructure that aligns with business objectives; collaborate closely with Application Engineers and Product Managers to ensure the technical infrastructure robustly supports client requirements
• Create ETL and data pipeline solutions for efficient loading of data into the warehouse, and test them to ensure reliability and optimal performance
• Collect, validate, and provide high-quality data, ensuring data integrity (see the sketch after this list)
• Champion data democratization efforts, facilitating access to data for relevant stakeholders
• Guide the team on technical best practices and contribute substantially to the architecture of our systems
• Support operational work such as onboarding new customers to our data products and participating in the team's on-call rotation
• Engage with cross-functional teams, including Solutions Delivery, Business Intelligence, Predictive Analytics, and Enterprise Services, to address and support any data-related technical issues or requirements
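As an illustration of the validation and data-integrity work above, here is a minimal post-load check written against any DB-API cursor (for example, a Snowflake cursor). The table name, key column, and checks are hypothetical, not a prescribed framework.

def check_table_health(cursor, table_name, key_column):
    """Run simple post-load integrity checks and return a list of failure messages."""
    failures = []

    # The table should not be empty after a load.
    cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
    if cursor.fetchone()[0] == 0:
        failures.append(f"{table_name}: no rows loaded")

    # The business key should never be NULL.
    cursor.execute(f"SELECT COUNT(*) FROM {table_name} WHERE {key_column} IS NULL")
    null_keys = cursor.fetchone()[0]
    if null_keys:
        failures.append(f"{table_name}: {null_keys} rows with NULL {key_column}")

    # The business key should be unique.
    cursor.execute(f"SELECT COUNT(*) - COUNT(DISTINCT {key_column}) FROM {table_name}")
    duplicates = cursor.fetchone()[0]
    if duplicates:
        failures.append(f"{table_name}: {duplicates} duplicate values of {key_column}")

    return failures

Checks like these would typically run from the orchestrator after each load, or be expressed as dbt tests (not_null, unique), with failures routed to the on-call engineer.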
You're a great fit for this role if:
• At least 6-8 years of experience working with data warehouses, data lakes, and ETL pipelines
• Proven experience building optimized data pipelines using Snowflake and dbt
• Expert in orchestrating data pipelines using Apache Airflow, including authoring, scheduling, and monitoring workflows (a minimal DAG sketch follows this list)
• Exposure to AWS and proficiency in cloud services such as EKS (Kubernetes), ECS, S3, RDS, IAM, etc.
• Experience designing and implementing CI/CD workflows using GitHub Actions, Codeship, Jenkins, etc.
• Experience with tools like Terraform, Docker, and Kafka
• Strong experience with Spark using Scala and Python
• Advanced SQL knowledge, including authoring complex queries, with strong familiarity with Snowflake and relational databases such as Redshift and Postgres
• Experience with data modeling and system design, architecting scalable data platforms and applications for large enterprise clients
• A dedicated focus on building high-performance systems
• Exposure to building data quality frameworks
• Strong problem-solving and troubleshooting skills, with the ability to identify and resolve data engineering issues and system failures
• Excellent communication skills, with the ability to convey technical information to non-technical stakeholders and collaborate effectively with cross-functional teams
• The ability to envision and construct scalable solutions that meet the diverse needs of enterprise clients with dedicated data teams
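As a concrete illustration of the Airflow orchestration experience listed above, below is a minimal DAG sketch (Airflow 2.x) that builds dbt models and then runs the dbt tests. The DAG id, schedule, and project path are assumptions for illustration only, not details from this posting.

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical daily pipeline: build dbt models in Snowflake, then run dbt tests.
with DAG(
    dag_id="daily_warehouse_build",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    tags=["dbt", "snowflake"],
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt_project && dbt run",
    )

    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt_project && dbt test",
    )

    dbt_run >> dbt_test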
Nice to have:
• Previous engagement with healthcare and/or social determinants of health data products
• Experience leveraging agentic-assisted coding tools (e.g., Cursor, Codex AI, Amazon Q, GitHub Copilot)
• Experience working with R
• Experience processing healthcare eligibility and claims data
• Exposure to Matillion ETL
• Experience using and building solutions to support various reporting and data tools (Tableau, Looker, etc.)