FUSTIS LLC

AWS Data Engineer (Healthcare) - W2 Only

⭐ - Featured Role
This role is for an AWS Data Engineer (Healthcare) on a 6-month contract, paying $70-$75 per hour (W2). Remote work is available, with a focus on Snowflake, dbt, SQL, and Python. Requires 6-8 years of data pipeline experience, preferably in healthcare.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
πŸ—“οΈ - Date
January 31, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#dbt (data build tool) #Cloud #Data Quality #Kubernetes #BI (Business Intelligence) #Data Engineering #S3 (Amazon Simple Storage Service) #Tableau #Airflow #Scala #GitHub #Docker #Python #Data Modeling #AWS (Amazon Web Services) #Databases #IAM (Identity and Access Management) #Jenkins #Kafka (Apache Kafka) #Terraform #Snowflake #Data Lake #Spark (Apache Spark) #Data Pipeline #Matillion #Monitoring #R #Redshift #Looker #Complex Queries #Data Architecture #Data Warehouse #ETL (Extract, Transform, Load) #Apache Airflow #SQL (Structured Query Language) #AI (Artificial Intelligence) #RDS (Amazon Relational Database Service) #Data Integrity
Role description
Job Role: Data Engineer
Location: Remote (NYC, NY)
Work Auth: USC/GC only
Pay rate: $70-$75 W2 MAX
Duration: 6 months
Interview Process / # of Rounds: 2 rounds via Teams video

NOTE: This is a general operational engineering position rather than one tied to a specific project.
● Hands-on from day one with Snowflake, dbt, SQL, and Python - these are non-negotiable (system-design or architect-level depth is not required)
● Can follow the roadmap already in place; healthcare industry experience (e.g. understanding insurance claims data) is a big plus because it accelerates ramp-up and boosts the chances of an offer
● Understands end-to-end data pipeline structuring and can wear multiple hats
● EST or CT preferred, but open to PST; most of the team is in ET/CT
● Less emphasis on system design in the interview process (2nd round)

What You'll Do:
● Implement a data architecture and infrastructure that aligns with business objectives; collaborate closely with Application Engineers and Product Managers to ensure the technical infrastructure robustly supports client requirements
● Create ETL and data pipeline solutions for efficient loading of data into the warehouse, along with testing to ensure reliability and optimal performance
● Collect, validate, and provide high-quality data, ensuring data integrity
● Champion data democratization efforts, facilitating access to data for relevant stakeholders
● Guide the team on technical best practices and contribute substantially to the architecture of our systems
● Support operational work such as onboarding new customers to our data products and participating in the team's on-call rotation
● Engage with cross-functional teams, including Solutions Delivery, Business Intelligence, Predictive Analytics, and Enterprise Services, to address and support any data-related technical issues or requirements

You're a great fit for this role if:
● At least 6-8 years of experience working with data warehouses, data lakes, and ETL pipelines
● Proven experience building optimized data pipelines using Snowflake and dbt
● Expert in orchestrating data pipelines using Apache Airflow, including authoring, scheduling, and monitoring workflows
● Exposure to AWS and proficiency in cloud services such as EKS (Kubernetes), ECS, S3, RDS, IAM, etc.
● Experience designing and implementing CI/CD workflows using GitHub Actions, Codeship, Jenkins, etc.
● Experience with tools like Terraform, Docker, and Kafka
● Strong experience with Spark using Scala and Python
● Advanced SQL knowledge, with experience authoring complex queries and strong familiarity with Snowflake and relational databases such as Redshift, Postgres, etc.
● Experience with data modeling and system design, architecting scalable data platforms and applications for large enterprise clients
● A dedicated focus on building high-performance systems
● Exposure to building data quality frameworks
● Strong problem-solving and troubleshooting skills, with the ability to identify and resolve data engineering issues and system failures
● Excellent communication skills, with the ability to communicate technical information to non-technical stakeholders and collaborate effectively with cross-functional teams
● The ability to envision and build scalable solutions that meet diverse needs for enterprise clients with dedicated data teams

Nice to have:
● Previous engagement with healthcare and/or social determinants of health data products
● Experience leveraging agentic-assisted coding tools (e.g., Cursor, Codex AI, Amazon Q, GitHub Copilot)
● Experience working with R
● Experience processing healthcare eligibility and claims data
● Exposure to Matillion ETL
● Experience using and building solutions that support various reporting and data user tools (Tableau, Looker, etc.)