Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 14 years of experience in healthcare. It is a remote W2 contract position requiring proficiency in Python, SQL, ETL pipelines, AWS, Databricks, and Snowflake, along with FHIR experience.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
Unknown
πŸ—“οΈ - Date discovered
September 3, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
Remote
πŸ“„ - Contract type
W2 Contractor
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
United States
🧠 - Skills detailed
#Terraform #Data Mapping #Python #SQS (Simple Queue Service) #AWS (Amazon Web Services) #PySpark #Spark (Apache Spark) #FHIR (Fast Healthcare Interoperability Resources) #Snowflake #Databricks #SQL (Structured Query Language) #Spark SQL #GitLab #Cloud #Data Engineering #Data Analysis #ETL (Extract, Transform, Load) #S3 (Amazon Simple Storage Service)
Role description
Job Title: Senior Data Engineer with Healthcare
Experience: 14 years
Contract: W2 only, no C2C
Visa: GC, USC, and H4 EAD
Work Mode: Remote

Job Description:
• Proficient with Python and SQL
• Build and manage efficient ETL pipelines using Databricks workflows or another orchestration framework
• Familiarity with both structured and semi-structured data, and with ingesting and processing this data using PySpark (a minimal sketch follows this list)
• Fundamental AWS services (S3, SQS) or similar services on other clouds
• Terraform and GitLab CI/CD
• Query tuning and performance optimization in SQL and/or Spark SQL (see the tuning example after this list)
• Familiarity with data warehousing (Snowflake or similar)
• Spark SQL
• AWS, Databricks, Snowflake
• Experience working for a cloud-based data services provider to large healthcare clients
• FHIR experience

Additional information:
• Work with clients and the implementation team to understand the data distribution requirements
• Perform the data analysis and data mapping required to produce client output
• Build the data distribution extracts and scripts
• Optimize the performance of the data extract scripts
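
For illustration, here is a minimal sketch of the kind of PySpark step these requirements describe: ingesting semi-structured FHIR resources from S3 and mapping them to a flat client extract. The bucket, paths, and field choices are hypothetical assumptions, not details from the posting.

```python
# Minimal PySpark sketch: ingest semi-structured FHIR Patient resources
# from S3 and map them to a flat extract. All paths and field choices
# are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("fhir-patient-extract").getOrCreate()

# Hypothetical S3 location holding newline-delimited JSON FHIR resources.
# (On Databricks "s3://" resolves directly; plain Spark typically needs "s3a://".)
raw = spark.read.json("s3://example-bucket/fhir/patient/*.ndjson")

# Flatten nested FHIR Patient fields into a client-facing schema.
patients = (
    raw.filter(F.col("resourceType") == "Patient")
       .select(
           F.col("id").alias("patient_id"),
           F.col("gender"),
           F.col("birthDate").alias("birth_date"),
           # 'name' is an array of HumanName structs; take the first entry.
           F.col("name")[0]["family"].alias("family_name"),
           F.array_join(F.col("name")[0]["given"], " ").alias("given_names"),
       )
)

# Write the extract back to S3 as partitioned Parquet for downstream use.
patients.write.mode("overwrite").partitionBy("gender").parquet(
    "s3://example-bucket/extracts/patients/"
)
```

In a Databricks workflow, a step like this would typically run as a single task in a scheduled job, with orchestration, retries, and alerting handled by the workflow rather than the script itself.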
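
The query-tuning bullet can be illustrated the same way: a hedged Spark SQL example using an explicit broadcast-join hint and a partition-pruning filter. The tables and columns are hypothetical, not from the posting.

```python
# Hedged Spark SQL tuning sketch: broadcast the small dimension table and
# filter on the partition column. Tables 'claims' and 'patients' are
# hypothetical; 'claims' is assumed to be partitioned by service_date.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

plan = spark.sql("""
    SELECT /*+ BROADCAST(p) */
           c.claim_id,
           c.amount,
           p.patient_id
    FROM   claims c
    JOIN   patients p
      ON   c.patient_id = p.patient_id
    WHERE  c.service_date >= '2025-01-01'   -- enables partition pruning
""")

plan.explain()  # confirm a BroadcastHashJoin and partition filters in the plan
```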