
Senior Data Engineer (ACO - Healthcare)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (ACO - Healthcare) on a contract basis, offering $50.00 - $85.00 per hour. It requires 3–5+ years of CMS CCLF experience; expertise in Databricks, Python, and SQL; and strong ACO knowledge. The work location is remote.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
680
🗓️ - Date discovered
August 21, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Remote
🧠 - Skills detailed
#Datasets #Python #Data Processing #Data Engineering #ETL (Extract, Transform, Load) #Normalization #Data Analysis #CMS (Centers for Medicare & Medicaid Services) #Databricks #Debugging #Delta Lake #SQL (Structured Query Language)
Role description
We are urgently seeking a Senior ACO Data Engineer with deep experience in CMS claims data and expertise in Databricks (bronze/silver/gold layers) to take ownership of data engineering efforts across two active ACO contracts. The incoming engineer must understand the data end-to-end, from ingestion and curation to analytics-ready outputs, and must be hands-on with both engineering and pipeline design, not just data analysis or downstream consumption.
You’ll be stepping into a high-impact role left by a strong predecessor, with minimal transition time. You must be comfortable owning the codebase, adapting logic, managing QA issues, and supporting new data domains.
Responsibilities:
Ingest, normalize, and structure CMS CCLF, BNX, ALR, and PSF files using Databricks and Python/SQL
Own and maintain bronze → silver → gold layer pipelines in Databricks using Delta Lake architecture (a minimal pipeline sketch follows this list)
Engineer features and metrics for ACO reporting, performance evaluation, and cost/utilization modeling
Perform QA and debugging for logic changes requested by the client
Extend pipeline to include telemedicine data, RAF scores, and chronic condition tracking
Understand and enforce MBI normalization for beneficiary mapping across time
Work independently with minimal ramp-up and quickly become the go-to resource for engineering logic
Collaborate with the delivery team to anticipate and solve pipeline, QA, and enrichment challenges
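To illustrate the medallion-layer work described above, here is a minimal bronze → silver → gold sketch in PySpark on Databricks. It is a sketch under assumptions, not the project's actual code: the landing path, schema and table names (aco_bronze, aco_silver, aco_gold), and the CCLF column names and date formats are hypothetical stand-ins, and a real CCLF ingest would apply an explicit schema from the CMS CCLF file layouts rather than reading a headered delimited file.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land the raw monthly CCLF extract as-is, adding audit columns.
# Path and read options are illustrative; real CCLF extracts are headerless
# flat files whose layouts come from the CMS CCLF specification.
bronze = (
    spark.read.option("header", True).option("delimiter", "|")
    .csv("/mnt/raw/cclf/cclf1/2025-07/")                      # hypothetical path
    .withColumn("_ingested_at", F.current_timestamp())
    .withColumn("_source_file", F.input_file_name())
)
bronze.write.format("delta").mode("append").saveAsTable("aco_bronze.cclf1_parta")

# Silver: typed, de-duplicated, analysis-ready claims.
silver = (
    spark.table("aco_bronze.cclf1_parta")
    .withColumn("clm_from_dt", F.to_date("CLM_FROM_DT", "yyyyMMdd"))
    .withColumn("clm_thru_dt", F.to_date("CLM_THRU_DT", "yyyyMMdd"))
    .withColumn("clm_pmt_amt", F.col("CLM_PMT_AMT").cast("decimal(12,2)"))
    .dropDuplicates(["CUR_CLM_UNIQ_ID"])                      # one row per claim
)
silver.write.format("delta").mode("overwrite").saveAsTable("aco_silver.parta_claims")

# Gold: beneficiary-level cost/utilization metrics for ACO reporting.
gold = (
    spark.table("aco_silver.parta_claims")
    .groupBy("BENE_MBI_ID", F.trunc("clm_from_dt", "month").alias("service_month"))
    .agg(
        F.sum("clm_pmt_amt").alias("total_paid"),
        F.countDistinct("CUR_CLM_UNIQ_ID").alias("claim_count"),
    )
)
gold.write.format("delta").mode("overwrite").saveAsTable("aco_gold.bene_monthly_cost")
```

In practice the silver layer would also apply MBI normalization (sketched further below) before any beneficiary-level aggregation, so that re-issued identifiers do not split one beneficiary's history across rows.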
Must-Have Skills & Experience:
CMS/ACO Expertise
3–5+ years working with CMS CCLF datasets
Deep understanding of ACO attribution, cost-of-care breakdowns, and risk adjustment methodologies (HCC, RAF, ACG scores)
Experience implementing MBI normalization for longitudinal beneficiary tracking (a sketch follows this list)
Data Engineering & Platform Expertise
Databricks: Proven experience with bronze/silver/gold layers, Delta Lake, distributed processing
Strong Python for data processing and transformation
SQL: Advanced SQL skills including CTEs, window functions, and stored procedures
Pipeline development: ingestion → transformation → analytics-ready layer
Experience integrating external enrichment layers (e.g., Johns Hopkins ACG, CMS HCC)
QA & Production Readiness
Ability to debug existing logic, resolve QA issues, and respond quickly to client-reported bugs
Hands-on experience adapting legacy logic and deploying changes into a production Databricks environment
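As an illustration of the MBI normalization requirement above, the sketch below resolves historical MBIs on claims to a single canonical identifier using a CCLF9-style beneficiary XREF crosswalk. The table and column names (aco_silver.bene_xref, PRVS_NUM, CRNT_NUM, BENE_XREF_EFCTV_DT) are hypothetical stand-ins for whatever crosswalk the project actually maintains.

```python
from pyspark.sql import SparkSession, functions as F, Window

spark = SparkSession.builder.getOrCreate()

# Hypothetical crosswalk: one row per (previous MBI -> current MBI) link.
xref = spark.table("aco_silver.bene_xref").select(
    F.col("PRVS_NUM").alias("previous_mbi"),
    F.col("CRNT_NUM").alias("current_mbi"),
    F.col("BENE_XREF_EFCTV_DT").alias("effective_date"),
)

# Keep only the latest mapping per previous MBI in case an identifier was
# re-issued more than once.
latest = Window.partitionBy("previous_mbi").orderBy(F.col("effective_date").desc())
canonical = (
    xref.withColumn("rn", F.row_number().over(latest))
    .filter("rn = 1")
    .select("previous_mbi", F.col("current_mbi").alias("canonical_mbi"))
)

# Normalize claims: if the claim's MBI appears as a "previous" identifier,
# replace it with the canonical one; otherwise keep the MBI already on the claim.
claims = spark.table("aco_silver.parta_claims")
normalized = (
    claims.join(canonical, claims.BENE_MBI_ID == canonical.previous_mbi, "left")
    .withColumn("bene_id", F.coalesce(F.col("canonical_mbi"), F.col("BENE_MBI_ID")))
    .drop("previous_mbi", "canonical_mbi")
)
normalized.write.format("delta").mode("overwrite").saveAsTable(
    "aco_silver.parta_claims_normalized"
)
```

Note the single-hop simplification: a production version would chain multi-step re-issues (old → newer → newest), typically by iterating or self-joining the crosswalk until identifiers stop changing. The window-function pattern shown (row_number over a partition) is representative of the advanced SQL expected in this role.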
Job Type: Contract
Pay: $50.00 - $85.00 per hour
Application Question(s):
Are you a U.S. citizen (USC) or green card (GC) holder? (Yes/No)
Desired hourly rate?
This role is time-sensitive; can you confirm your availability to begin work immediately if offered the position?
Work Location: Remote