

Ender-IT
Azure Data Engineer - Payer (Remote Work)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Engineer on a 12+ month contract, with a remote work location and a pay rate of “$X/hour.” It requires 7+ years of data engineering experience and expertise in SQL and Python; healthcare payer domain experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 13, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#dbt (data build tool) #Azure ADLS (Azure Data Lake Storage) #AWS (Amazon Web Services) #ADLS (Azure Data Lake Storage) #Documentation #Storage #SQL (Structured Query Language) #Airflow #ETL (Extract, Transform, Load) #GitHub #Databricks #BI (Business Intelligence) #Data Engineering #GIT #Snowflake #Spark (Apache Spark) #Data Lake #Data Quality #Python #Azure Data Factory #GCP (Google Cloud Platform) #Azure #Datasets #Cloud #PySpark #ADF (Azure Data Factory) #Data Pipeline
Role description
Azure Data Engineer - Payer
Location: Remote work
Duration: 12+ months
U.S. Work Authorization required (No sponsorship available). W2 candidates only. No C2C or sub-vendor submissions.
We are seeking a senior, hands‑on data engineering contractor to support modern data platform delivery across health payer domains. The role requires independent ownership of end‑to‑end data pipelines built on modern cloud and lakehouse architecture. This is a hit‑the‑ground‑running role with minimal ramp‑up.
1. Key Responsibilities
• Design, build, and operate end‑to‑end data pipelines
• Source ingestion → transformation → analytics‑ready datasets (an illustrative sketch follows this list)
• Develop transformations and data models using SQL and Python
• Implement automated data quality checks and validations
• Follow Git‑based development
• Collaborate with business/domain stakeholders on data rules and definitions
• Ensure solutions are secure, auditable, reliable, and supportable
• Produce clear technical documentation and operational notes
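The sketch below is illustrative only, not the client's codebase: it shows, in PySpark, the shape of pipeline described above (ingestion, transformation, and an automated data quality check before publishing an analytics‑ready dataset). The storage account, container, paths, and column names are hypothetical.
```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("payer_claims_pipeline").getOrCreate()

# Ingest raw claims data (hypothetical landing-zone path in ADLS Gen2)
raw = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/claims/")

# Transform into an analytics-ready dataset
claims = (
    raw.filter(F.col("claim_status").isNotNull())
       .withColumn("paid_amount", F.col("paid_amount").cast("decimal(18,2)"))
       .dropDuplicates(["claim_id"])
)

# Automated data quality check: fail fast if the business key is missing
null_keys = claims.filter(F.col("claim_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"Data quality check failed: {null_keys} rows with null claim_id")

# Publish the analytics-ready output (hypothetical curated path)
claims.write.mode("overwrite").parquet(
    "abfss://curated@examplelake.dfs.core.windows.net/claims/"
)
```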
2. Required Technologies (Must‑Have)
Candidates must have strong, recent hands‑on experience with most of the following:
Languages
• SQL
• Python (PySpark preferred)
Data Platforms / Storage
• Cloud data platforms (Azure preferred; AWS/GCP acceptable)
• Azure Data Lake Storage Gen2 (ADLS Gen2) or equivalent
• Object storage–based data lakes
• Parquet format
• Lakehouse concepts (Iceberg and/or Delta); an illustrative sketch follows the Compute list below
Transformation & Modeling
• dbt (dbt Core and/or dbt Cloud)
Source Control, GitOps & CI/CD
• GitHub
• Pull request–based development
• CI/CD pipelines (GitHub Actions or equivalent)
Compute (one or more)
• Snowflake
• Microsoft Fabric
• Databricks
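As an illustrative sketch only, not a required implementation: one common way the storage and lakehouse items above fit together is reading Parquet source data from ADLS Gen2 and writing it back out as a Delta table. The storage account, containers, and paths are hypothetical, and a Delta‑enabled Spark runtime (for example Databricks or delta‑spark) is assumed.
```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lakehouse_example").getOrCreate()

# Source data already landed as Parquet in the data lake (hypothetical path)
members = spark.read.parquet("abfss://raw@examplelake.dfs.core.windows.net/members/")

# Write a Delta table so downstream consumers get ACID reads and time travel
(
    members.write.format("delta")
    .mode("overwrite")
    .save("abfss://curated@examplelake.dfs.core.windows.net/members_delta/")
)

# Read the Delta table back as a quick validation step
check = spark.read.format("delta").load(
    "abfss://curated@examplelake.dfs.core.windows.net/members_delta/"
)
print(check.count())
```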
3. Preferred / Nice‑to‑Have Technologies
• Microsoft Fabric Data Factory / Azure Data Factory
• Fabric or Databricks Notebooks
• Orchestration tools (Airflow, Dagster, or similar); a minimal sketch follows this list
• Healthcare / payer domain experience (Clinical, Claims, Provider)
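For the orchestration item above, here is a minimal Airflow DAG sketch. It is purely illustrative: the DAG id, schedule, and task names are hypothetical rather than taken from this role, and the task bodies are placeholders.
```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_claims():
    # Placeholder: land raw payer claims files in the data lake
    pass


def run_transformations():
    # Placeholder: trigger dbt or Spark transformation jobs
    pass


with DAG(
    dag_id="payer_claims_daily",  # hypothetical name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest", python_callable=ingest_claims)
    transform = PythonOperator(task_id="transform", python_callable=run_transformations)

    # Run ingestion before transformation
    ingest >> transform
```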
4. Required Experience
• 7+ years of hands‑on data engineering experience
• Proven delivery of production‑grade data pipelines
• Experience working independently with minimal supervision
• Strong understanding of data quality, reliability, and operational readiness
5. Ideal Candidate Profile
• Senior, hands‑on data engineer
• Engineering generalist (not tool‑only or reporting‑only)
• Comfortable working in ambiguous, fast‑moving environments
• Strong delivery and ownership mindset
6. Recruiter / Vendor Screening Notes
• Not an entry‑level role
• Not a BI‑only or reporting‑focused role
• Candidates must demonstrate end‑to‑end pipeline ownership
• Hybrid role — candidates must be open to periodic on‑site presence