

Data Engineer - SQL, Python, ETL
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience in SQL and Python, focusing on ETL in healthcare payer environments. The 6-month contract offers a pay rate of "X" and requires expertise in Teradata/Snowflake and relevant certifications.
Country
United States
Currency
$ USD
Day rate
440
Date discovered
August 1, 2025
Project duration
More than 6 months
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
United States
Skills detailed
#Data Mining #Migration #Data Pipeline #DevSecOps #SQL Server #Microsoft SQL #Snowflake #Informatica #Data Manipulation #Deployment #Continuous Deployment #SQL (Structured Query Language) #Java #Quality Assurance #SAS #Automation #Data Processing #Microsoft SQL Server #Teradata #Mathematics #AWS (Amazon Web Services) #Python #Data Engineering #MS SQL (Microsoft SQL Server) #Scala #Big Data #ETL (Extract, Transform, Load) #Strategy #Computer Science #GCP (Google Cloud Platform) #Programming #Data Ingestion #Statistics #C# #Cloud #Data Extraction
Role description
We have a 6-month contract-to-hire opening for a highly analytical and collaborative Data Engineer with 5+ years of hands-on experience in SQL and Python, specializing in large-scale data processing, transformation, and report development in healthcare payer environments. The ideal candidate has deep expertise in SQL (preferably with Teradata or Snowflake), Python for data analytics, and at least 3 years of experience with ETL tools such as SAS, Informatica, or MicroStrategy.
This person thrives in cross-functional settings, engaging both IT and business stakeholders to clarify data requirements, extract and transform health-related data, and automate regulatory report generation. They should be comfortable working in a production cloud infrastructure, possess strong critical thinking skills, and demonstrate a track record of delivering validated, quality-assured data solutions in a managed care or payer context. Familiarity with state regulatory reporting and a passion for building scalable, compliant data pipelines are essential.
Shift: 9 am to 5 pm EST
• MUST HAVES: 5+ years' experience as a senior Python developer working directly with large volumes of data
• 5+ years SQL (ideally with Teradata or Snowflake)
• 5+ years Python (ideally working with data and report development)
• 3+ years other ETL tools (e.g., SAS, Informatica, MicroStrategy)
• NICE TO HAVES: Relevant certifications such as CISSP, CISM, or Microsoft certifications (e.g., SC-400, SC-200, AZ-500)
• Experience with DLP solutions
• Experience with other cloud platforms (e.g., AWS, GCP)
• DISQUALIFIERS: Lack of solid experience with, and understanding of, healthcare data in a payor organization
• Lack of experience working directly with data and/or in a managed production environment
• ABOUT THIS ROLE: This requisition supports the NJ market in the migration project.
• Our team specifically supports the market with regulatory reports that are submitted to state regulators.
• We are a highly flexible and collaborative team, working with SMEs on both the health plan and corporate sides.
• RESPONSIBILITIES: A typical day is divided between interactions with the health plan (clarifying requirements and expectations), ETL work (data extraction, transformation, and possibly loading), research, development in collaboration with an assigned analyst, and automation.
• This role provides great experience working with large volumes of data across multiple domains in the managed care realm, primarily health-related data. Individuals in this role also work with both IT and business partners to solve complex data and reporting needs where critical thinking and innovation come into play, and will have the opportunity to leverage Python in a unique manner.
Job Description:
Position Purpose:
Develops and operationalizes data pipelines to make data available for consumption (reports and advanced analytics), including data ingestion, data transformation, data validation/quality, data pipeline optimization, and orchestration. Engages with the DevSecOps Engineer during continuous integration and continuous deployment.
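The ingest, transform, validate, and load pattern described above can be sketched in miniature. This is an illustrative example only, not part of the role's actual codebase: it uses an in-memory SQLite database as a stand-in for a warehouse such as Teradata or Snowflake, and the table and column names (staging_claims, member_id, paid_amount) are hypothetical.

```python
# Minimal sketch of an ingest -> validate -> transform pipeline step.
# SQLite stands in for a real warehouse; all names are hypothetical.
import sqlite3

def ingest(conn, rows):
    """Load raw claim rows into a staging table."""
    conn.execute("CREATE TABLE IF NOT EXISTS staging_claims "
                 "(member_id TEXT, paid_amount REAL)")
    conn.executemany("INSERT INTO staging_claims VALUES (?, ?)", rows)

def transform_and_validate(conn):
    """Fail fast on rows that break basic quality rules, then aggregate."""
    bad = conn.execute(
        "SELECT COUNT(*) FROM staging_claims "
        "WHERE member_id IS NULL OR paid_amount < 0").fetchone()[0]
    if bad:
        raise ValueError(f"{bad} rows failed validation")
    return conn.execute(
        "SELECT member_id, SUM(paid_amount) FROM staging_claims "
        "GROUP BY member_id ORDER BY member_id").fetchall()

conn = sqlite3.connect(":memory:")
ingest(conn, [("M001", 120.50), ("M002", 75.00), ("M001", 30.25)])
report = transform_and_validate(conn)
print(report)  # [('M001', 150.75), ('M002', 75.0)]
```

In a production setting the same shape would typically be split across orchestrated tasks, with validation results logged for the quality-assurance step the posting mentions.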
Education/Experience:
A Bachelor's degree in a quantitative or business field (e.g., statistics, mathematics, engineering, computer science).
Requires 2-4 years of related experience.
Or equivalent experience acquired through applicable knowledge, duties, scope, and skills reflective of the level of this position.
Technical Skills:
One or more of the following skills are desired.
Experience with Big Data and data processing
Experience diagnosing system issues, performing data validation, and providing quality assurance testing
Experience with data manipulation and data mining
Experience working in a production cloud infrastructure
Experience with one or more of the following: C#, Java, Python, SQL, programming concepts, and programming tools
Knowledge of Microsoft SQL Server and SQL
Soft Skills:
Intermediate - Seeks to acquire knowledge in area of specialty
Intermediate - Ability to identify basic problems and procedural irregularities, collect data, establish facts, and draw valid conclusions
Intermediate - Ability to work independently