Intellectt Inc

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 3–6 years of experience, focusing on healthcare data integration using R and Python. Contract length and pay rate are unspecified. Key skills include ETL processes, SQL, and cloud platforms (AWS/Azure/GCP).
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 1, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Gainesville, FL
-
🧠 - Skills detailed
#Tableau #Azure #Data Governance #Data Manipulation #Data Science #SQL Server #Informatica #PySpark #AWS (Amazon Web Services) #Data Processing #Data Privacy #Datasets #Scala #ETL (Extract, Transform, Load) #FHIR (Fast Healthcare Interoperability Resources) #BigQuery #PostgreSQL #Spark SQL #BI (Business Intelligence) #GCP (Google Cloud Platform) #Airflow #DevOps #Automation #Spark (Apache Spark) #Security #Redshift #Data Management #dbt (data build tool) #SQLAlchemy #Visualization #Databricks #Data Integration #Data Engineering #Data Pipeline #Talend #Pandas #Computer Science #Cloud #ML (Machine Learning) #Python #Microsoft Power BI #MySQL #Databases #R #Metadata #SQL (Structured Query Language) #Compliance #Synapse #Data Lake
Role description
About the Role:
We are looking for a skilled Data Engineer with strong expertise in R and Python to support our healthcare client in building and maintaining robust data pipelines and analytical infrastructure. The ideal candidate will have experience working with healthcare datasets (EHR/EMR, claims, patient data) and a deep understanding of data integration, transformation, and governance within a regulated environment.

Key Responsibilities:
• Design, build, and maintain scalable data pipelines for ingestion, transformation, and processing of healthcare data (EHR, claims, lab, and patient data).
• Develop ETL/ELT processes using Python and R to support analytics, reporting, and machine learning workflows.
• Integrate data from multiple healthcare systems and APIs using standard formats (HL7, FHIR, CCD).
• Collaborate with data scientists, analysts, and clinicians to ensure clean, reliable, and accessible data for research and analytics.
• Optimize data workflows for performance, scalability, and reliability.
• Ensure compliance with HIPAA and other healthcare data privacy regulations.
• Develop scripts and automation for data validation, quality checks, and metadata management.
• Work closely with cloud and DevOps teams to deploy and maintain data infrastructure in AWS/Azure/GCP environments.

Required Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Information Systems, Data Engineering, or a related field.
• 3–6 years of experience as a Data Engineer or in a similar role.
• Strong proficiency in Python (Pandas, PySpark, SQLAlchemy) and R for data manipulation and transformation.
• Experience with SQL and relational databases (PostgreSQL, MySQL, SQL Server, etc.).
• Working knowledge of data integration and ETL tools (Airflow, dbt, Talend, Informatica, etc.).
• Experience handling healthcare data formats (HL7, FHIR, ICD-10, CPT).
• Familiarity with data privacy and security standards (HIPAA, PHI).
Preferred Qualifications:
• Experience with cloud data platforms (AWS Redshift, GCP BigQuery, Azure Synapse).
• Exposure to data lake / lakehouse architectures.
• Knowledge of Spark, Databricks, or distributed data processing frameworks.
• Familiarity with data visualization and reporting tools (Tableau, Power BI, or R Shiny).
• Understanding of data governance, lineage, and metadata management frameworks.
• Strong communication and collaboration skills in cross-functional healthcare teams.
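To give a flavor of the "scripts and automation for data validation and quality checks" responsibility, here is a minimal sketch in Python with pandas. The dataset, column names, and rules are hypothetical (not from this posting); it flags claims rows with a missing patient ID, a malformed ICD-10-CM code, or a negative amount.

```python
import re
import pandas as pd

# Hypothetical claims extract; columns and values are illustrative only.
claims = pd.DataFrame({
    "patient_id": ["P001", "P002", None, "P004"],
    "icd10_code": ["E11.9", "I10", "J45.909", "BAD"],
    "claim_amount": [120.50, 75.00, 310.25, -5.00],
})

# ICD-10-CM codes start with a letter (U is reserved, excluded here for
# simplicity), then two digits, then an optional dotted extension, e.g. "E11.9".
ICD10_PATTERN = r"^[A-TV-Z]\d{2}(\.\d{1,4})?$"

def validate_claims(df: pd.DataFrame) -> pd.DataFrame:
    """Return the rows that fail basic quality checks, tagged with a reason."""
    checks = [
        (df["patient_id"].isna(), "missing patient_id"),
        (~df["icd10_code"].fillna("").str.match(ICD10_PATTERN), "invalid ICD-10 code"),
        (df["claim_amount"] < 0, "negative claim_amount"),
    ]
    issues = []
    for mask, reason in checks:
        bad = df[mask].copy()
        bad["reason"] = reason
        issues.append(bad)
    return pd.concat(issues, ignore_index=True)

failed = validate_claims(claims)
```

In practice a check like this would run as a task in an orchestrator such as Airflow, writing `failed` to a quarantine table rather than returning it.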