DELVIX

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a remote, contract Data Engineer position offering $50.00 - $65.00 per hour. It requires 3+ years of data engineering experience, proficiency in Python and SQL, ETL orchestration experience, and familiarity with cloud platforms and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
πŸ—“οΈ - Date
December 5, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
California
-
🧠 - Skills detailed
#Automation #Snowflake #Hadoop #Scala #Pandas #Migration #Oracle #DevOps #Data Lake #Data Governance #Metadata #Big Data #Collibra #Data Security #Documentation #Kafka (Apache Kafka) #Spark (Apache Spark) #SQL Server #Cloud #Azure #Azure DevOps #Data Integration #R #SQL (Structured Query Language) #AWS (Amazon Web Services) #Airflow #GitHub #SAS #Infrastructure as Code (IaC) #Security #Redshift #MySQL #Data Quality #Python #BigQuery #Data Framework #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Compliance #Observability #API (Application Programming Interface) #XML (eXtensible Markup Language) #REST (Representational State Transfer) #Terraform #Data Engineering #Data Architecture #Datasets #JSON (JavaScript Object Notation)
Role description
Job Title: Data Engineer
Employment Type: Contract
Department: Public Health – Data & Informatics

Position Overview
We are seeking a skilled Data Engineer to support data integration, pipeline development, and modernization efforts for public health systems. The engineer will collaborate with cross-functional teams to design scalable workflows, improve data quality, and streamline ingestion from diverse data sources including LINKS, Vital Records, labs, and registries.

Key Responsibilities
- Assess feasibility and technical requirements for LINKS → Data Lake integration.
- Collaborate with the OPH Immunization Program, OPH Bureau of Health Informatics, and STChealth on data specifications and recurring ingestion pipelines.
- Build and optimize ETL workflows for LINKS and complementary datasets.
- Design scalable workflows to ensure data quality, integrity, and identity resolution.
- Implement data governance, observability, and lineage tracking across pipelines.
- Mentor junior engineers, support testing, and ensure best practices in architecture and orchestration.
- Prepare and deliver documentation for technical and non-technical stakeholders.

Mandatory Qualifications
- 3+ years of experience in data engineering and/or data architecture.
- 2+ years of experience using Python for ETL and automation (pandas, requests, API integration).
- 2+ years of hands-on SQL experience, including stored procedures and performance tuning (Oracle, SQL Server, MySQL preferred).
- 1+ year of experience with ETL orchestration tools (Prefect, Airflow, or equivalent).
- 1+ year of experience with cloud platforms (Azure, AWS, or GCP), including data onboarding/migration.
- 1+ year of exposure to data lake/medallion architecture (bronze, silver, gold).
- 2+ years of experience in cross-functional communication and technical documentation.

Preferred (Nice-to-Have) Skills
- 5+ years of experience in data engineering roles.
- Experience integrating or developing REST/JSON or XML APIs.
- Familiarity with CI/CD pipelines (GitHub Actions, Azure DevOps).
- Exposure to Infrastructure as Code (Terraform, CloudFormation).
- Experience with data governance & metadata tools (Atlan, OpenMetadata, Collibra).
- Experience with public health/healthcare datasets, including PHI/PII handling.
- Knowledge of SAS and R workflows.
- Experience with other SQL platforms (Postgres, Snowflake, Redshift, BigQuery).
- Familiarity with data quality frameworks (Great Expectations, Deequ).
- Experience with real-time/streaming tools (Kafka, Spark Streaming).
- Knowledge of big data frameworks (Spark, Hadoop).
- Understanding of data security and compliance frameworks such as HIPAA.

Job Type: Contract
Pay: $50.00 - $65.00 per hour
Work Location: Remote
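For candidates unfamiliar with the medallion (bronze/silver/gold) layering named in the qualifications, here is a minimal pandas sketch of the idea. The dataset, column names, and records are invented for illustration and are not from this posting or its systems:

```python
import pandas as pd

# Bronze layer: hypothetical raw immunization records as ingested,
# with duplicates and inconsistent casing left intact.
bronze = pd.DataFrame({
    "patient_id": ["P1", "P1", "P2", "P3"],
    "vaccine": ["MMR", "MMR", "Flu", "flu"],
    "administered": ["2024-01-05", "2024-01-05", "2024-02-10", "2024-03-01"],
})

# Silver layer: cleaned and deduplicated, with normalized values
# and typed columns.
silver = (
    bronze
    .assign(
        vaccine=bronze["vaccine"].str.upper(),
        administered=pd.to_datetime(bronze["administered"]),
    )
    .drop_duplicates()
    .reset_index(drop=True)
)

# Gold layer: an analysis-ready aggregate (doses per vaccine).
gold = silver.groupby("vaccine", as_index=False).agg(
    doses=("patient_id", "count"),
)

print(gold)
```

In a real pipeline each layer would be persisted to the data lake and the steps would run under an orchestrator such as Prefect or Airflow; this sketch only shows the layering itself.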