

Predactica
Sr. Data Engineer with Snowflake, DBT & Python Skills
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer with expertise in Snowflake, dbt, and Python, offering a contract length of "unknown" and a pay rate of "$XX/hour." Key skills include ETL design, data modeling, and cloud platform experience, preferably in healthcare.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: November 1, 2025
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #Azure #Data Governance #Data Science #Data Modeling #Data Transformations #AWS (Amazon Web Services) #Data Privacy #Datasets #Scala #ETL (Extract, Transform, Load) #Snowflake #FHIR (Fast Healthcare Interoperability Resources) #BigQuery #Storage #GCP (Google Cloud Platform) #API (Application Programming Interface) #Lambda (AWS Lambda) #Automation #Data Warehouse #Security #Azure Data Factory #Public Cloud #dbt (data build tool) #Deployment #AWS Lambda #Data Engineering #Data Pipeline #Version Control #Cloud #Data Ingestion #Complex Queries #Data Quality #ADF (Azure Data Factory) #Python #S3 (Amazon Simple Storage Service) #Monitoring #Observability #SQL (Structured Query Language) #Documentation #Compliance #Data Lake
Role description
We are seeking an experienced Senior Data Engineer with strong expertise in Snowflake, dbt, and Python to design, develop, and maintain data pipelines and analytics frameworks across large-scale data environments. The ideal candidate will bring hands-on experience in building efficient ETL processes, data modeling, and performance optimization across public cloud environments. Experience in the healthcare domain is a strong plus.
Key Responsibilities
• Design, build, and maintain robust ETL/ELT pipelines using dbt and Snowflake for structured and semi-structured data sources.
• Develop efficient and reusable data models following data warehouse best practices and ensure adherence to data governance standards.
• Write high-quality, maintainable Python code to automate data workflows, transformations, and integrations with APIs or other data systems.
• Implement data quality checks, monitoring, and alerting to ensure reliability and accuracy of datasets (a minimal example follows this list).
• Collaborate with analytics, data science, and product teams to translate business requirements into scalable data models and transformations.
• Optimize Snowflake environments for cost, query performance, and storage utilization.
• Leverage cloud-native tools (AWS/GCP/Azure) for orchestration, data ingestion, and integration across systems.
• Contribute to architectural discussions and implement best practices for version control, CI/CD, and deployment automation for dbt projects.
• Ensure compliance with data privacy and security policies, particularly in regulated industries like healthcare (HIPAA, PHI, etc.).
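The sketch below is a minimal, illustrative example of the kind of data quality check and alerting work described above; it uses the snowflake-connector-python library to run row-count and null-rate checks against a warehouse table. The table name, column, threshold, and connection settings are hypothetical placeholders, not details from this posting.

import os
import snowflake.connector  # pip install snowflake-connector-python

# Hypothetical table, column, and threshold -- placeholders, not from the posting.
TABLE = "ANALYTICS.PUBLIC.ORDERS"
KEY_COLUMN = "CUSTOMER_ID"
MAX_NULL_RATE = 0.01  # fail the check if more than 1% of key values are NULL

def run_quality_checks() -> None:
    # Connection details come from the environment; adjust to your account setup.
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="COMPUTE_WH",
    )
    try:
        cur = conn.cursor()

        # Check 1: the table must not be empty.
        cur.execute(f"SELECT COUNT(*) FROM {TABLE}")
        row_count = cur.fetchone()[0]
        if row_count == 0:
            raise ValueError(f"{TABLE} is empty")

        # Check 2: the null rate on the key column must stay under the threshold.
        cur.execute(f"SELECT COUNT_IF({KEY_COLUMN} IS NULL) / COUNT(*) FROM {TABLE}")
        null_rate = float(cur.fetchone()[0])
        if null_rate > MAX_NULL_RATE:
            raise ValueError(f"{KEY_COLUMN} null rate {null_rate:.2%} exceeds {MAX_NULL_RATE:.0%}")

        print(f"Quality checks passed: {row_count} rows, null rate {null_rate:.2%}")
    finally:
        conn.close()

if __name__ == "__main__":
    run_quality_checks()

In a dbt-centric workflow these checks would more often be expressed as dbt tests (for example, the built-in not_null and unique tests) so they run with every model build; the standalone script above only illustrates the Python monitoring side.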
Required Skills & Qualifications
• 6+ years of hands-on experience with Snowflake, including performance tuning, role-based access control, and data modeling.
• 6+ years of experience in dbt (data transformations, testing, documentation, deployment).
• Strong background in ETL/ELT design, implementation, and orchestration.
• 6+ years of experience in Python for data engineering tasks, including API integration and automation (see the sketch after this list).
• Solid understanding of data warehousing concepts, dimensional modeling (star/snowflake schema), and data lake architectures.
• Experience working with at least one public cloud platform (AWS, Azure, or GCP); familiarity with services such as AWS Lambda, S3, Glue, GCP BigQuery, or Azure Data Factory is advantageous.
• Strong SQL skills and the ability to optimize complex queries.
• Excellent communication and problem-solving skills.
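As a hedged illustration of the Python API-integration and automation experience listed above, the sketch below pulls JSON from a REST endpoint with requests and lands it in S3 with boto3, from where it could be loaded into Snowflake. The endpoint URL, bucket, and key layout are hypothetical placeholders, not details from this posting.

import datetime
import json

import boto3     # pip install boto3
import requests  # pip install requests

# Hypothetical endpoint and bucket -- placeholders, not from the posting.
API_URL = "https://api.example.com/v1/claims"
BUCKET = "example-data-lake-raw"

def extract_to_s3() -> str:
    # Pull one page of records from the source API.
    resp = requests.get(API_URL, params={"limit": 1000}, timeout=30)
    resp.raise_for_status()
    records = resp.json()

    # Land the raw payload under a date-partitioned key in S3.
    today = datetime.date.today().isoformat()
    key = f"raw/claims/dt={today}/claims.json"
    s3 = boto3.client("s3")
    s3.put_object(Bucket=BUCKET, Key=key, Body=json.dumps(records).encode("utf-8"))
    return f"s3://{BUCKET}/{key}"

if __name__ == "__main__":
    print("Landed raw file at", extract_to_s3())

Downstream, the staged file would typically be ingested with a Snowflake external stage plus COPY INTO (or Snowpipe for continuous loads), with dbt handling the transformations from raw to modeled layers.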
Nice to Have
• Prior experience in healthcare data (e.g., EMR/EHR, claims, HL7/FHIR formats, HIPAA compliance).
• Exposure to data observability or lineage tracking tools.






