Predactica™

Sr. Data Engineer with Snowflake & dbt

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer with Snowflake & dbt, offering a contract of "X months" at a pay rate of "$Y/hour". Key skills include 6+ years of experience with Snowflake, dbt, and Python; healthcare experience is a plus. Remote work is available.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
December 20, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#API (Application Programming Interface) #Snowflake #Data Modeling #Storage #Data Ingestion #BigQuery #Documentation #Observability #ETL (Extract, Transform, Load) #Version Control #Lambda (AWS Lambda) #S3 (Amazon Simple Storage Service) #Data Engineering #Public Cloud #Datasets #ADF (Azure Data Factory) #Data Pipeline #Compliance #Data Quality #Security #Scala #AWS (Amazon Web Services) #Data Lake #Data Science #dbt (data build tool) #Data Privacy #Automation #Azure #SQL (Structured Query Language) #Azure Data Factory #Data Warehouse #Cloud #GCP (Google Cloud Platform) #Python #Monitoring #Deployment #Complex Queries #Data Governance #AWS Lambda #FHIR (Fast Healthcare Interoperability Resources) #Data Transformations
Role description
About the job

We are seeking an experienced Senior Data Engineer with strong expertise in Snowflake, dbt, and Python to design, develop, and maintain data pipelines and analytics frameworks across large-scale data environments. The ideal candidate will bring hands-on experience in building efficient ETL processes, data modeling, and performance optimization across public cloud environments. Experience in the healthcare domain is a strong plus.

Key Responsibilities
• Design, build, and maintain robust ETL/ELT pipelines using dbt and Snowflake for structured and semi-structured data sources.
• Develop efficient, reusable data models following data warehouse best practices and ensure adherence to data governance standards.
• Write high-quality, maintainable Python code to automate data workflows, transformations, and integrations with APIs and other data systems (see the illustrative sketch after this description).
• Implement data quality checks, monitoring, and alerting to ensure the reliability and accuracy of datasets.
• Collaborate with analytics, data science, and product teams to translate business requirements into scalable data models and transformations.
• Optimize Snowflake environments for cost, query performance, and storage utilization.
• Leverage cloud-native tools (AWS/GCP/Azure) for orchestration, data ingestion, and integration across systems.
• Contribute to architectural discussions and implement best practices for version control, CI/CD, and deployment automation for dbt projects.
• Ensure compliance with data privacy and security policies, particularly in regulated industries such as healthcare (HIPAA, PHI, etc.).

Required Skills & Qualifications
• 6+ years of hands-on experience with Snowflake, including performance tuning, role-based access control, and data modeling.
• 6+ years of experience with dbt (data transformations, testing, documentation, deployment).
• Strong background in ETL/ELT design, implementation, and orchestration.
• 6+ years of experience with Python for data engineering tasks, including API integration and automation.
• Solid understanding of data warehousing concepts, dimensional modeling (star/snowflake schema), and data lake architectures.
• Experience with at least one public cloud platform (AWS, Azure, or GCP); familiarity with services such as AWS Lambda, S3, Glue, GCP BigQuery, or Azure Data Factory is advantageous.
• Strong SQL skills and the ability to optimize complex queries.
• Excellent communication and problem-solving skills.

Nice to Have
• Prior experience with healthcare data (e.g., EMR/EHR, claims, HL7/FHIR formats, HIPAA compliance).
• Exposure to data observability or lineage tracking tools.
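For context on the kind of work described above, the sketch below shows one way the "data quality checks, monitoring, and alerting" responsibility might be automated in Python against Snowflake. It is a minimal, hypothetical illustration, not part of the posting: the table names, check predicates, and environment variables are assumptions, and the only library used is the standard snowflake-connector-python client.

```python
"""Illustrative only: a minimal automated data quality check against Snowflake.

All tables, columns, and connection settings are hypothetical placeholders.
"""
import os
import snowflake.connector  # snowflake-connector-python

# Hypothetical checks: table -> SQL predicate that matches "bad" rows.
CHECKS = {
    "analytics.fct_claims": "claim_id IS NULL",      # key column must be populated
    "analytics.fct_claims": "billed_amount < 0",     # amounts must be non-negative
}

def run_checks() -> int:
    """Run each check and return the number of failures (0 = all passed)."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse=os.environ.get("SNOWFLAKE_WAREHOUSE", "COMPUTE_WH"),
    )
    failures = 0
    try:
        cur = conn.cursor()
        for table, bad_row_predicate in CHECKS.items():
            # Count rows violating the rule; zero offending rows means the check passes.
            cur.execute(f"SELECT COUNT(*) FROM {table} WHERE {bad_row_predicate}")
            bad_rows = cur.fetchone()[0]
            status = "PASS" if bad_rows == 0 else "FAIL"
            print(f"{status}: {table} ({bad_rows} offending rows)")
            failures += bad_rows > 0
    finally:
        conn.close()
    return failures

if __name__ == "__main__":
    # Non-zero exit code lets an orchestrator or CI job surface the alert.
    raise SystemExit(run_checks())
```

In practice, checks like these are often expressed declaratively as dbt tests and surfaced through the orchestration and observability tooling mentioned in the posting; the script above only illustrates the Python automation angle.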