Data Engineer with DBT

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with DBT, requiring 12-14 years of experience, a pay rate of "X", and a contract length of "Y". Key skills include Snowflake, ETL processes, Python, SQL, and cloud platforms. Healthcare data experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
August 21, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Documentation #DevOps #Data Engineering #ETL (Extract, Transform, Load) #Data Warehouse #Lambda (AWS Lambda) #Batch #SQL (Structured Query Language) #AWS (Amazon Web Services) #Python #SnowPipe #Data Processing #API (Application Programming Interface) #Qlik #AWS S3 (Amazon Simple Storage Service) #Cloud #Looker #Data Integration #S3 (Amazon Simple Storage Service) #DMS (Data Migration Service) #dbt (data build tool) #AWS DMS (AWS Database Migration Service) #Snowflake #Security #BI (Business Intelligence) #SQS (Simple Queue Service) #Kafka (Apache Kafka) #Microsoft Power BI #Tableau #Scala #Data Pipeline #Schema Design
Role description
• 12-14+ years of experience developing data pipelines for streaming and batch data processing to move data in and out of the Snowflake data warehouse
• Collaborate with engineering and DevOps team members to implement, test, deploy, and operate data pipelines and ETL solutions
• 10 years of hands-on experience with Snowflake, including schema design, query optimization, and data load techniques
• 8 years of hands-on reporting experience with Business Intelligence tools such as Looker, Qlik, Tableau, Power BI, etc.
• Experience with DBT, including model development, testing, and documentation
• Optimize data pipelines, ETL processes, and data integrations for large-scale data analytics use cases
• 7+ years of experience building and maintaining data pipelines and ETL/ELT processes in data-intensive organizations
• Design, build, and maintain scalable data pipelines using Snowflake and DBT
• Develop and manage ETL processes to ingest data from various sources into Snowflake (a minimal Python sketch of this kind of load step follows this list)
• Strong coding skills in Python and SQL for manipulating and analyzing data
• Hands-on experience with data movement using Snowpipe, SnowSQL, etc.
• Able to build data integrations and ingestion pipelines for streaming and batch data
• 5 years of designing and implementing operational, production-grade, large-scale data pipelines, ETL/ELT, and data integration solutions
• Hands-on experience with cloud platforms such as AWS and Google Cloud
• Proficiency with Kafka, AWS S3, SQS, Lambda, Pub/Sub, AWS DMS, and Glue

Preferred - Good to Have
• Familiarity with API security frameworks, token management, and user access control, including OAuth, JWT, etc.
• Background in healthcare data, especially patient-centric clinical data and provider data, is a plus
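As a rough illustration of the batch ETL work described above, here is a minimal Python sketch of a load step that copies staged files from AWS S3 into a Snowflake table using the snowflake-connector-python package. The connection parameters, warehouse/database/schema, and the RAW.PATIENT_ENCOUNTERS table and RAW.S3_LANDING_STAGE stage names are hypothetical placeholders, not details from this listing.

```python
# Minimal sketch (hypothetical names throughout) of a batch ETL step:
# copy staged files from AWS S3 into a Snowflake table with Python.
import os

import snowflake.connector  # pip install snowflake-connector-python


def load_s3_batch(target_table: str, stage_name: str) -> int:
    """Run COPY INTO from an external S3 stage; return the number of files processed."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ETL_WH",    # hypothetical warehouse
        database="ANALYTICS",  # hypothetical database
        schema="RAW",          # hypothetical schema
    )
    cur = conn.cursor()
    try:
        # COPY INTO picks up any new files in the stage; Snowflake skips files it
        # has already loaded, which keeps reruns of the job idempotent.
        cur.execute(
            f"COPY INTO {target_table} FROM @{stage_name} "
            "FILE_FORMAT = (TYPE = 'PARQUET') MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE"
        )
        return len(cur.fetchall())  # COPY INTO returns one result row per file
    finally:
        cur.close()
        conn.close()


if __name__ == "__main__":
    n = load_s3_batch("RAW.PATIENT_ENCOUNTERS", "RAW.S3_LANDING_STAGE")
    print(f"Processed {n} staged files")
```

In a production setting a step like this would typically be orchestrated (and the downstream transformations modeled and tested in dbt) rather than run ad hoc.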