

Holistic Partners, Inc
Data Engineer with Snowflake and DBT || W2 Only
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer contract position focused on Snowflake and dbt, hybrid in Oaks, PA. Key skills include GitLab CI/CD, SQL, and Apache Airflow; experience with automated testing is required, and Azure experience is desirable.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 26, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Oaks, PA
-
🧠 - Skills detailed
#BI (Business Intelligence) #Continuous Deployment #Documentation #Integration Testing #UAT (User Acceptance Testing) #Data Access #Version Control #SQL (Structured Query Language) #Automated Testing #Azure #Data Migration #Data Engineering #Data Quality #Data Pipeline #SnowSQL #Cloud #Visualization #Automation #Data Transformations #Azure cloud #Deployment #GIT #GitLab #Migration #Agile #ETL (Extract, Transform, Load) #Data Processing #Unit Testing #YAML (YAML Ain't Markup Language) #Data Analysis #Airflow #Infrastructure as Code (IaC) #Python #Scripting #dbt (data build tool) #DevOps #Snowflake #Apache Airflow
Role description
Job Opportunity: Data Engineer
Location: Hybrid, 3 days per week in Oaks, PA
Duration: Contract
Key Responsibilities
What you will do:
• Design, develop, and maintain CI/CD pipelines in GitLab for automated deployment of data platform components including dbt transformations, Airflow DAGs, and Snowflake database objects across development, QA, UAT, and production environments (a brief dbt-in-Airflow orchestration sketch follows this list).
• Implement and optimize blue-green deployment patterns and environment promotion strategies to ensure zero-downtime releases and safe rollback capabilities for the Data Cloud infrastructure.
• Build automated testing integration within deployment pipelines to validate data transformations, Snowflake stored procedures, functions, and materialized views before production promotion.
• Collaborate with QA teams to integrate validation frameworks and testing portals into the CI/CD workflow, ensuring data quality gates are enforced at each stage of the deployment process.
• Transition into hands-on data engineering work developing Snowflake data shares for cross-functional data access and building reporting analytics warehouses that consolidate data from multiple source systems including Investran/KYC, Geneva RSL, and Investier.
• Develop and optimize Snowflake objects including views, stored procedures, functions, and materialized views to support reporting and analytics requirements while maintaining performance and cost efficiency.
• Work within the Data Cloud/Azure infrastructure team to deploy and manage data pipeline components, coordinating with parallel teams handling Snowflake extracts and Reporting/PowerBI workstreams.
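For illustration only, here is a minimal sketch of how the dbt-and-Airflow orchestration described above might be wired up. The DAG name, schedule, dbt project path, and target are hypothetical placeholders, and it assumes Airflow 2.4+ with the dbt CLI installed on the worker.

```python
# Hedged sketch: a simple Airflow DAG that builds and then tests dbt models
# against a Snowflake target. All names and paths below are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_snowflake_transformations",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                       # assumes the Airflow 2.4+ keyword
    catchup=False,
) as dag:
    # Build the dbt models for this environment's Snowflake target.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt --target qa",
    )

    # Run dbt tests so failures block promotion to the next environment.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt --target qa",
    )

    dbt_run >> dbt_test
```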
What we need from you:
• Strong hands-on experience with GitLab CI/CD pipeline development and deployment automation, including YAML configuration, pipeline orchestration, and environment management strategies.
• Solid understanding of DevOps practices and principles including infrastructure as code, automated testing, continuous integration/continuous deployment, and version control workflows using Git.
• Proficiency in SQL for writing queries, stored procedures, and functions with the ability to implement data transformations based on provided specifications and requirements.
• Working knowledge of Snowflake architecture and database objects including tables, views, materialized views, stored procedures, functions, and data sharing capabilities.
• Experience with dbt (data build tool) for implementing SQL-based data transformations, including model development, testing, documentation, and deployment patterns based on existing designs.
• Familiarity with Apache Airflow for workflow orchestration, including DAG development, task dependencies, and scheduling strategies for data pipeline automation.
• Good Python scripting skills for automation tasks, data processing, and integration work between various platform components (a short Python quality-gate example follows this list).
• Demonstrated ability to work in Agile environments and collaborate effectively with cross-functional teams including QA engineers, data analysts, business stakeholders, and infrastructure teams.
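To make the automated-testing expectation concrete, here is a rough sketch of a Python quality gate that a GitLab CI job could run before promoting a Snowflake environment. The warehouse, database, and table names are hypothetical, and it assumes the snowflake-connector-python package with credentials injected as CI variables.

```python
# Hedged sketch of a data-quality gate run inside a CI job before promotion.
# Connection details and object names are hypothetical placeholders.
import os

import snowflake.connector


def row_count(cursor, table: str) -> int:
    """Return the row count of a fully qualified Snowflake table."""
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    return cursor.fetchone()[0]


def main() -> None:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="QA_WH",          # hypothetical warehouse
        database="ANALYTICS_QA",    # hypothetical QA database
    )
    try:
        cur = conn.cursor()
        # Fail the pipeline stage if the reporting table came out empty.
        count = row_count(cur, "REPORTING.FUND_POSITIONS")  # hypothetical table
        if count == 0:
            raise SystemExit("Quality gate failed: REPORTING.FUND_POSITIONS is empty")
        print(f"Quality gate passed: {count} rows")
    finally:
        conn.close()


if __name__ == "__main__":
    main()
```

In a pipeline, a check like this would typically sit in a stage between deployment and environment promotion, with the non-zero exit code failing the job.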
Additional knowledge/experience desired:
• Experience working with Azure cloud infrastructure and services, particularly as they relate to data platform deployments and CI/CD tooling integration.
• Knowledge of SnowSQL scripting and command-line interfaces for Snowflake automation and deployment scripting (a brief example follows this list).
• Understanding of testing methodologies for data pipelines including unit testing, integration testing, and user acceptance testing coordination.
• Exposure to reporting and visualization tools such as PowerBI or similar business intelligence platforms.
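As a small illustration of the SnowSQL point above, the following hedged sketch wraps the snowsql CLI from Python to apply a deployment script. The connection name and file path are hypothetical and assume a preconfigured named connection on the CI runner.

```python
# Hedged sketch: drive SnowSQL from a deployment script. Names are hypothetical.
import subprocess


def deploy_objects(sql_file: str, connection: str = "qa") -> None:
    """Apply a SQL script (views, procedures, etc.) through the SnowSQL CLI."""
    subprocess.run(
        ["snowsql", "-c", connection, "-f", sql_file, "-o", "exit_on_error=true"],
        check=True,  # surface a non-zero exit code to the CI job
    )


if __name__ == "__main__":
    deploy_objects("deploy/create_reporting_views.sql")  # hypothetical script path
```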
What we would like from you:
• Experience managing deployments across complex multi-environment landscapes with clear separation between development, QA, UAT, and production tiers.
• Track record of implementing automated testing and validation within CI/CD pipelines to catch issues early and maintain high data quality standards.
• Strong problem-solving abilities with a mindset toward building reusable, maintainable automation solutions that can scale with project growth.
• Excellent communication skills and the ability to document technical processes clearly for knowledge transfer to QA teams and other stakeholders.
• Willingness to grow from a DevOps-focused role into broader data engineering responsibilities as the platform matures and pipeline automation stabilizes.
• Self-motivated approach to learning new technologies and adapting to the evolving needs of a large-scale data migration and analytics platform project.






