

Zolon Tech Inc.
Sr Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer on a hybrid contract, with an undisclosed duration and pay rate. Candidates should have 5+ years of experience, extensive Snowflake and SQL expertise, hands-on experience with Snowpark (Python), and a background in healthcare data environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 1, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Columbia, MD
-
🧠 - Skills detailed
#Azure #Collibra #Data Governance #Data Vault #Kafka (Apache Kafka) #Data Modeling #Data Transformations #Scala #ETL (Extract, Transform, Load) #Snowflake #Batch #DevOps #Automation #Alation #Jenkins #dbt (data build tool) #Deployment #Data Engineering #Data Pipeline #Cloud #Data Quality #Complex Queries #Data Ingestion #Snowpark #Python #Vault #Metadata #SQL (Structured Query Language) #Compliance #Data Architecture
Role description
Job Description:
• Candidates local to MD, VA, and DC are preferred.
• In this role, you will be a critical contributor to the design, development, and optimization of cloud-based data solutions using Snowflake.
• You'll leverage advanced Snowflake capabilities, build modern data pipelines, and enable scalable analytics and reporting for enterprise healthcare operations.
• The ideal candidate will demonstrate deep Snowflake and SQL expertise, hands-on experience with Snowpark (Python), and a strong foundation in data architecture, governance, and automation.
Key Responsibilities:
• Design, develop, and optimize data pipelines and transformations within Snowflake using SQL and Snowpark (Python); a sketch follows this list.
• Build and maintain Streams, Tasks, Materialized Views, and Dashboards to enable real-time and scheduled data operations (see the second sketch below).
• Develop and automate CI/CD pipelines for Snowflake deployments using Jenkins (see the third sketch below).
• Collaborate with data architects, analysts, and cloud engineers to design scalable and efficient data models.
• Implement data quality, lineage, and governance frameworks aligned with enterprise standards and compliance (e.g., HIPAA, PHI/PII).
• Monitor data pipelines for performance, reliability, and cost efficiency; proactively optimize workloads and resource utilization.
• Integrate Snowflake with dbt and Kafka for end-to-end orchestration and streaming workflows.
• Conduct root cause analysis and troubleshooting for complex data and performance issues in production.
• Collaborate across technology and business teams to translate complex data needs into elegant, maintainable solutions.
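To illustrate the first responsibility, here is a minimal Snowpark (Python) transformation sketch. The table names (RAW.CLAIMS, CURATED.CLAIMS_CLEAN), column names, and connection parameters are hypothetical placeholders, not part of this posting:

```python
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, lit, to_date, upper

# Hypothetical connection details; in practice these come from a secrets store.
connection_parameters = {
    "account": "<account>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "TRANSFORM_WH",
    "database": "ANALYTICS",
}
session = Session.builder.configs(connection_parameters).create()

# Read a raw table, standardize a few columns, and persist the curated result.
claims = session.table("RAW.CLAIMS")
cleaned = (
    claims
    .filter(col("CLAIM_STATUS").is_not_null())
    .with_column("MEMBER_ID", upper(col("MEMBER_ID")))
    .with_column("SERVICE_DATE", to_date(col("SERVICE_DATE_RAW"), lit("YYYY-MM-DD")))
    .select("CLAIM_ID", "MEMBER_ID", "SERVICE_DATE", "CLAIM_AMOUNT")
)
cleaned.write.mode("overwrite").save_as_table("CURATED.CLAIMS_CLEAN")
```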
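The second sketch pairs a Stream with a scheduled Task so the same curation can run incrementally, issued through the same Snowpark session; all object names and the warehouse are again illustrative assumptions:

```python
# Stream captures new rows on the raw table; the task merges them every
# 15 minutes, but only when the stream actually has data.
session.sql(
    "CREATE STREAM IF NOT EXISTS RAW.CLAIMS_STREAM ON TABLE RAW.CLAIMS"
).collect()

session.sql("""
    CREATE TASK IF NOT EXISTS CURATED.MERGE_CLAIMS_TASK
      WAREHOUSE = TRANSFORM_WH
      SCHEDULE = '15 MINUTE'
      WHEN SYSTEM$STREAM_HAS_DATA('RAW.CLAIMS_STREAM')
    AS
      INSERT INTO CURATED.CLAIMS_CLEAN
      SELECT CLAIM_ID, UPPER(MEMBER_ID), TO_DATE(SERVICE_DATE_RAW), CLAIM_AMOUNT
      FROM RAW.CLAIMS_STREAM
      WHERE METADATA$ACTION = 'INSERT'
""").collect()

# Tasks are created suspended; resume to start the schedule.
session.sql("ALTER TASK CURATED.MERGE_CLAIMS_TASK RESUME").collect()
```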
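For the CI/CD responsibility, a Jenkins stage typically shells out to a migration runner such as schemachange or a small script like this third sketch; the environment variables, role, and migrations/ directory layout are assumptions for illustration:

```python
import os
import pathlib
import snowflake.connector

# Apply versioned SQL migration scripts in name order (e.g. V001__init.sql ...).
conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    role="DEPLOY_ROLE",
    warehouse="DEPLOY_WH",
)
try:
    for script in sorted(pathlib.Path("migrations").glob("*.sql")):
        # Naive statement split; adequate for simple DDL without literal semicolons.
        for stmt in script.read_text().split(";"):
            if stmt.strip():
                conn.cursor().execute(stmt)
finally:
    conn.close()
```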
Required Skills & Experience:
• 5+ years of experience in data engineering or equivalent field.
• 3+ years of hands-on experience with Snowflake Data Cloud, including:
• Streams, Tasks, Dashboards, and Materialized Views
• Performance tuning, resource monitors, and warehouse optimization
• Strong proficiency in SQL (complex queries, stored procedures, optimization).
• Proficiency in Python, with demonstrated experience using Snowpark for data transformations.
• Experience building CI/CD pipelines for Snowflake using modern DevOps tooling.
• Solid understanding of data modeling methodologies (Kimball, Data Vault, or 3NF).
• Experience with data governance, lineage, and metadata tools (Collibra, Alation, or Azure Purview).
• Strong troubleshooting, analytical, and communication skills with the ability to engage both technical and business audiences.
Preferred Qualifications:
• Experience with dbt or Kafka for orchestration and streaming.
• Exposure to data quality frameworks such as Great Expectations or Monte Carlo.
• Understanding of real-time and batch data ingestion architectures.
• Snowflake Certification (SnowPro Core or Advanced).
• Prior experience in healthcare, insurance, or other regulated data environments.





