

Charter Global
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer focused on Snowflake, lasting 12 months+, remote in Columbia, MD. Requires 5+ years in data engineering, 3+ years with Snowflake, SQL, Python, and experience with CI/CD pipelines and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 4, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Columbia, MD
-
🧠 - Skills detailed
#Compliance #SQL (Structured Query Language) #DevOps #Data Pipeline #ETL (Extract, Transform, Load) #Deployment #Kafka (Apache Kafka) #dbt (data build tool) #Cloud #Automation #Data Quality #Snowflake #Data Architecture #Python #Complex Queries #Data Transformations #Snowpark #Jenkins #Scala #Data Engineering
Role description
Title: Data Engineer (Snowflake)
Duration: 12 Months+
Location: Columbia, MD (Remote Job)
In-person interview required.
Note: The client is not willing to sponsor or transfer visas; only independent visa holders may apply.
This role is a critical contributor to the design, development, and optimization of cloud-based data solutions using Snowflake.
You’ll leverage advanced Snowflake capabilities, build modern data pipelines, and enable scalable analytics and reporting for enterprise healthcare operations. The ideal candidate will demonstrate deep Snowflake and SQL expertise, hands-on experience with Snowpark (Python), and a strong foundation in data architecture, governance, and automation.
Key Responsibilities:
Design, develop, and optimize data pipelines and transformations within Snowflake using SQL and Snowpark (Python).
Build and maintain Streams, Tasks, Materialized Views, and Dashboards to enable real-time and scheduled data operations.
Develop and automate CI/CD pipelines for Snowflake deployments (Jenkins).
Collaborate with data architects, analysts, and cloud engineers to design scalable and efficient data models.
Implement data quality, lineage, and governance frameworks aligned with enterprise standards and compliance (e.g., HIPAA, PHI/PII).
Monitor data pipelines for performance, reliability, and cost efficiency; proactively optimize workloads and resource utilization.
Integrate Snowflake with dbt and Kafka for end-to-end orchestration and streaming workflows.
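The Streams, Tasks, and change-data responsibilities above can be sketched as follows. This is a minimal illustration only; all object names (raw_orders, transform_wh, orders_reporting) are hypothetical, not taken from the posting.

```sql
-- Capture row-level changes on a source table.
CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders;

-- Scheduled task that runs only when the stream has pending changes,
-- merging them into a downstream reporting table.
CREATE OR REPLACE TASK refresh_orders_task
  WAREHOUSE = transform_wh
  SCHEDULE = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
AS
  MERGE INTO orders_reporting t
  USING raw_orders_stream s
    ON t.order_id = s.order_id
  WHEN MATCHED THEN UPDATE SET t.status = s.status
  WHEN NOT MATCHED THEN INSERT (order_id, status)
    VALUES (s.order_id, s.status);

-- Tasks are created suspended; resume to activate the schedule.
ALTER TASK refresh_orders_task RESUME;
```

In practice the MERGE body would be generated or orchestrated through dbt or a Snowpark (Python) procedure, per the responsibilities listed above.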
Required Skills & Experience:
5+ years of experience in data engineering or equivalent field.
3+ years of hands-on experience with Snowflake Data Cloud, including:
Streams, Tasks, Dashboards, and Materialized Views
Performance tuning, resource monitors, and warehouse optimization
Strong proficiency in SQL (complex queries, stored procedures, optimization).
Proficiency in Python, with demonstrated experience using Snowpark for data transformations.
Experience building CI/CD pipelines for Snowflake using modern DevOps tooling.
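The resource-monitor and warehouse-optimization skills above map to Snowflake cost controls along these lines. A hedged sketch with hypothetical names and quota values, not a prescribed configuration:

```sql
-- Monthly credit quota with a warning at 80% and a hard stop at 100%.
CREATE OR REPLACE RESOURCE MONITOR monthly_quota
  WITH CREDIT_QUOTA = 100
  FREQUENCY = MONTHLY
  START_TIMESTAMP = IMMEDIATELY
  TRIGGERS ON 80 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

-- Attach the monitor and keep the warehouse cheap when idle.
ALTER WAREHOUSE transform_wh SET RESOURCE_MONITOR = monthly_quota;
ALTER WAREHOUSE transform_wh SET AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;
```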






