

Jobgether
Senior Data Engineer - Python and Snowflake
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer - Python and Snowflake, offering a contract length of "unknown" and a pay rate of $80 - $95 per hour. Key skills include Python, SQL, Snowflake, and dbt, with 7-10 years of data engineering experience required. Remote work is available.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
760
-
🗓️ - Date
October 22, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
1099 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Redshift #Cloud #Data Transformations #dbt (data build tool) #Automation #AI (Artificial Intelligence) #Programming #Quality Assurance #Pandas #Airflow #Data Pipeline #Linux #SQLAlchemy #Data Lifecycle #GitHub #SQL (Structured Query Language) #Scala #Spark (Apache Spark) #Python #Datasets #Libraries #BI (Business Intelligence) #Snowflake #Microsoft Power BI #Docker #Data Engineering #Data Architecture #PySpark #Databases #BigQuery #ETL (Extract, Transform, Load)
Role description
This position is posted by Jobgether on behalf of a partner company. We are currently looking for a Senior Data Engineer - Python and Snowflake in the United States.
We are seeking an experienced Senior Data Engineer to design, build, and optimize scalable data pipelines and models in a dynamic, fast-paced environment. This role focuses on integrating data into cloud-based platforms, ensuring high-quality datasets for analytics, and supporting business intelligence initiatives. You will have the opportunity to influence data architecture, implement best practices, and work closely with analysts and stakeholders to deliver impactful insights. The position emphasizes collaboration, technical excellence, and creative problem-solving, allowing you to shape the data environment while contributing to meaningful, client-facing projects.
Accountabilities:
• Design, build, and maintain ETL/ELT pipelines using Python to integrate data from APIs, flat files, and relational databases into cloud platforms such as Snowflake
• Develop and optimize data transformations using dbt to support analytics and reporting requirements
• Implement data validation, testing, and quality assurance measures to ensure reliable datasets
• Manage and automate data workflows using modern orchestration and CI/CD tools like Airflow and GitHub Actions
• Prepare structured datasets for BI tools such as Power BI to support dashboards and reports
• Collaborate with analysts, BI developers, and business stakeholders to deliver end-to-end data solutions
• Define and promote data engineering standards, frameworks, and best practices within the team
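The extract-validate-load duties above can be sketched in miniature. This is an illustrative pattern only, not the partner company's codebase: sqlite3 stands in for Snowflake (a real pipeline would use a warehouse connector such as snowflake-connector-python, with orchestration in Airflow), and the table and column names are hypothetical.

```python
import csv
import io
import sqlite3

# Raw source data: stand-in for an API response or flat-file drop.
RAW_CSV = """order_id,amount,region
1,120.50,US
2,75.00,EU
3,not_a_number,US
"""

def extract(source: str) -> list[dict]:
    """Read raw rows from a CSV source."""
    return list(csv.DictReader(io.StringIO(source)))

def validate(rows: list[dict]) -> list[dict]:
    """Basic quality gate: keep only rows whose amount parses as a number."""
    clean = []
    for row in rows:
        try:
            row["amount"] = float(row["amount"])
            clean.append(row)
        except ValueError:
            pass  # a real pipeline would route bad rows to a quarantine table
    return clean

def load(rows: list[dict], conn: sqlite3.Connection) -> int:
    """Load validated rows into a warehouse table (hypothetical schema)."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (order_id TEXT, amount REAL, region TEXT)"
    )
    conn.executemany(
        "INSERT INTO orders VALUES (:order_id, :amount, :region)", rows
    )
    return conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]

conn = sqlite3.connect(":memory:")
loaded = load(validate(extract(RAW_CSV)), conn)
print(loaded)  # 2 of the 3 raw rows survive validation
```

In a production setup the validation step would typically live in dbt tests or a dedicated data-quality framework, and the load would be scheduled and monitored by an orchestrator such as Airflow.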
Requirements
• 7-10 years of professional data engineering experience across the full data lifecycle
• Strong programming skills in Python, including data libraries such as Pandas, PySpark, or SQLAlchemy
• Advanced SQL expertise and hands-on experience with dbt for data transformations
• Experience with Snowflake or similar cloud data platforms (e.g., Redshift, BigQuery)
• Proficiency with Linux, Docker, and GitHub Actions for environment management and CI/CD automation
• Knowledge of data architecture principles, including modeling, lineage, and orchestration
• Familiarity with BI tools such as Power BI and experience supporting analytics teams
• Ability to work effectively in a collaborative, fast-paced environment with a proactive and adaptable mindset
Benefits
• Competitive contractor rate: $80 - $95 per hour (1099 basis), based on experience
• Opportunity to work with a growing data environment and shape scalable data pipelines
• Exposure to cutting-edge cloud data platforms and modern data engineering practices
• Collaborative, team-oriented environment with a focus on technical excellence
• Flexible remote work and potential for cross-functional collaboration with analysts and business stakeholders
Jobgether is a Talent Matching Platform that partners with companies worldwide to efficiently connect top talent with the right opportunities through AI-driven job matching.
When you apply, your profile goes through our AI-powered screening process designed to identify top talent efficiently and fairly.
🔍 Our AI evaluates your CV and LinkedIn profile thoroughly, analyzing your skills, experience, and achievements.
📊 It compares your profile to the job's core requirements and past success factors to determine your match score.
🎯 Based on this analysis, we automatically shortlist the 3 candidates with the highest match to the role.
🧠 When necessary, our human team may perform an additional manual review to ensure no strong profile is missed.
The process is transparent, skills-based, and free of bias — focusing solely on your fit for the role.
Once the shortlist is completed, we share it directly with the company that owns the job opening. The final decision and next steps (such as interviews or additional assessments) are then made by their internal hiring team.
Thank you for your interest!