

Santcore Technologies
Snowflake Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Snowflake Data Engineer on a 6–12 month contract paying $70–75/hr. Required skills include expert-level Python, dbt, and Snowflake proficiency, plus experience with enterprise systems such as Oracle Fusion and Salesforce. Candidates must be eligible to convert to permanent employment.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
600
🗓️ - Date
April 17, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
1099 Contractor
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#GitHub #Data Modeling #Data Pipeline #AI (Artificial Intelligence) #Oracle #Scala #Data Architecture #dbt (data build tool) #Data Integration #Documentation #Data Quality #"ETL (Extract, Transform, Load)" #Snowflake #Code Reviews #Python #Clustering #Data Engineering
Role description
Position: Staff Data Engineer
Location: Remote
Work Model: Remote
Duration: 6–12 Month Contract (Potential Conversion to Permanent)
Rate: $70–75/hr on 1099
Work Authorization: Must be eligible to convert to permanent (No H-1B)
Position Overview
We are seeking a Staff-level Data Engineer to design, build, and maintain scalable data pipelines and architecture. This role requires deep experience in AI-driven development and modern data engineering practices.
The ideal candidate will work closely with analytics, product, and engineering teams to develop data models, enforce data quality standards, and contribute to the evolution of the data platform.
Key Responsibilities
• Design, build, and maintain scalable ELT pipelines from source to gold layer using dbt and Snowflake (see the sketch after this list)
• Ingest and harmonize data from enterprise systems including Oracle Fusion, Salesforce, HighRadius, and Salsify
• Architect source-to-gold data models across ingestion, staging, business logic, and analytics layers
• Apply AI tools throughout the SDLC to improve development efficiency and quality
• Collaborate with stakeholders to translate business requirements into data models
• Enforce data quality, lineage, testing, and documentation standards
• Contribute to data platform architecture and continuous improvement
• Mentor team members through code reviews, pairing, and knowledge sharing
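For context, here is a minimal sketch of a staging-layer model in a source-to-gold dbt project on Snowflake, written as a dbt Python model (supported since dbt 1.3 via Snowpark); the source name oracle_fusion and all table and column names are hypothetical placeholders, not details from this posting:

    # models/staging/stg_invoices.py -- hypothetical dbt Python model
    from snowflake.snowpark.functions import col, to_date

    def model(dbt, session):
        # Materialize the result as a table in the staging layer
        dbt.config(materialized="table")

        # Read raw rows registered as a dbt source (ingestion layer)
        invoices = dbt.source("oracle_fusion", "raw_invoices")

        # Light business logic: drop null keys, normalize the date column
        return (
            invoices
            .filter(col("INVOICE_ID").is_not_null())
            .with_column("INVOICE_DATE", to_date(col("INVOICE_DATE")))
        )

Gold-layer models would then build on this one via dbt.ref, so lineage, tests, and documentation stay inside dbt's dependency graph.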
Required Technical Skills
• Expert-level proficiency in Python for pipeline development, data transformation, and orchestration
• Advanced experience with dbt for data modeling, testing, and documentation
• Deep expertise in Snowflake, including performance tuning, clustering, time travel, and data sharing (see the sketch after this list)
• Strong understanding of source-to-gold data architecture, including medallion or data mesh patterns
• Experience working with enterprise source systems such as Oracle Fusion, Salesforce, HighRadius, and Salsify
• Strong experience with AI-assisted development tools, including GitHub Copilot, OpenAI Codex, and Claude
• Ability to demonstrate AI-augmented development productivity
• Strong problem-solving and analytical skills
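To make the Snowflake items above concrete, the sketch below issues a clustering change and a time-travel query from Python using the official snowflake-connector-python package; the connection parameters and the table gold.fct_orders are placeholders, not details from this posting:

    import snowflake.connector

    # Placeholder credentials -- prefer key-pair auth or a secrets manager
    conn = snowflake.connector.connect(
        account="my_account",
        user="my_user",
        password="...",
        warehouse="TRANSFORM_WH",
        database="ANALYTICS",
    )
    cur = conn.cursor()

    # Performance tuning: define a clustering key so micro-partitions are
    # co-located by the column most queries filter on
    cur.execute("ALTER TABLE gold.fct_orders CLUSTER BY (order_date)")

    # Time travel: query the table as it looked one hour ago
    cur.execute("SELECT COUNT(*) FROM gold.fct_orders AT(OFFSET => -3600)")
    print(cur.fetchone()[0])

    conn.close()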
Required Software Skills
• Python
• dbt
• Snowflake
• GitHub Copilot
• OpenAI Codex
• Claude
Preferred Qualifications
• Experience in retail environments, including Order Management Systems (OMS)
• Experience in multichannel commerce or omnichannel data integration
• Experience in the Consumer Packaged Goods (CPG) domain
• Experience working with POS, inventory, fulfillment pipelines, or retail data flows
Additional Notes
Submission Requirements
• Resume (Word or PDF)
• Candidate location
• Work authorization status (must be eligible to convert to permanent)
• Contact information
• Availability