

Stott and May
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with strong experience in Snowflake and Databricks, focusing on building data pipelines in AWS. It is a six-month initial contract with likely extensions, fully remote, paying $70–100 per hour, and requires SQL and Python skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
800
-
🗓️ - Date
January 9, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, United States
-
🧠 - Skills detailed
#Python #Agile #Databricks #Data Science #Spark (Apache Spark) #AWS (Amazon Web Services) #ML (Machine Learning) #Data Pipeline #SQL (Structured Query Language) #Data Engineering #dbt (data build tool) #PySpark #Snowflake #Cloud
Role description
Stott and May are working with a digital client looking to bring on an experienced Data Engineer to support data platform build and optimization work across Snowflake and Databricks in AWS.
You will join an existing agile team of six members as a senior engineer; the client is looking to add up to three engineers.
This role suits someone who is hands-on, comfortable working in modern cloud data stacks, and experienced in delivering production-grade data pipelines in real-world environments.
This is a six-month initial contract with a strong view to extend in six-month increments.
Rates are competitive, paying $70–100 per hour depending on experience, at 40 hours per week.
We engage contractors via W2 or personal LLC only and cannot work with C2C third parties.
This position is FULLY REMOTE, working in the EST time zone.
What you will be doing
Building and maintaining data pipelines using Snowflake and Databricks
Designing ELT workflows for analytics and downstream consumption
Working with structured and semi-structured data at scale
Collaborating with analytics, data science, and platform teams
Supporting performance, reliability, and cost optimization initiatives
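To give a flavor of the ELT work described above, here is a minimal, dependency-free sketch: raw semi-structured data is loaded first, then flattened with SQL inside the warehouse. This is illustrative only; sqlite3 and the table/field names stand in for the Snowflake/Databricks tooling the role actually uses.

```python
# Illustrative ELT sketch (assumption: sqlite3 stands in for the warehouse).
import json
import sqlite3

raw_events = [  # semi-structured records as they might land in a raw layer
    '{"user_id": 1, "event": "click", "meta": {"page": "home"}}',
    '{"user_id": 2, "event": "view", "meta": {"page": "pricing"}}',
]

conn = sqlite3.connect(":memory:")

# Load: land the raw JSON payloads as-is (the "EL" of ELT).
conn.execute("CREATE TABLE raw_events (payload TEXT)")
conn.executemany("INSERT INTO raw_events VALUES (?)", [(e,) for e in raw_events])

# Transform: flatten the nested structure with SQL in the warehouse (the "T").
conn.execute("""
    CREATE TABLE curated_events AS
    SELECT
        json_extract(payload, '$.user_id')   AS user_id,
        json_extract(payload, '$.event')     AS event,
        json_extract(payload, '$.meta.page') AS page
    FROM raw_events
""")

rows = conn.execute("SELECT user_id, event, page FROM curated_events").fetchall()
print(rows)
```

The same load-raw-first, transform-in-warehouse shape is what tools like dbt formalize on top of Snowflake or Databricks.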
What we are looking for
Strong experience as a Data Engineer in cloud environments
Hands-on Snowflake experience in production
Hands-on Databricks experience using Spark or PySpark
AWS experience across core services used in data platforms
Strong SQL skills and solid Python experience
Experience working in agile, delivery-focused teams
Nice to have
dbt or analytics engineering exposure
Streaming or near-real-time data experience
Experience supporting data science or ML workloads
Why this role
Modern data stack
Meaningful engineering work, not just maintenance
Flexible engagement model, extensions likely
Opportunity to work with experienced data teams
