

Stott and May
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 6-month, fully remote, rolling contract paying $70-$80/hour (W2) or $80-$90/hour (LLC). Key skills include Python, SQL, and experience with cloud data warehouses (Snowflake, BigQuery, or Redshift) and ETL/ELT pipelines.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
720
🗓️ - Date
March 10, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Quality #Datasets #Snowflake #Monitoring #dbt (data build tool) #AWS (Amazon Web Services) #Cloud #Storage #GCP (Google Cloud Platform) #Data Modeling #Python #SQL (Structured Query Language) #Batch #ML (Machine Learning) #PySpark #Scala #Data Pipeline #Data Warehouse #BigQuery #Databases #SaaS (Software as a Service) #Redshift #Data Engineering #Airflow #Data Framework #ETL (Extract, Transform, Load) #Documentation #Spark (Apache Spark)
Role description
Overview
We are looking for a Data Engineer to design and build scalable data pipelines while supporting the architecture and optimization of a modern cloud data warehouse. This role will focus on ingesting data from multiple sources, transforming it into reliable datasets, and ensuring it is efficiently structured for downstream analytics and reporting.
This is a 6-month, fully remote, rolling contract.
This role pays $70-$80/hour (W2) or $80-$90/hour (LLC).
Key Responsibilities
• Design and develop scalable ETL/ELT pipelines to ingest data from APIs, SaaS platforms, and internal systems (a minimal pipeline sketch follows this list)
• Build and maintain data workflows for batch and near-real-time processing
• Develop data transformation logic using SQL and Python within the warehouse environment
• Optimize data warehouse performance, storage design, and query efficiency
• Implement data quality checks and monitoring across ingestion and transformation layers
• Collaborate with analytics teams to ensure datasets are reliable and properly structured for downstream use
• Maintain documentation and best practices for pipeline development and warehouse modeling
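For illustration, here is a minimal sketch of the kind of daily batch pipeline described above, written against the Airflow 2.4+ TaskFlow API. The API endpoint, field names, and load step are hypothetical placeholders, not details of this role.

```python
# Minimal sketch of a daily extract -> transform -> quality check -> load
# pipeline using the Airflow TaskFlow API (Airflow 2.4+).
# "api.example.com" and the field names are hypothetical.
from datetime import datetime

import requests
from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def orders_pipeline():
    @task
    def extract() -> list[dict]:
        # Pull raw records from a (hypothetical) SaaS API endpoint.
        resp = requests.get("https://api.example.com/v1/orders", timeout=30)
        resp.raise_for_status()
        return resp.json()["orders"]

    @task
    def transform(records: list[dict]) -> list[dict]:
        # Keep only well-formed rows and normalize field names.
        return [
            {"order_id": r["id"], "amount_usd": float(r["amount"])}
            for r in records
            if r.get("id") is not None and r.get("amount") is not None
        ]

    @task
    def check_quality(rows: list[dict]) -> list[dict]:
        # Fail the run early if the batch is empty or contains bad values.
        if not rows:
            raise ValueError("quality check failed: empty batch")
        if any(r["amount_usd"] < 0 for r in rows):
            raise ValueError("quality check failed: negative amounts")
        return rows

    @task
    def load(rows: list[dict]) -> None:
        # A real pipeline would write to the warehouse here; printing keeps
        # the sketch self-contained.
        print(f"would load {len(rows)} rows into the warehouse")

    load(check_quality(transform(extract())))


orders_pipeline()
```

In a real pipeline the load step would use a warehouse provider (for example, the Snowflake operator) and larger batches would be staged in cloud storage rather than passed between tasks via XCom; the sketch keeps everything in-process to stay self-contained.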
Required Experience
• Strong experience building data pipelines using Python and SQL
• Hands-on experience with modern cloud data warehouses (Snowflake, BigQuery, or Redshift)
• Experience with workflow orchestration tools such as Airflow or Prefect
• Experience processing large datasets using Spark or distributed data frameworks (see the PySpark sketch after this list)
• Strong understanding of data modeling and warehouse design principles
• Experience integrating data from APIs, databases, and SaaS applications
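For the Spark point above, here is a small PySpark sketch that aggregates a large event dataset into daily counts; the bucket paths and column names are hypothetical.

```python
# Sketch: aggregate raw events into per-day, per-type counts with PySpark.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_event_counts").getOrCreate()

# Read a (hypothetical) large Parquet event dataset from object storage.
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Derive the event date and count events per day and type.
daily_counts = (
    events.withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"))
)

# Write the aggregate back partitioned by day for efficient downstream queries.
(
    daily_counts.write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/marts/daily_event_counts/")
)

spark.stop()
```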
Nice to Have
• Experience working in high-volume data environments
• Familiarity with dbt for transformation workflows (see the sketch after this list)
• Exposure to machine learning data pipelines
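On the dbt point above: dbt models are SQL transformations usually run via the dbt CLI, and since dbt-core 1.5 they can also be invoked from Python. A minimal sketch, where the model name "stg_orders" is a hypothetical example:

```python
# Minimal sketch: run a dbt model programmatically (requires dbt-core 1.5+).
from dbt.cli.main import dbtRunner

result = dbtRunner().invoke(["run", "--select", "stg_orders"])
if not result.success:
    raise RuntimeError("dbt run failed")
```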
Tech Stack Example
• Python
• SQL
• Snowflake / BigQuery
• Airflow
• Spark / PySpark
• Cloud (AWS or GCP)