

Gravity IT Resources
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-month contract-to-hire, hybrid in Charlotte, NC. It requires 5-10 years of experience; strong skills in Databricks, Snowflake, SQL, and PySpark; and familiarity with SAP ECC, preferably in a supply-chain/manufacturing context.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
March 6, 2026
Duration
More than 6 months
Location
Hybrid
Contract
Unknown
Security
Unknown
Location detailed
Charlotte Metro
Skills detailed
#PySpark #ETL (Extract, Transform, Load) #SSRS (SQL Server Reporting Services) #Spark SQL #Data Pipeline #Microsoft Power BI #SAP HANA #Data Storage #SQL (Structured Query Language) #Snowflake #Data Warehouse #Azure #Batch #Data Engineering #SAP #Consulting #BI (Business Intelligence) #SSIS (SQL Server Integration Services) #Azure Data Factory #ADF (Azure Data Factory) #Databricks #Spark (Apache Spark) #Oracle #Storage
Role description
Contract Senior Data Engineer
Location: Hybrid (Charlotte, NC)
Duration: 6 months, contract-to-hire
Team Size: 7 (6 staff + lead)
Overview
We are seeking a Senior Data Engineer to support the stabilization and optimization of our data warehouse. This is a hands-on, contract role with approximately 50% coding and 50% design/consulting responsibilities. The ideal candidate will have strong experience in Databricks, Snowflake, and SAP ECC, with a background in supply-chain or manufacturing data preferred.
Primary goals:
• Stabilize the bronze (raw extract) layer of the data warehouse.
• Optimize the silver/gold medallion layers for performance and reliability.
• Reduce overnight ETL batch lag (current window: midnight → ~7 AM).
• Consult on pipeline design and recommend efficiency improvements.
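As a rough illustration of the bronze/silver/gold medallion pattern named in the goals above: the sketch below deduplicates and type-casts raw bronze records into a silver layer, then aggregates to a gold layer. In Databricks this logic would live in a PySpark notebook; plain Python is used here so the sketch stands alone, and the record shapes, field names, and cleaning rules are illustrative assumptions, not the client's actual schema.

```python
# Minimal medallion-layer sketch: bronze (raw) -> silver (clean) -> gold (aggregated).
# All fields and rules below are hypothetical, for illustration only.

bronze = [  # raw extract: duplicates, string-typed quantities, a bad row
    {"order_id": "1001", "plant": "CLT", "qty": "5"},
    {"order_id": "1001", "plant": "CLT", "qty": "5"},   # exact duplicate
    {"order_id": "1002", "plant": "CLT", "qty": "3"},
    {"order_id": "1003", "plant": "GSO", "qty": None},  # missing qty, dropped
]

def to_silver(rows):
    """Deduplicate on order_id, drop rows with missing qty, cast types."""
    seen, out = set(), []
    for r in rows:
        if r["qty"] is None or r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        out.append({"order_id": int(r["order_id"]),
                    "plant": r["plant"],
                    "qty": int(r["qty"])})
    return out

def to_gold(rows):
    """Aggregate silver rows into per-plant totals for reporting."""
    totals = {}
    for r in rows:
        totals[r["plant"]] = totals.get(r["plant"], 0) + r["qty"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'CLT': 8}
```

The same shape maps directly onto the role's stack: bronze as raw ADF extracts, `to_silver` as a PySpark transformation, `to_gold` as the reporting layer consumed by Power BI.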
Work model: Hybrid, approximately 3 days per week onsite
Key Responsibilities
• Participate in a workshop to clarify detailed scope and stabilization priorities.
• Collaborate with team members and stakeholders to design and implement efficient data pipelines.
• Build structurally sound data for the silver-layer warehouse using SQL and PySpark.
• Optimize ETL processes to enable near-real-time operational visibility.
• Support onboarding and knowledge transfer to other engineers.
Technical priorities:
• Databricks (notebooks, PySpark, SQL): primary focus.
• Snowflake: data warehousing and optimization.
• SAP ECC / Oracle table structure knowledge, especially for supply-chain/manufacturing data.
• Azure Data Factory: extraction pipelines.
• Power BI: dashboards and reporting; SSIS/SSRS not required.
Technical Stack & Architecture
• ETL Extraction: Azure Data Factory from SAP HANA
• Transformation: Databricks notebooks with PySpark/SQL
• Data Storage: Snowflake
• Consumption/Reporting: Power BI
• Batch Window: 12:00–12:30 AM → ~7:00 AM
Operational challenge: current pipelines deliver third-shift manufacturing data a day late, limiting timely decision-making. The candidate will help redesign the architecture for faster, more reliable processing.
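One common way to shrink a long overnight batch window like the one described above is incremental loading: process only rows newer than the last high-water mark instead of re-running the full extract. The sketch below shows the watermark idea in dependency-free Python; the timestamps, field names, and watermark store are assumptions for illustration, not the client's design.

```python
# Incremental-load sketch: select only rows newer than the last high-water mark,
# then advance the mark. All data and field names below are hypothetical.

from datetime import datetime

# Last event time successfully loaded by the previous run.
watermark = datetime(2026, 3, 5, 23, 0)

source = [
    {"event_time": datetime(2026, 3, 5, 22, 30), "line": "A", "units": 40},  # already loaded
    {"event_time": datetime(2026, 3, 5, 23, 45), "line": "A", "units": 12},  # third shift
    {"event_time": datetime(2026, 3, 6, 2, 10),  "line": "B", "units": 7},   # third shift
]

def incremental_batch(rows, since):
    """Return rows newer than the watermark, plus the advanced watermark."""
    fresh = [r for r in rows if r["event_time"] > since]
    new_mark = max((r["event_time"] for r in fresh), default=since)
    return fresh, new_mark

fresh, watermark = incremental_batch(source, watermark)
print(len(fresh), watermark)  # 2 2026-03-06 02:10:00
```

Run more frequently (e.g., hourly micro-batches), this pattern surfaces third-shift data the same morning instead of a day late; in the role's stack the equivalent would be incremental ADF copies or Spark Structured Streaming with watermarks.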
Candidate Requirements
• Experience: 5–10 years as a Data Engineer or in a similar role (5 years minimum).
• Strong hands-on experience with SQL, PySpark, and Databricks.
• Familiarity with Snowflake and SAP ECC.
• Background in supply-chain or manufacturing data preferred.
• Proven ability to consult on data pipeline design and performance optimization.
• Able to work independently and collaboratively in a hybrid/remote setup.
Interview Process
1. Virtual one-on-one (30 min, with camera) with the hiring manager.
2. Cultural/fit interview (30 min, onsite if local).
3. Panel interview (technical deep dive, same day; 1–1.5 hrs total).
Total candidate time: 1.5β2 hours