HireTalent - Diversity Staffing & Recruiting Firm

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of unspecified duration, offering a pay rate of "$X/hour". Candidates should have 10+ years of experience, strong SQL and Python skills, and expertise in Snowflake, Databricks, and SAP data sources. Remote work is available for U.S.-based candidates.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #BI (Business Intelligence) #dbt (data build tool) #Snowflake #Data Modeling #Scala #Data Engineering #Infrastructure as Code (IaC) #Python #Databricks #Streamlit #Terraform #Cloud #SAP #SAP HANA #SQL (Structured Query Language) #Data Processing #Data Pipeline
Role description
Senior Data Engineer
📍 Preference: Near Lake Forest, IL | Open to Remote (U.S.-based candidates)
Our Client is looking for a highly technical, hands-on Senior Data Engineer to help power a large-scale enterprise data ecosystem. This isn't a reporting role; it's a build-and-optimize role for someone who thrives in complex environments and knows how to make data move fast and reliably. If you enjoy solving performance bottlenecks, designing scalable data models, and working across modern cloud platforms like Snowflake and Databricks, this is the kind of challenge that keeps things interesting.
What You'll Be Doing
• Build and maintain scalable data pipelines pulling from complex enterprise systems like SAP S/4HANA
• Design, develop, and optimize advanced data models to support business intelligence and analytics
• Write and tune high-performance SQL and stored procedures to improve load times and reduce latency
• Work across Snowflake and Databricks in a modern cloud data stack
• Develop internal data tools and lightweight applications using Python and Streamlit
• Identify performance issues, troubleshoot bottlenecks, and implement sustainable optimization strategies
This role requires someone who can operate independently, think architecturally, and execute hands-on.
What You Bring
• 10+ years of experience in data engineering, database engineering, or related technical roles
• Deep expertise in SQL (stored procedures, query optimization, performance tuning)
• Strong Python skills in production data environments
• Experience with Snowflake and/or Databricks
• Hands-on experience working with SAP, S/4HANA, or SAP HANA data sources
• Strong understanding of data modeling principles and large-scale data processing
Nice to Have
• Experience with Streamlit or similar Python-based data app frameworks
• Exposure to Infrastructure as Code (Terraform)
• Experience with dbt or modern ELT workflows
Why This Role Stands Out
• High-visibility impact across an enterprise-scale organization
• Modern cloud data stack
• Real ownership over performance, modeling, and architecture decisions
• Opportunity to build internal tools that teams actually use
If you're a senior-level data engineer who enjoys working deep in the stack and solving meaningful performance challenges, this could be a strong fit. Open to U.S.-based remote candidates, with preference for those near Lake Forest, Illinois.