Insight Global

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 8+ years of data engineering experience; the contract length and pay rate are listed as "Unknown." The position is fully on-site in Austin, TX, and requires strong Python skills, SQL proficiency, and experience in manufacturing or supply chain analytics.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 4, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Forecasting #Python #AWS (Amazon Web Services) #Snowflake #Monitoring #Data Pipeline #Data Engineering #SaaS (Software as a Service) #BI (Business Intelligence) #SQL (Structured Query Language) #Data Governance #Storage #ETL (Extract, Transform, Load) #IoT (Internet of Things) #Automation #Palantir Foundry #Microsoft Power BI #Observability #R #Documentation #Scala #Data Modeling #Scripting #Cloud #Computer Science
Role description
Our client is seeking a highly technical Senior Data Engineer to join their Data Intelligence & Systems Architecture (DISA) team. This engineer will play a foundational role in shaping the enterprise data platform, owning the ingestion, modeling, and activation of data that powers reporting, decision-making, and automation across the organization. The role partners directly with teams across Supply Chain & Manufacturing, Finance & Accounting, HR, Software, Field Operations, and R&D. The ideal candidate combines strong data engineering expertise with business curiosity, high ownership, and a bias toward action.

Ranked Must-Haves
1. Highly technical data engineering skillset
2. End-to-end ownership of data infrastructure or projects with measurable business impact
3. Personable, collaborative, action-oriented, and comfortable working fully on-site

Role Focus Areas

Material Operations
• Data pipelines supporting concrete production workflows
• Supply chain analytics
• Cost tracking, forecasting, and inventory management

Hardware Reliability
• Machine and equipment performance monitoring
• Predictive maintenance analytics
• Field service ticket analysis

Technical Requirements
• Strong Python scripting (required)
• SQL proficiency (preferred)
• Palantir Foundry experience (highly valued but not required)
• Alternative experience: AWS, Snowflake, Power BI
• Proven end-to-end data infrastructure ownership
• Experience with IoT / hardware analytics (a plus)
• Experience integrating data from APIs, machine data sources, ERP systems, SaaS tools, and cloud storage platforms

Core Responsibilities
• Lead ingestion and transformation pipelines across internal tools, SaaS systems, and industrial data sources
• Model and maintain governed, high-quality data assets for reporting, diagnostics, forecasting, and automation
• Build analytics frameworks and operational dashboards with real-time visibility into cost, progress, equipment status, and material flow
• Partner with business and technical stakeholders to translate pain points into scalable data solutions
• Develop advanced analytics capabilities, including predictive maintenance and proactive purchasing workflows
• Implement best practices in reliability, versioning, documentation, and testing
• Mentor team members and support a culture of excellence in data engineering

Minimum Qualifications
• 8+ years in data engineering, analytics engineering, or backend software development
• Bachelor's degree in Computer Science, Data Engineering, Software Engineering, or a related field
• Hands-on expertise with Python and SQL
• Experience delivering scalable data products in fast-paced environments
• Strong understanding of data modeling, business logic abstraction, and stakeholder engagement

Preferred Experience
• Supporting Manufacturing, Field Operations, or Supply Chain teams with near real-time analytics
• Familiarity with platforms such as Procore, Coupa, or NetSuite
• Building predictive models or workflow automation
• Background in data governance, observability, or maintaining production-grade pipelines