

Primary Services
Data Engineer - Snowflake
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer - Snowflake, offering a contract of unknown length at a day rate of $960. Key skills include expertise in Python, Snowflake SQL, and data visualization tools. A minimum of 5 years' experience in data engineering and familiarity with ERP/MRP systems are required.
Country
United States
Currency
$ USD
Day rate
960
Date
March 20, 2026
Duration
Unknown
Location
Unknown
Contract
Unknown
Security
Unknown
Location detailed
Houston, TX
Skills detailed
#BI (Business Intelligence) #Data Processing #Data Access #Version Control #Visualization #Automation #QlikView #Data Migration #Snowflake #Scala #Microsoft Power BI #Datasets #Cloud #SQL (Structured Query Language) #GitHub #Data Wrangling #Migration #Data Integrity #ETL (Extract, Transform, Load) #Deployment #Data Science #Data Engineering #API (Application Programming Interface) #Agile #Python #Data Architecture #Streamlit
Role description
Join a high-impact analytics initiative and build cutting-edge data solutions that drive real business outcomes. Work on complex supply chain challenges while leveraging modern cloud technologies in a fast-paced, innovation-focused environment.
Primary Services is actively recruiting for a Data Engineer to support a large, industry-leading organization. This role offers the opportunity to work as a hands-on technical expert, developing advanced data solutions that directly influence manufacturing and supply chain operations. The position requires a strong blend of data engineering expertise, business intelligence, and operational insight to deliver scalable, enterprise-grade analytics platforms.
Responsibilities
• Develop and optimize Snowflake-based data architectures, pipelines, and data models.
• Build advanced data solutions using Python and Snowflake SQL for large-scale datasets.
• Design and deploy interactive dashboards and analytics applications using tools such as Power BI, Sigma, QlikView, or Streamlit.
• Translate business requirements into scalable technical solutions that drive measurable outcomes.
• Integrate and transform ERP and MRP data to support supply chain and operational analytics.
• Support full lifecycle development including design, testing, deployment, and maintenance.
• Implement CI/CD pipelines and GitHub automation for code and data migrations across environments.
• Develop APIs and manage service accounts for secure and scalable data access.
• Deploy cloud-native and containerized solutions for enterprise analytics platforms.
• Participate in Agile sprint cycles including daily stand-ups and sprint planning sessions.
• Ensure data integrity, performance optimization, and adherence to development best practices.
• Collaborate with cross-functional stakeholders across multiple geographies.
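To give a flavor of the ERP/MRP transformation work described above, here is a minimal sketch of an order-to-cash metric computed in plain Python. The record shape and field names are hypothetical, not taken from any specific ERP system; in this role the equivalent logic would typically run as Snowflake SQL or Python over real ERP extracts.

```python
from datetime import date

# Hypothetical ERP order records; field names are illustrative only,
# not drawn from any particular ERP or MRP system.
orders = [
    {"order_id": 1001, "ordered": date(2026, 1, 5), "cash_received": date(2026, 1, 25)},
    {"order_id": 1002, "ordered": date(2026, 1, 10), "cash_received": date(2026, 2, 9)},
    {"order_id": 1003, "ordered": date(2026, 1, 12), "cash_received": None},  # still open
]

def avg_order_to_cash_days(rows):
    """Average days from order placement to cash receipt, ignoring open orders."""
    closed = [(r["cash_received"] - r["ordered"]).days for r in rows if r["cash_received"]]
    return sum(closed) / len(closed) if closed else None

print(avg_order_to_cash_days(orders))  # 25.0
```

A production version of this would push the aggregation down into Snowflake SQL rather than iterating in Python, but the business logic (order-to-cash cycle time) is the same.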
Qualifications
• Bachelor's degree or higher in a relevant field.
• Minimum of 5 years of professional experience in data engineering, data science, or analytics.
• Advanced proficiency in Python and Snowflake SQL.
• Strong experience with Snowflake architecture, optimization, and environment migrations.
• At least 3 years of experience working with ERP or MRP systems and data structures.
• Experience with manufacturing or supply chain processes including order-to-cash, procure-to-pay, and demand-to-deliver.
• Proficiency with data visualization tools such as Power BI, Sigma, QlikView, or Streamlit.
• Experience with large-scale relational data processing and data wrangling.
• Hands-on experience with CI/CD pipelines, GitHub automation, and version control.
• Experience deploying solutions in cloud environments using containerization technologies.
• Familiarity with API development, service accounts, and secure key management practices.
• Experience with test-driven development (TDD) and production deployment strategies with rollback capabilities.
• Experience with Agile development methodologies and sprint-based project execution.
• Proven ability to deliver ROI-driven analytics and enterprise-level data solutions.
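The "deployment strategies with rollback capabilities" qualification can be illustrated with a minimal sketch: keep the previously active version around so a failed post-deploy validation restores it. All names here are illustrative, not any specific tool's API.

```python
# Minimal deploy-with-rollback pattern (illustrative names, not a real tool's API):
# activate the new version, validate it, and restore the prior version on failure.

def deploy(state, new_version, validate):
    """Activate new_version; roll back to the prior version if validation fails."""
    previous = state["active"]
    state["active"] = new_version
    if not validate(new_version):
        state["active"] = previous  # rollback to the known-good version
        return False
    return True

state = {"active": "v1"}
deploy(state, "v2", validate=lambda v: True)
print(state["active"])  # v2
deploy(state, "v3", validate=lambda v: False)
print(state["active"])  # v2 (rolled back)
```

In practice the "validate" step would be a health check or smoke test, and the version swap would be a blue-green or canary switch in the CI/CD pipeline rather than an in-memory dict update.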






