

Data Analytics
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Analytics position in Dallas, TX, lasting 6 months with possible extension. Requires 5+ years in data engineering/analytics, expertise in SQL, Snowflake, and ETL processes, preferably with manufacturing or supply chain experience.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 30, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed: #Python #Batch #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Alteryx #SnowPipe #Azure #Databases #Agile #Snowflake #Azure Data Factory #Data Ingestion #Data Pipeline #SAP #Data Cleansing #Jenkins #Libraries #Databricks #Deployment #Datasets #ADF (Azure Data Factory) #Scala #Liquibase #Database Administration #Data Engineering #GitHub #SQL Queries #ML (Machine Learning) #Matillion #Pandas #DevOps #SSIS (SQL Server Integration Services)
Role description
Title - Data Analytics
Location - Dallas, TX
Duration - 6 months with possible extension
Key Responsibilities
β’ Develop and optimize complex batch and near-real-time SQL queries to meet product requirements and business needs.
• Design and implement core foundational datasets that are reusable, scalable, and performant.
β’ Architect, implement, deploy, and maintain data-driven solutions in Snowflake and Databricks.
β’ Develop and manage data pipelines supporting multiple reports, tools, and applications.
β’ Conduct advanced statistical analysis to yield actionable insights, identify correlations/trends, measure performance, and visualize disparate sources of data.
Required Skills and Experience
• 5+ years in operations, supply chain, data engineering, data analytics, and/or database administration.
• Experience in the design, implementation, and optimization of Snowflake or other relational SQL databases
β’ Experience with data cleansing, curation, mining, manipulation, and analysis from disparate systems
• Experience with configuration control (GitHub preferred) and Data DevOps practices using GitHub Actions, Jenkins, or other deployment pipelines that provide Continuous Integration and Continuous Delivery (CI/CD)
β’ Experience with Data Extract, Transform and Load (ETL) processes using SQL as the foundation
• Experience with database structures and modeling implementations such as third normal form
Preferred Skills
• Experience with Manufacturing, Operations, and/or Supply Chain processes and systems
β’ Experience using MRP/ERP systems (SAP)
β’ Experience with Python and related libraries such as Pandas for advanced data analytics
• Experience with schema deployment solutions such as schemachange or Liquibase
β’ Working knowledge of Agile Software development methodologies
β’ Ability to filter, extract, and analyze information from large, complex datasets
• Strong verbal and written communication skills for cross-functional collaboration
β’ Experience with data deployment solutions such as Snowflake Tasks, Matillion, SSIS, Azure Data Factory (ADF) or Alteryx
β’ Experience with Snowflake Streams, Stages and Snowpipes for data ingestion
β’ Experience with VS Code and GitHub Desktop for integrated development
β’ Experience with web application relational database models and APIs
β’ Knowledge of statistical modeling and machine learning methods
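To illustrate the kind of pandas-based data cleansing and analysis the preferred skills describe, here is a minimal sketch. All table names, columns, and values are hypothetical, invented for illustration only: it joins two disparate sources (orders and shipments), cleanses missing values, and derives a simple per-plant fill-rate metric.

```python
import pandas as pd

# Hypothetical source 1: orders, with a missing quantity to cleanse.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "plant": ["DAL", "DAL", "HOU", "HOU"],
    "qty": [10, 12, None, 8],
})

# Hypothetical source 2: shipments; order 4 has not shipped yet.
shipments = pd.DataFrame({
    "order_id": [1, 2, 3],
    "shipped_qty": [10, 7, 5],
})

# Cleansing: treat a missing ordered quantity as 0 and enforce integer dtype.
orders["qty"] = orders["qty"].fillna(0).astype(int)

# Join the disparate sources; unshipped orders get shipped_qty = 0.
merged = orders.merge(shipments, on="order_id", how="left")
merged["shipped_qty"] = merged["shipped_qty"].fillna(0).astype(int)

# Simple performance metric: fill rate (shipped / ordered) per plant.
fill_rate = (
    merged.groupby("plant")[["qty", "shipped_qty"]]
    .sum()
    .assign(fill_rate=lambda d: d["shipped_qty"] / d["qty"])
)
print(fill_rate)
```

In practice the same cleanse-join-aggregate pattern would run against datasets extracted from Snowflake or an MRP/ERP system rather than in-memory literals.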