Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This remote role is for a Data Engineer with a contract length of "unknown" and a pay rate of "unknown." It requires 5+ years in data engineering and experience with Snowflake, SQL, and ETL processes. US citizenship is mandatory.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 3, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
Unknown
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Dallas, TX
🧠 - Skills detailed
#GitHub #Data Analysis #"ETL (Extract, Transform, Load)" #Data Accuracy #Agile #Matillion #SQL Queries #SSIS (SQL Server Integration Services) #Jenkins #ML (Machine Learning) #SAP #Pandas #Scala #Liquibase #Databases #Alteryx #SnowPipe #Deployment #Datasets #Batch #SQL (Structured Query Language) #Data Engineering #Data Pipeline #ADF (Azure Data Factory) #Database Administration #DevOps #Data Cleansing #Databricks #Snowflake #Python
Role description
Job Title: Data Analytics Contractor

Description: As a Data Analytics Contractor, you will analyze and interpret complex data sets to provide actionable insights and support decision-making processes.
• Collect, process, and analyze large datasets to identify trends, patterns, and insights.
• Develop and maintain data models, dashboards, and reports to visualize findings.
• Utilize statistical and analytical tools to perform data analysis and generate insights.
• Collaborate with project teams to understand data requirements and deliver solutions.
• Ensure data accuracy and integrity through validation and quality checks.

Key Responsibilities
• Develop and optimize complex batch and near-real-time SQL queries to meet business needs.
• Design and implement core foundational datasets that are reusable, scalable, and performant.
• Architect, implement, deploy, and maintain data-driven solutions in Snowflake and Databricks.
• Develop and manage data pipelines supporting multiple reports, tools, and applications.
• Conduct advanced statistical analysis to yield actionable insights, identify correlations and trends, and visualize disparate data sources.

Required Skills and Experience
• 5+ years in operations, supply chain, data engineering, data analytics, and/or database administration.
• Experience with design, implementation, and optimization in Snowflake or other relational SQL databases.
• Experience with data cleansing, curation, mining, manipulation, and analysis across disparate systems.
• Experience with configuration control (GitHub preferred) and data DevOps practices using GitHub Actions, Jenkins, or other CI/CD pipelines.
• Experience with SQL-based ETL processes.
• Experience with database structures and data modeling (third normal form).
• US citizenship required.

Preferred Skills
• Experience with manufacturing, operations, and/or supply chain processes and systems.
• Experience using MRP/ERP systems (SAP).
• Experience with Python (Pandas) for advanced analytics.
• Experience with schema deployment solutions (schemachange, Liquibase).
• Working knowledge of Agile methodologies.
• Strong verbal and written communication skills.
• Experience with data deployment solutions (Snowflake Tasks, Matillion, SSIS, ADF, Alteryx).
• Experience with Snowflake Streams, Stages, and Snowpipe for ingestion.
• Experience with VS Code and GitHub Desktop for integrated development.
• Experience with relational database models and APIs.
• Knowledge of statistical modeling and machine learning methods.