

Ledelsea
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Mid-Level Data Engineer in Arden Hills, MN, with a contract length of "unknown" and a pay rate of "unknown." Requires 4–6 years of experience in SQL, data engineering, and Snowflake, with industry experience in Manufacturing or Agriculture preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
March 13, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Greater Minneapolis-St. Paul Area
🧠 - Skills detailed
#Apache Spark #API (Application Programming Interface) #Replication #Agile #Vault #Data Processing #Data Management #Scala #SQL (Structured Query Language) #Qlik #"ETL (Extract, Transform, Load)" #Data Architecture #Databricks #Data Modeling #Data Engineering #GIT #Programming #Snowflake #DevOps #Spark (Apache Spark) #Data Lake #Metadata #Data Lakehouse #Python #Automated Testing #Data Vault #Scripting #Data Security #Datasets #Automation #Data Replication #Version Control #Data Warehouse #Data Pipeline #Security #Business Analysis #Data Integration
Role description
• THIS ROLE IS OPEN TO LOCAL CANDIDATES ONLY
• Must be able to do onsite interviews in the Minneapolis/St. Paul area and work onsite in Arden Hills, MN.
• PLEASE DO NOT APPLY IF YOU ARE NOT LOCAL TO THE STATE OF MINNESOTA
Mid-Level Data Engineer (4–6 Years of Experience)
Position Overview
As a Mid-Level Data Engineer, you will play a crucial role in developing and maintaining data solutions in collaboration with business analysts and under the guidance of Lead Data Engineers.
This role focuses on building reliable data pipelines, ensuring high-quality data assets, and supporting modern data architecture initiatives. You will contribute to data management practices that improve the accuracy, accessibility, and trustworthiness of organizational data.
Required Competencies & Skills
Experience
• 4–6 years of experience in SQL, data engineering, and data modeling techniques.
• Experience delivering large-scale reporting or data engineering initiatives across the full project lifecycle.
Data Platforms
• Snowflake: Minimum 1 year of hands-on experience.
• Experience working with data warehouses and data lakes.
Data Pipeline Development
• Experience working with heterogeneous datasets.
• Proven ability to design, build, and optimize data pipelines, pipeline architectures, and integrated datasets.
• Familiarity with data integration technologies including:
• ETL / ELT
• Data replication / CDC
• Message-oriented data movement
• API-based data integration
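For illustration only (not part of the posting's requirements), the kind of pipeline work described above can be sketched as a minimal ETL step in Python, using the standard-library sqlite3 module as a stand-in for a warehouse; all table and column names here are hypothetical:

```python
import sqlite3

def run_etl(source_rows):
    """Minimal ETL sketch: extract raw rows, transform them, and load
    them into an in-memory SQLite table standing in for a warehouse."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (order_id INTEGER, amount_usd REAL)")
    # Transform: drop invalid rows and convert cents to dollars.
    cleaned = [(r["id"], r["amount_cents"] / 100)
               for r in source_rows if r["amount_cents"] >= 0]
    conn.executemany("INSERT INTO orders VALUES (?, ?)", cleaned)
    conn.commit()
    return conn.execute(
        "SELECT COUNT(*), SUM(amount_usd) FROM orders").fetchone()

rows = [{"id": 1, "amount_cents": 1250},
        {"id": 2, "amount_cents": -5}]  # second row fails validation
print(run_etl(rows))  # (1, 12.5)
```

In a real engagement the same extract–transform–load shape would be applied with Snowflake, Databricks, or a CDC tool rather than SQLite.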
DevOps & Automation
• Experience with DevOps practices and CI/CD pipelines.
• Exposure to automated testing frameworks for data systems.
Programming & Scripting
• Experience with scripting languages, preferably Python.
Data Architecture
• Familiarity with modern data architectures, including:
• Data lakehouse
• Data warehouses
• Data lakes
Data Pipeline Orchestration
• Experience managing data flow orchestration and pipeline scheduling.
Metadata Management
• Experience implementing metadata management practices.
• Familiarity with metadata-driven design for improving governance and usability.
Data Security
• Working knowledge of data security practices, including:
• Encryption
• Data anonymization
• Data masking
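As a purely illustrative sketch of the masking and anonymization practices listed above (the field names and rules are assumptions, not from the posting):

```python
import hashlib

def mask_record(record):
    """Illustrative only: pseudonymize the email with a truncated SHA-256
    digest and mask all but the last four digits of the phone number."""
    out = dict(record)
    out["email"] = hashlib.sha256(record["email"].encode()).hexdigest()[:16]
    out["phone"] = "*" * (len(record["phone"]) - 4) + record["phone"][-4:]
    return out

masked = mask_record({"email": "jane@example.com", "phone": "6125550199"})
print(masked["phone"])  # ******0199
```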
Infrastructure & Scalability
• Understanding of distributed data processing environments and infrastructure considerations when scaling pipelines.
Version Control
• Strong understanding of version control systems (e.g., Git) for maintaining code integrity and collaboration.
Professional Skills
• Strong analytical and problem-solving abilities.
• Ability to conduct root cause analysis on data issues.
• Strong communication and coordination skills.
• Ability to work independently with minimal supervision.
Preferred Competencies & Skills
Data Platform Experience
Experience working with the following technologies:
• Snowflake (including Streams & Tasks and Dynamic Tables)
• Databricks or Apache Spark
• Qlik (Replicate and Compose)
• At least 1 year of hands-on experience with Databricks and Snowflake repositories.
Development Methodologies
• Experience working in Agile environments.
Industry Experience
• Experience within the Manufacturing or Agriculture industries.
Data Modeling
• Knowledge of the Data Vault methodology, including modeling and architecture principles.
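For illustration (all table and column names here are hypothetical), the two core Data Vault structures referenced above can be sketched with sqlite3: a hub holding the business key plus load metadata, and a satellite holding descriptive attributes that change over time.

```python
import sqlite3

# Hub: one row per business key, with load metadata.
# Satellite: descriptive attributes, keyed by hash key + load date.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,  -- hash key derived from the business key
    customer_id   TEXT,              -- business key
    load_date     TEXT,
    record_source TEXT)""")
conn.execute("""CREATE TABLE sat_customer_details (
    customer_hk TEXT,
    load_date   TEXT,
    name        TEXT,
    city        TEXT,
    PRIMARY KEY (customer_hk, load_date))""")
conn.execute(
    "INSERT INTO hub_customer VALUES ('hk1', 'C-1001', '2026-03-13', 'crm')")
conn.execute(
    "INSERT INTO sat_customer_details VALUES "
    "('hk1', '2026-03-13', 'Jane Doe', 'Arden Hills')")
row = conn.execute("""SELECT h.customer_id, s.name
    FROM hub_customer h JOIN sat_customer_details s
    ON h.customer_hk = s.customer_hk""").fetchone()
print(row)  # ('C-1001', 'Jane Doe')
```

Links between hubs (omitted here) would complete the hub/link/satellite trio that Data Vault modeling is built on.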
Innovation
• Ability to apply creative problem-solving and innovative thinking to technical challenges.






