

Holistic Partners, Inc
Data Engineer-Local Only
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Cincinnati, OH (Hybrid) on a 6+ month contract. It has a strong SQL focus and requires experience with Snowflake, ETL, and data warehousing; familiarity with Python and cloud platforms is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 8, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cincinnati, OH
-
🧠 - Skills detailed
#Python #Data Science #Snowflake #ML (Machine Learning) #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Engineering #Data Pipeline #Cloud #dbt (data build tool)
Role description
Job Title: Data Engineer
Location: Cincinnati, OH (Hybrid)
Duration: 6+ Months
Interview Process: Video Interview
Role Overview
Fifth Third Bank is seeking a Data Engineer to join the Data Science Enablement team as a backfill contractor. This role will focus on building, optimizing, and supporting data pipelines and feature stores that enable machine learning models.
Must-Have Skills
• Strong SQL (heavy SQL focus)
• Snowflake
• ETL / Data Pipelines
• Data Warehousing experience
• Understanding of data engineering best practices and ML model lifecycle
Nice-to-Have Skills
• Python
• Cloud platforms
• General understanding of Data Science / ML workflows
Responsibilities
• Build and optimize data pipelines and feature stores using Snowflake and dbt
• Support existing ML models and improve performance and efficiency
• Collaborate with data scientists on data preparation for model training
• Work within best practices for data engineering and analytics
• Assist with ongoing optimization of existing data models and pipelines
