

CBTS
Data Engineer (W2 Contract Only)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (W2 Contract Only) in Cincinnati, OH, for 12 months at a competitive pay rate. Key skills include ETL, Python, and SQL. Experience with machine learning projects and familiarity with AWS SageMaker, Snowflake, and dbt are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 6, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cincinnati, OH
-
🧠 - Skills detailed
#Data Pipeline #Cloud #AWS SageMaker #Databricks #Datasets #dbt (data build tool) #AWS (Amazon Web Services) #ML (Machine Learning) #Python #SageMaker #ETL (Extract, Transform, Load) #Deployment #Scala #Data Science #Data Engineering #Snowflake #SQL (Structured Query Language)
Role description
Role: Data Engineer
Location: Cincinnati, OH - In office 4 days a week minimum (Monday–Thursday)
Contract: 12 Months
Must Have Skills
• ETL
• Python
• SQL
Nice To Have
• AWS SageMaker
• dbt
• Snowflake
Job Description:
• We’re hiring a Data Engineer to join our client’s newly launched Machine Learning Data Enablement team. This team builds high-quality, scalable data pipelines that power machine learning models across the enterprise, deployed in AWS SageMaker.
• We’re looking for an early-career professional who’s excited to grow in a hands-on data engineering role. Ideal candidates will have experience working on machine learning–related projects or have partnered with data science teams to support model development and deployment — and have a strong interest in enabling ML workflows through robust data infrastructure.
• You’ll work closely with data scientists and ML engineers to deliver curated, production-ready datasets and help shape how machine learning data is delivered across the bank. You should have solid SQL and Python skills, a collaborative mindset, and a strong interest in modern data tooling. Experience with Snowflake, dbt, or cloud data platforms is a strong plus. Familiarity with ML tools like SageMaker or Databricks is helpful but not required — we’re happy to help you learn.
• This is a hands-on role with high visibility and high impact. You’ll be joining a team at the ground level, helping to define how data powers machine learning at scale.






