

Gravity IT Resources
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Cincinnati, OH (Hybrid) with a contract length of "unknown" at a pay rate of "unknown." Requires 5+ years of data platform experience, advanced SQL and Python skills, and familiarity with Snowflake, dbt, and Airflow.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 6, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cincinnati Metropolitan Area
-
🧠 - Skills detailed
#Docker #Python #Visualization #Snowflake #Scala #Data Modeling #Data Pipeline #Observability #AWS (Amazon Web Services) #JSON (JavaScript Object Notation) #Documentation #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Governance #dbt (data build tool) #Code Reviews #Fivetran #Data Quality #Data Engineering #Airflow #Data Lifecycle #Automation #Git
Role description
Senior Data Engineer
Location: Cincinnati, OH (Hybrid)
Travel: Occasional (approximately 3–5 trips per year)
Work Authorization: US Citizen or Green Card Holder ONLY (We cannot consider H-1B, OPT, EAD, etc.)
About the Role
We’re looking for a seasoned Senior Data Engineer to play a key role in designing, building, and scaling the data foundation that supports both analytics and operational use cases. In this role, you’ll own the end-to-end data lifecycle—from ingestion to transformation and modeling—ensuring our data is trusted, well-governed, and easy to use across the organization.
You’ll work closely with engineering, analytics, and business partners to deliver performant, well-documented data solutions that enable insights, automation, and operational efficiency.
What You’ll Be Responsible For
• Design, build, and optimize scalable data pipelines using Fivetran, Airflow (Astronomer), AWS, and related tooling
• Develop high-quality dbt transformations and Snowflake data models optimized for performance and reliability
• Ingest and integrate data from a variety of structured and semi-structured source systems
• Establish and reinforce data governance standards, testing practices, and overall data quality best practices
• Create and maintain clear documentation including data dictionaries, lineage diagrams, pipeline workflows, and recovery procedures
• Partner with analytics and visualization teams to ensure data models accurately reflect business logic and reporting needs
• Contribute to code reviews, testing strategies, and CI/CD pipelines using Git and containerized environments such as Docker
• Identify and implement improvements related to automation, observability, and data reliability
• Mentor junior engineers and support the adoption of sound engineering patterns and emerging technologies
• Engage with business stakeholders to understand how data is consumed and ensure solutions align with business objectives
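To make the data-quality and testing responsibilities above concrete, here is a minimal Python sketch of a transformation step with batch-level quality checks. All record names, fields, and checks are illustrative, not from the posting; in practice these checks would typically live in dbt tests or pipeline tasks rather than standalone functions.

```python
import json

# Hypothetical ingestion payload: semi-structured records as they might
# arrive from an ELT tool before modeling (names are illustrative only).
RAW = '[{"order_id": 1, "amount": "19.99"}, {"order_id": 2, "amount": "5.00"}]'

def transform(raw: str) -> list[dict]:
    """Parse raw JSON and cast fields to typed columns."""
    rows = json.loads(raw)
    return [
        {"order_id": int(r["order_id"]), "amount": float(r["amount"])}
        for r in rows
    ]

def check_quality(rows: list[dict]) -> list[str]:
    """Return a list of failed checks; an empty list means the batch passes."""
    failures = []
    ids = [r["order_id"] for r in rows]
    if len(ids) != len(set(ids)):
        failures.append("order_id is not unique")
    if any(r["amount"] < 0 for r in rows):
        failures.append("amount contains negative values")
    return failures
```

Uniqueness and range checks like these mirror the built-in `unique` and `accepted_values`/range tests dbt applies to warehouse models.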
What We’re Looking For
• 5+ years of hands-on experience building and maintaining modern data platforms
• Advanced SQL and Python skills for transformation, automation, and optimization
• Strong, practical experience with Snowflake, dbt, and Airflow
• Experience working with Fivetran, AWS, and containerized tooling
• Solid understanding of data modeling, warehousing concepts, and CI/CD processes
• Comfort working with semi-structured and unstructured data formats (JSON, Parquet, Avro)
• Exposure to containerization and infrastructure-as-code principles
• Excellent written and verbal communication skills, with the ability to explain technical concepts to non-technical audiences
• Experience in healthcare or other regulated environments is a plus
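As a small illustration of the semi-structured data comfort listed above, the sketch below flattens nested JSON into warehouse-style columns using only the standard library. The record shape and naming convention are assumptions for the example; Snowflake offers similar behavior natively via the VARIANT type and `FLATTEN`.

```python
import json

def flatten(record: dict, prefix: str = "") -> dict:
    """Recursively flatten nested keys into column-style names,
    e.g. {"customer": {"id": 7}} -> {"customer_id": 7}."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "_"))
        else:
            flat[name] = value
    return flat

# Hypothetical nested event record (illustrative only).
doc = json.loads('{"id": 42, "customer": {"id": 7, "tier": "gold"}}')
row = flatten(doc)
```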
What Sets You Up for Success
• A strong sense of ownership and accountability for data quality and reliability
• A thoughtful, detail-oriented approach to problem solving
• Commitment to documentation, governance, and best engineering practices
• Curiosity and adaptability in a rapidly evolving technical landscape
• Ability to collaborate effectively across technical and business teams