Cypress HCM

Sr. Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer with an unspecified contract length and a pay rate of $60 - $80 per hour. Key skills include SQL, Python, Snowflake, dbt, and AWS. Healthcare industry experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
$640
🗓️ - Date
January 9, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Cincinnati, OH
🧠 - Skills detailed
#Observability #JSON (JavaScript Object Notation) #Visualization #Data Modeling #GIT #AWS (Amazon Web Services) #Data Governance #dbt (data build tool) #Data Quality #Fivetran #Snowflake #SQL (Structured Query Language) #Python #Documentation #Code Reviews #Scala #Data Pipeline #Airflow #ETL (Extract, Transform, Load) #Automation #Data Engineering #Docker
Role description
We are seeking an experienced and strategic Sr. Data Engineer to architect, implement, and optimize reliable data pipelines and models that power both our analytics and operational systems. This role owns the flow of data from ingestion through transformation and modeling in dbt and Snowflake, ensuring data is accurate, governed, accessible, and trusted across the organization. You will lead efforts across technical and business teams to deliver scalable, well-documented solutions that fuel insights, automation, and system performance.

Responsibilities
• Architect, implement, and optimize data pipelines using Fivetran, Airflow (via Astronomer), and AWS to support both analytical and operational workloads (a minimal orchestration sketch appears at the end of this listing).
• Design complex dbt transformations and Snowflake models for performance, reliability, and scalability.
• Integrate structured and semi-structured source data from multiple systems.
• Define and promote internal data governance standards, emphasize rigorous testing, and adhere to data quality and engineering best practices.
• Develop and maintain documentation, including data dictionaries, data flow diagrams, best practices, and data recovery processes, to provide clear visibility into the data ecosystem.
• Lead collaboration with analytics and visualization engineers to ensure data models align with business logic and reporting needs.
• Drive best practices in code reviews, testing, and CI/CD workflows using Git and containerized environments such as Docker.
• Continuously identify opportunities to improve data quality, automation, and observability.
• Mentor junior engineers and provide guidance on technical design, coding standards, and adoption of emerging technologies.
• Participate in cross-functional business discussions to understand how cleaned and transformed data is used, ensuring alignment with business goals.

Qualifications
• 5+ years of experience in data engineering, building and scaling modern data pipelines and models.
• Advanced proficiency in SQL and Python for data transformation, automation, and performance tuning.
• Deep hands-on experience with Snowflake, dbt, and orchestration tools such as Airflow.
• Strong knowledge of Fivetran, AWS, and Docker.
• Strong understanding of data modeling, warehousing, and CI/CD workflows.
• Clear written and verbal communication skills, including the ability to document and present technical work effectively to engineering teams and business stakeholders.
• Experience with semi-structured data formats (JSON, Parquet, Avro).
• Exposure to containerization and infrastructure-as-code concepts.
• Healthcare or regulated-industry experience preferred.

What You'll Need To Succeed
• Commitment to data governance, documentation, and data quality.
• Ownership mindset and accountability for delivering reliable solutions.
• Strong analytical and problem-solving abilities with attention to detail.
• Adaptability and proactive learning in a fast-evolving technical environment.
• Effective communication with both technical and non-technical stakeholders.

Compensation: $60 - $80 per hour

ID#: 140
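For context on the stack named above, here is a minimal sketch of the dbt-plus-Airflow orchestration pattern the responsibilities describe. It assumes Airflow 2.x with the dbt CLI installed on the worker; every name in it (DAG id, schedule, project path) is a hypothetical illustration, not taken from the employer's environment.

```python
# Minimal sketch (assumptions noted above): an Airflow 2.x DAG that
# builds dbt/Snowflake models, then runs dbt tests as a quality gate.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="dbt_daily_models",          # hypothetical DAG name
    start_date=datetime(2026, 1, 1),
    schedule_interval="@daily",         # hypothetical schedule
    catchup=False,
) as dag:
    # Build the models; Snowflake credentials are assumed to come
    # from the dbt profile configured on the worker.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/project",
    )

    # Run data-quality tests only after the models build cleanly.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/project",
    )

    dbt_run >> dbt_test
```

In a real Astronomer deployment the dbt invocation might instead go through a dedicated integration such as Astronomer's Cosmos package, but the run-then-test ordering shown is the conventional way to gate downstream consumers on data-quality checks.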