DivIHN Integration Inc

Jr Data Engineer (ETL)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Jr Data Engineer (ETL) in Corning, NY, for 12 months at a pay rate of "TBD." Requires a Bachelor's degree and 2+ years in data engineering, ETL, SQL, and Apache Airflow. Experience in scientific environments is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 15, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Corning, NY
-
🧠 - Skills detailed
#Monitoring #Airflow #Oracle #PostgreSQL #Database Utilities #SQL (Structured Query Language) #Databases #Data Engineering #Schema Design #Automation #Data Processing #Data Quality #Data Transformations #Observability #Scripting #SQL Queries #ETL (Extract, Transform, Load) #Apache Airflow #Datasets #DBeaver #Scala #Agile #Data Integrity #Programming #Data Pipeline #Computer Science #Version Control #Data Modeling #Python #Migration #Documentation
Role description
Title: Software Engineer – ETL
Location: On-site at Corning, NY
Duration: 12 months
Work Schedule: Typically 40 hours per week; may require working weekends/holidays or longer days to support projects.
Travel: Limited to no travel required; no on-call requirements.
The manager is open to non-local candidates willing to relocate at their own expense. Only W2 candidates are eligible for this position; third-party or C2C candidates will not be considered.

Description:

Education and Experience:
This position focuses on data pipelines and workflows.
• Bachelor's degree in computer science, information systems, data engineering, or a related field, or equivalent practical experience. An Associate degree may be considered if the candidate has 3-5 years of experience beyond what is required.
• 2+ years of professional experience in data engineering, ETL development, or related work, or equivalent hands-on experience
• Experience or interest in scientific software, materials science, research environments, or technically complex domains is a plus

Scope of Position:
1. Embed within a cross-functional Agile team, participating in sprint planning, stand-ups, backlog refinement, and technical discussions.
2. Design, build, troubleshoot, and maintain ETL/ELT workflows that support application functionality, analytics, reporting, and scientific workflows.
3. Develop and manage data pipelines using Apache Airflow, ensuring reliable orchestration, scheduling, monitoring, and recovery of data processes.
4. Work with stakeholders, including software developers, scientists, and engineers, to understand data sources, workflow requirements, and downstream data needs.
5. Extract, transform, validate, and load data across systems, including relational databases such as PostgreSQL and Oracle.
6. Write, optimize, and maintain complex SQL queries, scripts, and transformation logic to support operational and analytical use cases.
7. Troubleshoot data quality issues, ETL failures, pipeline bottlenecks, and schema inconsistencies; identify root causes and implement durable solutions.
8. Support database exploration, data validation, and troubleshooting using tools such as DBeaver and related database utilities.
9. Evaluate and help adopt new data tools and technologies, including lightweight analytics and transformation solutions (e.g., DuckDB) where appropriate.
10. Collaborate with engineering teams to support reliable integration between data pipelines, applications, APIs, and downstream consumers.
11. Assist with schema evolution, data modeling, migration planning, and data consistency across systems.
12. Document pipeline logic, data dependencies, transformation rules, and operational procedures to support maintainability and team knowledge sharing.
13. Help improve data engineering standards, observability, testing practices, and operational reliability across the team.
14. Regularly interact with scientists and engineers to understand research and technical workflows; experience in scientific or research environments is a strong plus.

Technical Skills – 2+ Years (or Commensurate Experience):
1. Experience designing, building, and troubleshooting ETL/ELT pipelines
2. Hands-on experience with workflow orchestration tools, preferably Apache Airflow
3. Strong experience writing and optimizing SQL
4. Experience working with relational databases, especially PostgreSQL and Oracle
5. Ability to develop and maintain data transformations, validation steps, and pipeline logic across multiple systems
6. Experience with database tools such as DBeaver or similar for query development, exploration, and troubleshooting
7. Familiarity with modern data processing and analytical tools such as DuckDB, or interest in evaluating emerging data technologies
8. Understanding of data modeling, schema design, data integrity, and performance tuning
9. Experience troubleshooting pipeline failures, performance issues, and inconsistent or incomplete datasets
10. Familiarity with scripting or programming for pipeline development and automation; Python experience is strongly preferred
11. Understanding of version control and collaborative development workflows
12. Experience supporting production data systems with an emphasis on reliability, maintainability, and clear documentation

Team Skills:
1. Confident collaborating with developers, scientists, analysts, and product stakeholders
2. Able to gather and clarify technical and data requirements and translate them into scalable data solutions
3. Strong communication skills around pipeline status, data quality issues, dependencies, and tradeoffs
4. Comfortable handling ambiguity, improving incomplete processes, and helping define best practices
5. Proactive in identifying opportunities to improve data workflows, tooling, performance, and operational stability

Soft Skills:
1. Strong analytical and problem-solving skills
2. High attention to detail and commitment to data quality, consistency, and reliability
3. Demonstrated initiative in troubleshooting issues and improving pipeline robustness
4. Curiosity and willingness to evaluate and adopt new tools, technologies, and approaches
5. Ability to balance immediate operational needs with long-term maintainability and scalability
6. Comfortable proposing improvements, collaborating across teams, and building trust through reliable execution

Interview Process: Phone screen, then either an onsite interview for local candidates or a Teams meeting for non-local candidates.
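For candidates gauging the Apache Airflow orchestration work described above, a typical DAG written with Airflow's TaskFlow API looks something like the sketch below. This is a minimal illustration, not part of the posting: the DAG id, schedule, task names, and data are invented, and it assumes Airflow 2.4 or later.

```python
# Hypothetical Airflow DAG sketching the extract -> transform -> load
# orchestration duties above. All names and data are illustrative.
from datetime import datetime

from airflow.decorators import dag, task


@dag(
    dag_id="example_etl",           # illustrative DAG id
    schedule="@daily",              # Airflow handles scheduling and retries
    start_date=datetime(2026, 1, 1),
    catchup=False,
)
def example_etl():
    @task
    def extract() -> list[dict]:
        # In practice this would query a source system such as Oracle.
        return [{"id": 1, "value": 10.0}, {"id": 2, "value": None}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Drop rows that fail a simple validation rule.
        return [r for r in rows if r["value"] is not None]

    @task
    def load(rows: list[dict]) -> None:
        # In practice this would write to PostgreSQL.
        print(f"loaded {len(rows)} rows")

    load(transform(extract()))


example_etl()
```

Airflow turns each decorated function into a task and infers the dependency order from the call chain, which is what gives the "reliable orchestration, scheduling, monitoring, and recovery" the posting mentions.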
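The extract-transform-validate-load and data-quality duties listed above can also be sketched in plain Python. The example below uses the standard-library sqlite3 module as a stand-in database (a real pipeline here would target PostgreSQL or Oracle); the table, column names, and sample rows are invented for illustration.

```python
# Minimal ETL sketch with a validation step and a post-load data-quality
# check, using sqlite3 as a stand-in for PostgreSQL/Oracle.
import sqlite3


def run_etl(conn: sqlite3.Connection, raw_rows: list[tuple]) -> int:
    conn.execute(
        "CREATE TABLE IF NOT EXISTS measurements ("
        " sample_id INTEGER PRIMARY KEY, reading REAL NOT NULL)"
    )
    # Transform + validate: keep only rows with a numeric, non-negative reading.
    clean = [(sid, val) for sid, val in raw_rows
             if isinstance(val, (int, float)) and val >= 0]
    conn.executemany(
        "INSERT OR REPLACE INTO measurements (sample_id, reading) VALUES (?, ?)",
        clean,
    )
    conn.commit()
    # Post-load data-quality check: no NULL or negative readings may remain.
    bad = conn.execute(
        "SELECT COUNT(*) FROM measurements WHERE reading IS NULL OR reading < 0"
    ).fetchone()[0]
    assert bad == 0, f"{bad} rows failed validation"
    return len(clean)


conn = sqlite3.connect(":memory:")
loaded = run_etl(conn, [(1, 4.2), (2, -1.0), (3, None), (4, 0.5)])
print(loaded)  # 2 of the 4 raw rows pass validation
```

The same shape — filter or flag bad records before loading, then assert invariants with SQL after loading — is how the "troubleshoot data quality issues" and "validation steps" items above usually play out in practice.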