

ElevaIT Solutions
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Corning, NY, for 12 months at a competitive pay rate. Requires a Bachelor's degree, 2+ years in data engineering, and skills in ETL, SQL, Apache Airflow, and relational databases, ideally with scientific domain experience.
Country: United States
Currency: $ USD
Day rate: 304
Date: May 15, 2026
Duration: More than 6 months
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Painted Post, NY
Skills detailed: #Monitoring #Airflow #Oracle #Database Utilities #SQL (Structured Query Language) #Databases #Data Engineering #Schema Design #Automation #Data Processing #Data Quality #Data Transformations #Observability #Scripting #SQL Queries #ETL (Extract, Transform, Load) #Apache Airflow #Datasets #DBeaver #Agile #Data Integrity #Programming #Data Pipeline #Computer Science #Version Control #Data Modeling #Python #Migration #Documentation
Role description
Title: Data Engineer
Location: Corning, NY, 100% onsite
Duration: 12 months
Education and Experience:
• This position focuses on data pipelines & workflows
• Bachelor's degree in computer science, information systems, data engineering, or a related field, or equivalent practical experience. An Associate's degree may be considered if the candidate has an additional 3-5 years of experience beyond the stated requirement.
• 2+ years of professional experience in data engineering, ETL development, or related work, or equivalent hands-on experience
• Experience or interest in scientific software, materials science, research environments, or technically complex domains is a plus
SCOPE OF POSITION:
1. Embed within a cross-functional Agile team, participating in sprint planning, stand-ups, backlog refinement, and technical discussions.
2. Design, build, troubleshoot, and maintain ETL/ELT workflows that support application functionality, analytics, reporting, and scientific workflows.
3. Develop and manage data pipelines using Apache Airflow, ensuring reliable orchestration, scheduling, monitoring, and recovery of data processes (see the sketch after this list).
4. Work with stakeholders, including software developers, scientists, and engineers, to understand data sources, workflow requirements, and downstream data needs.
5. Extract, transform, validate, and load data across systems, including relational databases such as PostgreSQL and Oracle.
6. Write, optimize, and maintain complex SQL queries, scripts, and transformation logic to support operational and analytical use cases.
7. Troubleshoot data quality issues, ETL failures, pipeline bottlenecks, and schema inconsistencies; identify root causes and implement durable solutions.
8. Support database exploration, data validation, and troubleshooting using tools such as DBeaver and related database utilities.
9. Evaluate and help adopt new data tools and technologies, including lightweight analytics and transformation solutions (e.g., DuckDB) where appropriate.
10. Collaborate with engineering teams to support reliable integration between data pipelines, applications, APIs, and downstream consumers.
11. Assist with schema evolution, data modeling, migration planning, and data consistency across systems.
12. Document pipeline logic, data dependencies, transformation rules, and operational procedures to support maintainability and team knowledge sharing.
13. Help improve data engineering standards, observability, testing practices, and operational reliability across the team.
14. Regularly interact with scientists and engineers to understand research and technical workflows; experience in scientific or research environments is a strong plus.
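The Airflow responsibilities above follow a common extract-validate-load shape. As a point of reference for candidates, here is a minimal sketch of such a DAG using the TaskFlow API (Airflow 2.4+); the DAG name, schedule, sample rows, and validation rule are hypothetical placeholders, not details from this posting.

```python
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2026, 1, 1), catchup=False)
def example_measurement_etl():
    @task
    def extract() -> list[dict]:
        # Placeholder: in practice this would query a source system
        # (e.g., Oracle) through an Airflow connection/hook.
        return [{"id": 1, "value": 42.0}, {"id": 2, "value": None}]

    @task
    def validate(rows: list[dict]) -> list[dict]:
        # Simple data-quality gate: drop rows with missing values
        # and report how many were removed to the task log.
        clean = [r for r in rows if r["value"] is not None]
        print(f"dropped {len(rows) - len(clean)} invalid rows")
        return clean

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder: in practice this would upsert into PostgreSQL.
        print(f"loading {len(rows)} rows")

    load(validate(extract()))


example_measurement_etl()
```

Chaining the tasks functionally, as on the last line inside the DAG, lets Airflow infer the dependency graph and pass data between steps via XCom.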
TECHNICAL SKILLS – 2+ years (or commensurate experience):
1. Experience designing, building, and troubleshooting ETL/ELT pipelines
2. Hands-on experience with workflow orchestration tools, preferably Apache Airflow
3. Strong experience writing and optimizing SQL
4. Experience working with relational databases, especially PostgreSQL and Oracle
5. Ability to develop and maintain data transformations, validation steps, and pipeline logic across multiple systems
6. Experience with database tools such as DBeaver or similar for query development, exploration, and troubleshooting
7. Familiarity with modern data processing and analytical tools such as DuckDB (see the sketch after this list), or interest in evaluating emerging data technologies
8. Understanding of data modeling, schema design, data integrity, and performance tuning
9. Experience troubleshooting pipeline failures, performance issues, and inconsistent or incomplete datasets
10. Familiarity with scripting or programming for pipeline development and automation; Python experience is strongly preferred
11. Understanding of version control and collaborative development workflows
12. Experience supporting production data systems with an emphasis on reliability, maintainability, and clear documentation
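For the DuckDB item above, here is a minimal sketch of the kind of lightweight, in-process SQL transformation and data-quality check the posting has in mind; the table name, schema, and sample rows are invented for illustration.

```python
import duckdb

con = duckdb.connect()  # in-memory database; nothing touches disk

# Stage a few sample rows (hypothetical schema), then validate with SQL.
con.execute("""
    CREATE TABLE measurements AS
    SELECT * FROM (VALUES
        (1, 'sensor_a', 42.0),
        (2, 'sensor_a', NULL),
        (3, 'sensor_b', 17.5)
    ) AS t(id, source, value)
""")

# Count total vs. null readings per source: a typical data-quality check.
rows = con.execute("""
    SELECT source,
           COUNT(*)                AS total_rows,
           COUNT(*) - COUNT(value) AS null_values
    FROM measurements
    GROUP BY source
    ORDER BY source
""").fetchall()
print(rows)  # [('sensor_a', 2, 1), ('sensor_b', 1, 0)]
```

Because DuckDB runs in-process with no server to manage, checks like this can run inside a pipeline task or a local script against files or staged extracts.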