

CBTS
Data Engineer (DataStage)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer (DataStage) contract position lasting 1 to 3 months, offering a competitive pay rate. Key skills include ETL pipeline development using IBM DataStage, Snowflake, and dbt, with a focus on data quality and testing.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 24, 2026
🕒 - Duration
1 to 3 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Cincinnati, OH
-
🧠 - Skills detailed
#Disaster Recovery #Data Accuracy #DataStage #Data Science #Unit Testing #Automation #Data Mining #Data Modeling #Programming #Scala #Data Pipeline #ETL (Extract, Transform, Load) #Data Processing #Automated Testing #Data Quality #Regression #Data Engineering #Data Management #Monitoring #dbt (data build tool) #Snowflake
Role description
The immediate need is to develop a large volume of high-quality ETL code rapidly over the next few months. Validating that code and ensuring data accuracy and correctness are key. The position will support and take direction from the Principal Data Engineer and the Solution Architect.
TECHNICAL SKILLS
Must Have
• Building and optimizing ETL pipelines using IBM DataStage, Snowflake, dbt, and MettleCI
• Experience with data pipeline testing: unit testing for ETL/ELT components, integration and regression testing, data quality and validation frameworks, and automated testing within CI/CD workflows (a brief unit-test sketch follows this list)
• Strong technical expertise to deliver efficient, accurate, and scalable data solutions that follow established governance and coding standards
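To make the testing expectation concrete, here is a minimal sketch of unit testing an ETL/ELT transformation in isolation, written in Python with pytest. The `normalize_customer` transform and its fields are hypothetical stand-ins for illustration only, not part of this project's actual DataStage or dbt codebase.

```python
# Hypothetical transform standing in for one ETL/ELT step; names and
# fields are illustrative, not from the actual project codebase.
def normalize_customer(record: dict) -> dict:
    """Trim whitespace, coerce empty strings to None, upper-case state codes."""
    cleaned = {k: (v.strip() if isinstance(v, str) else v) for k, v in record.items()}
    cleaned = {k: (None if v == "" else v) for k, v in cleaned.items()}
    if cleaned.get("state"):
        cleaned["state"] = cleaned["state"].upper()
    return cleaned

def test_strips_whitespace_and_uppercases_state():
    out = normalize_customer({"id": 1, "name": "  Acme Co ", "state": "oh"})
    assert out["name"] == "Acme Co"
    assert out["state"] == "OH"

def test_empty_strings_become_null():
    out = normalize_customer({"id": 2, "name": "", "state": None})
    assert out["name"] is None
    assert out["state"] is None
```

Tests like these run with `pytest` and slot directly into a CI/CD workflow, which is the automated-testing pattern the role calls for.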
This Data Engineer III contract role will support the Principal Data Engineer by developing, testing, and monitoring high-quality ETL code for several time-sensitive, enterprise-level initiatives. Core responsibilities include building and optimizing ETL pipelines using IBM DataStage, Snowflake, dbt, and MettleCI, with a focus on code quality, data accuracy, and operational reliability. The ideal candidate will be able to contribute rapidly to critical project milestones while adhering strictly to best practices and governance processes.
Responsibilities and Qualifications:
• Design, build, test, and maintain scalable data management systems and pipelines.
• Develop high-performance algorithms, predictive models, prototypes, and custom software components to support analytics and data processing needs.
• Ensure all data systems adhere to business requirements, governance standards, and industry best practices.
• Demonstrated experience with data pipeline testing, including unit testing for ETL/ELT components, integration and regression testing, data quality and validation frameworks, and automated testing within CI/CD workflows (a validation-gate sketch follows this list).
• Integrate new data management tools, engineering technologies, and automation capabilities into existing architectures.
• Establish and optimize processes for data modeling, data mining, and data production workflows.
• Research and identify new opportunities for leveraging existing data assets.
• Utilize a variety of programming languages, integration tools, and platforms to connect systems and enable seamless data flow.
• Collaborate closely with cross-functional partners, including solution architects, IT teams, and data scientists, to achieve project objectives.
• Recommend and implement improvements to enhance data quality, reliability, and performance.
• Maintain and update disaster recovery procedures to ensure system resilience.
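As a companion to the responsibilities above, here is a minimal sketch of an automated data-quality gate that could run as a CI/CD step after a load. The table, columns, and checks are hypothetical, and the in-memory SQLite connection stands in for a real warehouse connection (such as Snowflake) purely to keep the example self-contained.

```python
# Minimal data-quality gate: run a few SQL checks and fail the CI stage
# if any check fails. Table/column names are hypothetical; SQLite stands
# in for the real warehouse so the sketch runs as-is.
import sqlite3
import sys

CHECKS = [
    ("row count > 0", "SELECT COUNT(*) FROM customers", lambda n: n > 0),
    ("no NULL primary keys", "SELECT COUNT(*) FROM customers WHERE id IS NULL", lambda n: n == 0),
    ("ids are unique", "SELECT COUNT(*) - COUNT(DISTINCT id) FROM customers", lambda n: n == 0),
]

def run_checks(conn) -> bool:
    all_passed = True
    for name, sql, passes in CHECKS:
        value = conn.execute(sql).fetchone()[0]
        ok = passes(value)
        all_passed = all_passed and ok
        print(f"{'PASS' if ok else 'FAIL'}  {name} (got {value})")
    return all_passed

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)", [(1, "Acme"), (2, "Globex")])
    # A non-zero exit code fails the pipeline stage, blocking bad data.
    sys.exit(0 if run_checks(conn) else 1)
```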






