

DataStage Sr. ETL Specialist
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a "DataStage Sr. ETL Specialist" in Hartford, CT, with a long-term contract. Requires extensive DataStage experience, SQL proficiency, Unix Shell Scripting, and knowledge of data governance. Local candidates preferred; pay rate unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 8, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Hartford, CT
🧠 - Skills detailed
#Web Services #BI (Business Intelligence) #Datasets #Data Privacy #ETL (Extract, Transform, Load) #Security #Data Security #Cloud #SQL (Structured Query Language) #Batch #Unix #Continuous Deployment #Data Pipeline #Scripting #Data Warehouse #PySpark #DevOps #Talend #Data Analysis #SQL Queries #Computer Science #Data Engineering #DataStage #RDBMS (Relational Database Management System) #Deployment #Migration #Data Architecture #SAS #Snowflake #Spark (Apache Spark) #AWS (Amazon Web Services) #Oracle #Shell Scripting #Data Governance
Role description
We are hiring for a W2/C2C contract role; candidates local to Virginia are preferred.
Job Title: DataStage Sr. ETL Specialist
Location: Hartford, CT – Onsite
Duration: Long Term Contract
Must Have:
· Extensive hands-on experience in building Data Pipelines and ETL workflows using IBM DataStage, particularly within Data Warehouses and BI environments.
· Ability to understand and document complex DataStage workflows in order to maintain data pipelines.
· Experience working with diverse data sources such as RDBMS (Oracle, DB2), flat files, CSV, Mainframe datasets, and other target data stores.
· Strong skills in Data Analysis, writing complex SQL queries, and addressing performance considerations.
· Ability to reverse engineer ETL code to create mapping documents, identify source/target dependencies, and document transformations where necessary.
· Solid foundation in Data Modeling, Design, and Data Architecture.
· Expertise in Unix Shell Scripting and Batch Jobs (AutoSys / Control-M).
· Familiarity with Data Governance, Data Security, and Data Privacy principles.
· Strong understanding of fundamental Computer Science concepts.
· Capability to define solution architecture and data models for project teams, including providing guidance on development tools, target platforms, operations, and security.
· Ability to fully immerse in product details, understand challenges, and connect them to data engineering solutions.
· Working knowledge of Continuous Integration/Continuous Deployment (CI/CD) pipelines, DevOps practices, and the underlying deployment infrastructure.
Good to Have:
· Familiarity with Amazon Web Services (AWS) Cloud.
· Experience with Spark and PySpark.
· Knowledge of Snowflake.
· Exposure to ETL migration and Cloud Modernization initiatives.
· Experience with tools such as SAS and Talend.
Best Regards,
Sushil. N
571-616-8875 (c) | Sushil.s@interonit.com