

Cyber Sphere
Senior DataStage Consultant | Hybrid @ Orlando, FL (local candidates only)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior DataStage Consultant on a contract basis, hybrid in Orlando, FL. Key skills include DataStage, ETL development, and strong SQL Server expertise. A Bachelor’s degree in a related field is required; local candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 30, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Orlando, FL
-
🧠 - Skills detailed
#DataStage #ComputerScience #Git #SQLServer #ETL #.Net #SQL #DataMapping #Linux #Indexing #Metadata #VersionControl #Logging #Strategy #BusinessAnalysis #ShellScripting #Unix #DataGovernance #SlowlyChangingDimensions #DataQuality #DataLineage #Scripting #DataTransformations #Documentation #Statistics #CodeReviews
Role description
Title: Senior DataStage Consultant
Location: Hybrid @ Orlando, FL (local candidates only)
Duration: Contract
Skills needed: DataStage, ETL development (data mapping and transformation implementation), strong SQL Server with advanced T-SQL, file-based ingestion
Role Description
Role summary: We are seeking a software engineer with strong ETL experience to design, build, and support file-to-table data transformations using IBM InfoSphere DataStage. You'll turn inbound file feeds into reliable, auditable SQL Server table loads with solid performance, clear error handling, and repeatable operations.
Key responsibilities
• Design, develop, and maintain IBM DataStage ETL jobs that ingest file feeds (CSV, fixed-width, delimited) and load curated destination tables in SQL Server.
• Build end-to-end ETL flows, including staging, transformations, validations, and publishing to downstream schemas.
• Perform source-to-target mapping and implement transformation logic based on business and technical requirements.
• Use common DataStage stages and patterns (e.g., Sequential File, Transformer, Lookup, Join/Merge, Aggregator, Sort, Funnel, Remove Duplicates), with attention to partitioning and parallel job design.
• Write, optimize, and tune SQL Server queries, stored procedures, and T‑SQL scripts used in ETL workflows.
• Implement restartable and supportable jobs: parameterization, robust logging, reject handling, auditing columns, and reconciliation checks.
• Apply data quality controls (format checks, referential checks, null/duplicate checks, threshold checks) and produce clear exception outputs for remediation.
• Monitor and troubleshoot ETL runs using DataStage Director/Operations Console and SQL Server tooling; perform root-cause analysis and fix defects.
• Improve performance through job design tuning (partitioning strategy, sorting choices, buffering, pushdown where appropriate) and SQL tuning (indexes, statistics, set-based logic).
• Participate in code reviews, testing, documentation, and release activities; maintain clear runbooks and operational procedures.
• Collaborate with business analysts, data modelers, QA, and production support to deliver stable pipelines.
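To illustrate the data-quality and reject-handling responsibilities above, here is a minimal sketch in Python; the feed layout, column names, and reason codes are hypothetical, and a real implementation would live inside DataStage reject links rather than standalone code:

```python
import csv
import io
import re

def validate_rows(rows, required=("account_id", "amount")):
    """Split rows into (accepted, rejects); each reject carries a reason code."""
    seen = set()
    accepted, rejects = [], []
    for row in rows:
        # Null check: every required column must be present and non-blank.
        if any(not row.get(col, "").strip() for col in required):
            rejects.append({**row, "reject_reason": "NULL_CHECK"})
        # Format check: amount must be a plain decimal number.
        elif not re.fullmatch(r"-?\d+(\.\d+)?", row["amount"]):
            rejects.append({**row, "reject_reason": "FORMAT_CHECK"})
        # Duplicate check: one row per account_id within the feed.
        elif row["account_id"] in seen:
            rejects.append({**row, "reject_reason": "DUPLICATE_CHECK"})
        else:
            seen.add(row["account_id"])
            accepted.append(row)
    return accepted, rejects

feed = io.StringIO("account_id,amount\nA1,10.50\nA1,3\nA2,abc\n,7\n")
accepted, rejects = validate_rows(csv.DictReader(feed))
```

The key operational point is that rejects are emitted with a machine-readable reason rather than silently dropped, so downstream remediation and reconciliation have something to work with.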
Required skills and experience
• Hands-on IBM DataStage ETL development experience, including data mapping and transformation implementation.
• Strong SQL Server experience with advanced T‑SQL (joins, window functions, CTEs, temp tables, indexing basics, query plans).
• Solid understanding of file-based ingestion and parsing (CSV, fixed-width, headers/trailers, control totals, encoding, delimiters, quoting/escaping).
• Experience designing ETL jobs with good operational characteristics: parameter-driven design, logging, error handling, restart/re-run strategy, and auditability.
• Ability to troubleshoot data issues end-to-end (source file → stage tables → target tables) and communicate findings clearly.
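One common realization of the header/trailer and control-total handling mentioned above is trailer reconciliation: the trailer record carries a row count and amount total that must match the details. A hedged sketch, assuming a hypothetical pipe-delimited layout:

```python
def reconcile_feed(lines):
    """Verify a feed whose trailer carries a record count and amount total.
    Hypothetical layout: H|feed_name, D|id|amount, T|count|total."""
    detail_count, detail_total = 0, 0.0
    trailer = None
    for line in lines:
        tag, *fields = line.rstrip("\n").split("|")
        if tag == "D":
            detail_count += 1
            detail_total += float(fields[1])
        elif tag == "T":
            trailer = (int(fields[0]), float(fields[1]))
    if trailer is None:
        return False, "missing trailer"
    # Tolerance absorbs float rounding; money would use Decimal in practice.
    ok = trailer[0] == detail_count and abs(trailer[1] - detail_total) < 0.005
    return ok, f"count {detail_count}/{trailer[0]}, total {detail_total}/{trailer[1]}"

feed = ["H|daily_balances", "D|A1|10.50", "D|A2|4.25", "T|2|14.75"]
ok, detail = reconcile_feed(feed)
```

A mismatch fails the load before anything is published, which is exactly the "clear reconciliation results" outcome the role description asks for.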
Preferred qualifications
• Experience with DataStage Parallel Jobs tuning (partitioning methods, collect/sort trade-offs, skew handling).
• Familiarity with UNIX/Linux basics and shell scripting for orchestration and file handling.
• Experience with job scheduling/orchestration tools (e.g., Control‑M, Autosys) and CI/CD practices.
• Knowledge of common warehousing patterns (incremental loads, slowly changing dimensions, surrogate keys, effective dating).
• Experience with version control (Git) and structured promotion/release processes across environments (dev/test/prod).
• Exposure to data governance practices (metadata, lineage, naming standards) and secure handling of sensitive data.
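The slowly-changing-dimension pattern listed above can be sketched as a type-2 upsert: when a tracked attribute changes, the current row is expired and a new current row is inserted. A minimal in-memory sketch with hypothetical column names (in practice this would be a T-SQL MERGE or a DataStage change-capture flow):

```python
from datetime import date

def scd2_apply(dimension, incoming, today):
    """Type-2 SCD: expire the current row when attributes change,
    then insert a new current row; unchanged rows are left alone."""
    current = {r["key"]: r for r in dimension if r["is_current"]}
    for rec in incoming:
        existing = current.get(rec["key"])
        if existing is None:
            # Brand-new key: insert as the current row.
            dimension.append({**rec, "valid_from": today, "valid_to": None, "is_current": True})
        elif existing["attrs"] != rec["attrs"]:
            # Attribute change: close out the old row, open a new one.
            existing["valid_to"] = today
            existing["is_current"] = False
            dimension.append({**rec, "valid_from": today, "valid_to": None, "is_current": True})
    return dimension

dim = [{"key": "C1", "attrs": {"tier": "gold"}, "valid_from": date(2025, 1, 1),
        "valid_to": None, "is_current": True}]
dim = scd2_apply(dim, [{"key": "C1", "attrs": {"tier": "silver"}},
                       {"key": "C2", "attrs": {"tier": "gold"}}], today=date(2026, 1, 30))
```

After the run, C1's gold row is expired with an end date, and C1/silver plus C2 are the current rows, preserving full history with effective dating.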
Education
• Bachelor’s degree in Computer Science, Engineering, Information Systems, or equivalent practical experience.
What success looks like in this role
• File feeds land and load consistently with clear reconciliation results.
• Failures are diagnosable from logs and reject outputs without deep forensics.
• Jobs meet runtime SLAs through solid DataStage design and SQL tuning.
• Mappings and transformations are documented and traceable to requirements.
Regards,
Sai Srikar
770-456-5690
Email: sai@cysphere.net





