

IBM DataStage Developer with SQL & Databricks
Featured Role | Apply direct with Data Freelance Hub
This role is for an IBM DataStage Developer with SQL & Databricks, offering a long-term on-site contract in Pittsburgh, PA. It requires 8+ years of IT experience, 5+ years of hands-on DataStage work, strong SQL expertise, and 2+ years with Azure Databricks.
Country: United States
Currency: Unknown
Day rate: Unknown
Date discovered: August 2, 2025
Project duration: Unknown
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Pennsylvania
Skills detailed: #Data Extraction #SQL (Structured Query Language) #Agile #PySpark #Azure #Data Lake #Delta Lake #Integration Testing #DataStage #Business Analysis #Data Architecture #Debugging #Data Pipeline #DevOps #Databases #Unit Testing #Scala #ETL (Extract, Transform, Load) #Cloud #ADF (Azure Data Factory) #Azure Databricks #SQL Server #Code Reviews #Oracle #Synapse #Spark (Apache Spark) #Databricks #Documentation #Data Manipulation #Data Integration #Snowflake #Azure Data Factory
Role description
Job Type: Contract
Job Category: IT
Job Description
Role: IBM DataStage Developer with SQL & Databricks
Location: Pittsburgh, PA (Onsite Only)
Contract: Long-term (C2C/W2)
Job Overview:
We are seeking a skilled IBM DataStage Developer with strong proficiency in SQL and Azure Databricks for a data integration and modernization project. The ideal candidate will work closely with cross-functional teams to design, build, and maintain robust ETL processes for large-scale data systems.
Key Responsibilities:
Develop, enhance, and support ETL solutions using IBM InfoSphere DataStage.
Design and implement complex data pipelines integrating SQL and Databricks workflows (see the sketch after this list).
Perform data extraction, transformation, and loading from multiple source systems into target data lakes/warehouses.
Collaborate with data architects and business analysts to ensure scalable and efficient data integration.
Optimize DataStage jobs and Databricks notebooks for performance and reliability.
Conduct unit testing, integration testing, and participate in code reviews.
Create and maintain technical documentation for ETL and data flow processes.
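For illustration only, here is a minimal PySpark sketch of the kind of SQL-plus-Databricks pipeline these responsibilities describe. The connection details, table names, and paths are hypothetical placeholders, not project specifics.

```python
# Sketch of an extract-transform-load flow on Databricks.
# All URLs, credentials, and table names below are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided on Databricks clusters

# Extract: read a source table over JDBC (placeholder connection details).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://source-host:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)
orders.createOrReplaceTempView("orders")

# Transform: plain SQL, expressing the same logic a DataStage job would in stages.
daily_totals = spark.sql("""
    SELECT order_date, customer_id, SUM(amount) AS total_amount
    FROM orders
    GROUP BY order_date, customer_id
""")

# Load: write to a Delta Lake table, partitioned for downstream queries.
(
    daily_totals.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("analytics.daily_order_totals")
)
```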
Required Skills:
Overall IT Experience: 8+ Years
5+ years of hands-on experience with IBM DataStage (v11.x or higher)
Strong experience with SQL (T-SQL/PL-SQL) for data manipulation, queries, and procedures
2+ years of working knowledge of Azure Databricks / PySpark / Delta Lake (see the Delta Lake sketch after this list)
Solid understanding of data warehousing, data lakes, and data integration patterns
Experience working with relational databases (e.g., SQL Server, Oracle, or Snowflake)
Excellent debugging and performance-tuning skills
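As a concrete example of the Databricks / Delta Lake knowledge called for above, the following hypothetical sketch uses Delta Lake's MERGE API to upsert staged changes; the table names are placeholders.

```python
# Hypothetical Delta Lake upsert (MERGE): apply staged changes to a target table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Staged change set and target table (placeholder names).
updates = spark.read.table("staging.customer_updates")
target = DeltaTable.forName(spark, "analytics.customers")

# Update matching rows and insert new ones in a single atomic operation.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```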
Nice to Have:
Experience with Azure Data Factory, Synapse, or Data Lake Gen2
Background in banking, healthcare, or insurance domains
Familiarity with Agile methodologies and DevOps practices
Required Skills: Cloud Developer, SQL, Application Developer