

Signature IT World Inc
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with Python, requiring 8+ years of experience, strong skills in Python/Java, ETL, SQL, and big data technologies. The contract is onsite in Pittsburgh, PA, with a competitive pay rate, and a degree in Computer Science or related field is required.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
May 15, 2026
Duration
Unknown
Location
On-site
Contract
W2 Contractor
Security
Unknown
Location detailed
Pittsburgh, PA
Skills detailed
#Airflow #SQL (Structured Query Language) #Databases #Hadoop #Snowflake #Java #Data Engineering #Kubernetes #Automation #Data Processing #Data Quality #BigQuery #Security #Scrum #AWS (Amazon Web Services) #Data Science #Cloud #Azure #ETL (Extract, Transform, Load) #Datasets #Docker #Redshift #Big Data #DevOps #Agile #Scala #Spark (Apache Spark) #Kafka (Apache Kafka) #Programming #Data Pipeline #NoSQL #GIT #Computer Science #Version Control #Data Modeling #Python #Data Integration #Data Architecture #GCP (Google Cloud Platform) #Data Ingestion
Role description
Title: Data Engineer with Python
Location: Pittsburgh, PA (Onsite)
Position: C2C/W2
Visa: USC and Green Card
Job Description
We are seeking an experienced Data Engineer with strong expertise in Python and/or Java development to join our team in Pittsburgh, PA. The ideal candidate will have hands-on experience building scalable data pipelines, working with large datasets, and developing robust data engineering solutions in cloud and distributed environments.
The candidate should possess strong problem-solving skills, experience with modern data architectures, and the ability to collaborate with cross-functional teams including Data Scientists, Analysts, and Software Engineers.
Key Responsibilities
• Design, develop, and maintain scalable data pipelines and ETL/ELT processes.
• Build and optimize large-scale data processing systems using Python and/or Java.
• Develop and maintain data integration solutions for structured and unstructured data sources.
• Work with distributed data processing frameworks and big data technologies.
• Collaborate with business stakeholders and technical teams to understand data requirements and deliver scalable solutions.
• Ensure data quality, governance, security, and performance optimization across data platforms.
• Develop APIs and automation scripts for data ingestion and transformation processes.
• Monitor and troubleshoot data workflows and production issues.
• Participate in architecture discussions and contribute to data platform modernization initiatives.
Required Skills
• 8+ years of experience in Data Engineering or a related field.
• Strong programming experience in Python and/or Java.
• Hands-on experience with ETL development and data pipeline orchestration.
• Strong SQL skills and experience with relational and NoSQL databases.
• Experience with big data technologies such as Spark, Hadoop, Kafka, or similar frameworks.
• Experience with cloud platforms such as AWS, Azure, or GCP.
• Familiarity with workflow orchestration tools like Airflow.
• Experience with data warehousing solutions such as Snowflake, Redshift, BigQuery, or Hive.
• Strong understanding of data modeling and data architecture principles.
• Experience with CI/CD pipelines and version control systems like Git.
Preferred Skills
• Experience with real-time streaming data pipelines.
• Knowledge of containerization technologies such as Docker and Kubernetes.
• Exposure to DevOps practices and infrastructure automation.
• Experience working in Agile/Scrum environments.
Education
• Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
Key Competencies
• Strong analytical and problem-solving skills
• Excellent communication and collaboration abilities
• Ability to work in a fast-paced environment
• Self-motivated and detail-oriented professional
Thanks and Regards
Khursheed war
Email: Khursheed.a@sitwinc.com






