

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 9, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Pittsburgh, PA
Skills detailed: #SQL (Structured Query Language) #Spark SQL #Hadoop #Azure #Git #AWS (Amazon Web Services) #Spark (Apache Spark) #Scala #Python #Data Governance #Datasets #GCP (Google Cloud Platform) #Cloud #Data Pipeline #ETL (Extract, Transform, Load) #Version Control #Logging #Observability #Batch #Data Engineering
Role description
Data Engineer (3 Openings)
Location: Pittsburgh, Cleveland, or Dallas (Hybrid: 3 days onsite)
Type: Contract-to-Hire
Clearance: US Citizen or Green Card Holder
Experience: 4–6 years
Key Responsibilities:
• Design and implement scalable data pipelines using Hadoop, Spark, and Hive
• Build and maintain ETL/ELT frameworks for batch and streaming data
• Collaborate with product teams to ingest, transform, and serve model-ready datasets
• Optimize data workflows for performance and reliability
• Ensure pipeline quality through validation, logging, and exception handling
Preferred Skills:
• Hadoop, Hive, Spark, SQL, Python
• Experience with version control (Git) and CI/CD tools
• Familiarity with modern data governance and observability practices
• Cloud experience a plus (AWS, Azure, GCP)
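The responsibilities above emphasize pipeline quality through validation, logging, and exception handling. As a minimal illustration of that pattern (plain Python standard library, not Spark; the record schema and field names here are hypothetical, chosen only for the sketch):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("pipeline")

def validate(record):
    """Reject records missing required fields; coerce types explicitly."""
    required = ("id", "amount")  # hypothetical schema for this sketch
    if any(k not in record or record[k] in ("", None) for k in required):
        raise ValueError(f"missing required field in {record!r}")
    return {"id": str(record["id"]), "amount": float(record["amount"])}

def run_batch(records):
    """Transform a batch, quarantining bad rows instead of failing the job."""
    good, quarantined = [], []
    for rec in records:
        try:
            good.append(validate(rec))
        except (ValueError, TypeError) as exc:
            # Log and set aside the bad row so one record can't kill the batch.
            log.warning("quarantined record: %s", exc)
            quarantined.append(rec)
    log.info("batch done: %d ok, %d quarantined", len(good), len(quarantined))
    return good, quarantined
```

In a Spark job the same idea typically shows up as a filter or transformation over a DataFrame, with quarantined rows written to a dead-letter table for later inspection.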