HashRoot

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 7+ years of experience, based on-site in Pittsburgh, PA. The contract is open to immediate joiners; the pay rate is not disclosed. Key skills include Databricks, PySpark, SQL, and cloud platforms such as Azure; Databricks certification is preferred.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
Unknown
πŸ—“οΈ - Date
March 14, 2026
πŸ•’ - Duration
Unknown
🏝️ - Location
On-site
πŸ“„ - Contract
Unknown
πŸ”’ - Security
Unknown
πŸ“ - Location detailed
Pittsburgh, PA
🧠 - Skills detailed
#Visualization #Data Privacy #Spark (Apache Spark) #Data Pipeline #Data Lakehouse #Databricks #Cloud #Scala #Big Data #"ETL (Extract, Transform, Load)" #Version Control #PySpark #SQL (Structured Query Language) #BI (Business Intelligence) #Compliance #Batch #Computer Science #Data Quality #Programming #Azure #Delta Lake #Data Lake #Security #Data Engineering #Python
Role description
Position: Data Engineer
Experience: 7+ years
Location: Pittsburgh, PA
Notice Period: Immediate joiners

Job Overview
This position is part of the Enterprise Data & Analytics Capability team within the Global Technology Organization. In this role, you will lead the design, development, and optimization of large-scale data solutions on the Databricks platform.

Key responsibilities
• Design, build, and maintain scalable data pipelines on Databricks (using Spark, Delta Lake, etc.); a sketch of this kind of work appears below
• Write clean, efficient, and maintainable PySpark or SQL code for data transformation
• Design robust data models for analytics and reporting
• Ensure data quality, consistency, and governance
• Handle batch and streaming data workflows
• Provide architectural guidance and support for platform usage
• Drive best practices in data engineering across the team
• Monitor and optimize the performance of Spark jobs and cluster usage
• Ensure compliance with security and data privacy standards

Essential skills
• Bachelor's degree in Computer Science, Engineering, or a related field
• Minimum of 5 years of programming experience, including at least one year working with a big data platform; experience in the data engineering domain, Python, SQL, and cloud platforms such as Azure
• Familiarity with relevant systems, tools, languages, and the business domain, including Data Lakehouse principles and relational and Kimball data models (required)
• Experience with CI/CD pipelines and version control tools (required)
• Knowledge of data visualization tools and BI platforms (preferred)
• Certification in Databricks or relevant cloud platforms (preferred)
• Good verbal and written communication skills
• Experience managing client stakeholders
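
For candidates gauging fit, here is a minimal PySpark sketch of the kind of Databricks pipeline work the responsibilities describe: a batch read, basic data-quality guards, and a partitioned write to a Delta Lake table. All paths, table names, and columns are hypothetical, not taken from the posting.

```python
# Illustrative sketch only: a minimal PySpark batch pipeline writing to Delta Lake.
# Paths, column names, and the job name are hypothetical assumptions.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-batch")  # hypothetical job name
    .getOrCreate()
)

# Read raw landing-zone data (path is an assumption for this sketch)
raw = spark.read.json("/mnt/landing/orders/")

# Basic cleansing and transformation with simple data-quality guards
clean = (
    raw
    .dropDuplicates(["order_id"])                 # enforce key uniqueness
    .filter(F.col("order_total") >= 0)            # drop invalid amounts
    .withColumn("ingest_date", F.current_date())  # partition column
)

# Append to a Delta Lake table, partitioned for downstream analytics
(
    clean.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .save("/mnt/lakehouse/silver/orders")
)
```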