

Info Way Solutions
Big Data Engineer with DevOps
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Engineer with DevOps in Pittsburgh, PA, offering a contract length of "unknown" and a pay rate of "unknown." It requires 8+ years of experience and expertise in HDFS, Hive, Impala, PySpark, and Python, as well as DevOps tools such as uDeploy and Jenkins.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 22, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Pittsburgh, PA
-
🧠 - Skills detailed
#Compliance #Impala #Monitoring #Spark (Apache Spark) #Data Engineering #Python #PySpark #Automation #Big Data #ETL (Extract, Transform, Load) #HDFS (Hadoop Distributed File System) #Data Lifecycle #Jenkins #DevOps #Scala
Role description
Title: Big Data Engineer with DevOps Experience
Location: Pittsburgh, PA
Position Overview
We are seeking a highly experienced Senior Big Data DevOps Engineer with at least 8 years of professional experience in managing large-scale data platforms and automation frameworks. The ideal candidate will have deep expertise in HDFS, Hive, Impala, PySpark, Python, and DevOps automation tools such as uDeploy and Jenkins. This role is critical in ensuring smooth, reliable, and scalable data operations across multiple environments.
Key Responsibilities
Manage end-to-end data operations, including HDFS table management and data lifecycle governance
Design, build, and optimize ETL pipelines using PySpark and Python
Oversee multi-environment codebase governance, ensuring consistency and compliance across dev, test, and production environments
Implement and maintain DevOps automation using tools such as uDeploy and Jenkins
Lead platform upgrades, patching, and performance tuning for big data ecosystems
Provide production support, troubleshoot issues, and ensure high availability of data services
Collaborate with data engineers, analysts, and business stakeholders to deliver reliable and scalable data solutions
Drive best practices in CI/CD, monitoring, and automation for big data platforms
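The ETL responsibilities above follow the familiar extract-transform-load shape. As a rough illustration only, here is a minimal, self-contained sketch in plain Python; in the actual role this logic would run as a PySpark job against HDFS and Hive, and all record fields, cleaning rules, and function names below are hypothetical.

```python
# Minimal ETL sketch (a real pipeline would use PySpark DataFrames;
# field names and cleaning rules here are illustrative assumptions).

def extract(rows):
    """Simulate reading raw records (stand-in for an HDFS/Hive source)."""
    return [dict(r) for r in rows]

def transform(records):
    """Clean and filter: drop records missing an id, normalize status."""
    cleaned = []
    for r in records:
        if not r.get("id"):
            continue  # data-quality rule: every record needs an id
        r["status"] = str(r.get("status", "unknown")).strip().lower()
        cleaned.append(r)
    return cleaned

def load(records, sink):
    """Append cleaned records to a sink (stand-in for a Hive table write)."""
    sink.extend(records)
    return len(records)

if __name__ == "__main__":
    raw = [
        {"id": 1, "status": " ACTIVE "},
        {"id": None, "status": "active"},  # dropped: missing id
        {"id": 2},                         # status defaults to "unknown"
    ]
    table = []
    loaded = load(transform(extract(raw)), table)
    print(loaded)  # 2
```

In a PySpark version, `transform` would become DataFrame operations (filters and column expressions) so the work distributes across the cluster, and the same stage boundaries make a natural place to hook in the Jenkins/uDeploy automation the role calls for.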