System One

Senior Big Data Engineer (Python/PySpark)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Big Data Engineer (Python/PySpark) with a hybrid schedule (3 days in-office, 2 remote) in Pittsburgh, PA, or Cleveland, OH. Contract length and pay rate are unspecified. Key skills include Python, PySpark, SQL, and experience with data pipelines and microservices. A Bachelor's degree is preferred, and relevant certifications are a plus.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 1, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Strongsville, OH
-
🧠 - Skills detailed
#Spark (Apache Spark) #GIT #Databases #SQL (Structured Query Language) #NumPy #Data Lake #Python #Matplotlib #Libraries #Unit Testing #Big Data #Microservices #Unix #Data Engineering #Programming #FastAPI #Storage #Data Pipeline #PySpark #Pandas #Version Control #Jira #Agile
Role description
Position Title: Senior Big Data Engineer (Python/PySpark)

Locations (by preference):
1. Pittsburgh, PA - Two PNC Plaza, OR Cleveland, OH - Strongsville Technology Center
Please note, the hiring team is not looking to source out of any other tech hubs at this time.

Ability to work remote: Hybrid - 3 days in office, 2 remote
Acceptable time zone(s): EST
Days of the week: Mon-Fri, 40 hours
Working hours (flexible): 8am-5pm EST

Roles and Responsibilities:
• Interact with the business daily to understand evolving requirements and adapt to them.
• Understand the business requirements and the tech stack involved, and guide the team toward an integrated solution approach.
• Bring a dedicated, committed approach with good communication skills, and participate in agile ceremonies.
• Hands-on experience with object-oriented programming, including its limitations.
• Familiarity with cross-platform development.
• Handle database storage and retrieval mechanisms on data lake platforms, with extensive hands-on experience.
• Extensive Unix/FTP/file handling.
• Strong hands-on experience with SQL databases.

Must-Have Technical Skills:
• A deep understanding of multi-process architecture and the threading limitations of Python.
• PySpark data engineering skills (see the pipeline sketch after this description).
• Experience using libraries such as Pandas, NumPy, and Matplotlib, and working with file-based systems (CSV/Parquet).
• Extensive knowledge of data and of building data pipelines.
• Hands-on experience designing and implementing APIs.
• Hands-on experience building microservices using FastAPI or related technologies (see the microservice sketch after this description).
• Experience writing unit tests and ensuring code coverage.
• Experience with Agile methodology, JIRA, and Confluence.
• Experience with version control using Git.

Flex Skills/Nice to Have:
• Prior work with Big Data and PySpark libraries is an added advantage.

Education/Certifications:
• Bachelor's degree preferred.
• Python or Spark certifications are a plus.

Role Differentiator: Conversion is a possibility and growth opportunities are immense; there is always work, always ways to grow, and plenty of new work and opportunities to be energized by. This department supports the bank in a major way and is critical to the bank's success.

Skills: as listed under Must-Have Technical Skills above.

Share your resume with ariz.khan@systemone.com, or connect on LinkedIn: Ariz J. Khan | LinkedIn

Ref: #404-IT Pittsburgh
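For illustration only, here is a minimal sketch of the kind of file-based PySpark pipeline work the role describes (CSV in, Parquet out to a data lake path). The paths, column names, and partition key are hypothetical placeholders, not details from the posting.

```python
# Minimal, illustrative PySpark pipeline: read raw CSV, clean it, write Parquet.
# All paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Read raw CSV input from a (hypothetical) landing zone.
raw = spark.read.option("header", True).csv("/landing/transactions/*.csv")

# Basic cleansing/transformation step.
cleaned = (
    raw.dropDuplicates()
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
)

# Write the curated output to a (hypothetical) data-lake path as Parquet,
# partitioned by a date column.
cleaned.write.mode("overwrite").partitionBy("txn_date").parquet("/curated/transactions/")

spark.stop()
```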
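Likewise, a minimal sketch of a FastAPI microservice endpoint with a unit test, reflecting the API, microservice, and testing skills listed above. The route, model, and field names are assumptions for the example, not part of the role description.

```python
# Illustrative FastAPI microservice with a simple pytest-style unit test.
# Route, model, and field names are hypothetical.
from fastapi import FastAPI
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()

class PipelineRun(BaseModel):
    pipeline_name: str
    partition_date: str

@app.post("/runs")
def create_run(run: PipelineRun) -> dict:
    # In a real service this would enqueue or trigger the pipeline run.
    return {"status": "accepted", "pipeline": run.pipeline_name}

# Unit test using FastAPI's TestClient; runs under pytest.
client = TestClient(app)

def test_create_run():
    resp = client.post(
        "/runs",
        json={"pipeline_name": "transactions", "partition_date": "2025-10-01"},
    )
    assert resp.status_code == 200
    assert resp.json()["status"] == "accepted"
```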