

Sr. PySpark Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. PySpark Engineer on a 6-month contract, offering $75.00 - $100.00 per hour. It requires 10+ years in data engineering, 5 years of PySpark coding, and 2 years in the pharmaceutical industry. Work is remote with occasional onsite presence.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
800
🗓️ - Date discovered
May 14, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Remote
🧠 - Skills detailed
#Data Engineering #Cloud #SQL (Structured Query Language) #Jupyter #ETL (Extract, Transform, Load) #Leadership #Spark (Apache Spark) #Data Processing #Data Warehouse #SQL Queries #PySpark #Python #AWS (Amazon Web Services) #Informatica #Data Science #Airflow
Role description
Role: Principal/ Sr. PySpark Engineer
Duration: 6-month contract-to-hire
Mostly remote, with a couple of days onsite each month
Job Summary:
We are seeking a Principal Software/Data Engineer (PySpark) with experience in cloud and distributed-compute, data-heavy environments. This is not a data scientist position: the role blends principal-level software engineering with data engineering, particularly in a regulated (GxP) environment. Rather than doing exploratory analysis or modeling, this engineer builds the infrastructure, pipelines, and systems that support data workflows at scale, reliably and compliantly.
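As a purely illustrative sketch of that kind of work (the S3 paths, schema, and column names below are hypothetical, not taken from this posting), a minimal PySpark batch pipeline might look like this:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical batch pipeline: raw zone in, curated zone out.
spark = SparkSession.builder.appName("batch_record_pipeline").getOrCreate()

# Read raw records landed in S3 (placeholder path).
raw = spark.read.parquet("s3://example-raw-zone/batch_records/")

# Standardize, de-duplicate, and keep only completed records.
curated = (
    raw.withColumn("ingested_date", F.to_date("ingested_at"))
       .dropDuplicates(["record_id"])
       .filter(F.col("status") == "COMPLETE")
)

# Write partitioned output so downstream jobs can prune by date.
(curated.write
    .mode("overwrite")
    .partitionBy("ingested_date")
    .parquet("s3://example-curated-zone/batch_records/"))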
The candidate must have experience with the following:
Actual hands-on coding experience in PySpark and AWS
Prior team leadership or mentorship experience
GxP or regulated industry experience
Skillset: strong hands-on software development and data engineering experience in cloud/distributed-compute environments, preferably on the PySpark, AWS, Airflow, and JupyterHub stack (a minimal orchestration sketch follows below)
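For orientation only, here is a minimal sketch of how a PySpark job might be scheduled on that stack. The DAG id, schedule, and script path are hypothetical, and it assumes Airflow 2.4+ with the Apache Spark provider package installed:

from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="curated_batch_records",   # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # "schedule" requires Airflow 2.4+
    catchup=False,
) as dag:
    # Submit the PySpark pipeline as a single daily task.
    run_pipeline = SparkSubmitOperator(
        task_id="run_batch_record_pipeline",
        application="/opt/jobs/batch_record_pipeline.py",  # placeholder path
    )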
Experience:
10+ years of experience in software development/data engineering
Coding experience with PySpark for data processing and analysis
Writing complex SQL queries to extract and manipulate data (see the query sketch after this list)
Implementing ETL processes using tools like Informatica
Collaborating with cross-functional teams to optimize data warehouse architecture
Using Python and Spark for data processing and analysis
Strong hands-on software development and data engineering experience in cloud/distributed-compute environments, preferably on the PySpark, AWS, Airflow, and JupyterHub stack
Experience leading technical development teams and successfully delivering releases in GxP ecosystems
Ability to review code and architecture and to generate data models
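As a hedged illustration of the complex-SQL item above (the table and column names are invented for the example, not part of this posting), a query run through Spark SQL might resemble:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("warehouse_query_example").getOrCreate()

# Latest completed batch per site, joined to site metadata (hypothetical tables).
latest_batches = spark.sql("""
    SELECT s.site_name,
           b.batch_id,
           b.completed_at
    FROM (
        SELECT batch_id,
               site_id,
               completed_at,
               ROW_NUMBER() OVER (PARTITION BY site_id ORDER BY completed_at DESC) AS rn
        FROM curated.batch_records
        WHERE status = 'COMPLETE'
    ) b
    JOIN curated.sites s
      ON s.site_id = b.site_id
    WHERE b.rn = 1
""")

latest_batches.show()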
If you are a seasoned engineer looking to make a significant impact through innovative data solutions, we encourage you to apply for this exciting opportunity.
Job Type: Contract
Pay: $75.00 - $100.00 per hour
Expected hours: 40 per week
Schedule:
Monday to Friday
People with a criminal record are encouraged to apply
Experience:
PySpark coding: 5 years (Required)
Data Engineering: 10 years (Required)
Pharmaceutical: 2 years (Required)
Ability to Commute:
Remote (Required)
Willingness to travel:
25% (Required)
Work Location: Remote