

Saama
Senior AWS PySpark Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AWS PySpark Developer with 8-10 years of experience, offering a hybrid work location in South San Francisco, CA. Pay rate is competitive. Key skills include AWS, Python, data pipeline tools, and big data technologies.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: March 18, 2026
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: South San Francisco, CA
Skills detailed:
#Datasets #Computer Science #AWS (Amazon Web Services) #ECR (Elastic Container Registry) #Redshift #Big Data #Scripting #Cloud #Athena #NoSQL #Tableau #AWS Glue #PySpark #Data Wrangling #Python #TensorFlow #BI (Business Intelligence) #Databases #PostgreSQL #Spark (Apache Spark) #ML (Machine Learning) #Docker #IAM (Identity and Access Management) #Data Architecture #Data Engineering #Data Processing #Kubernetes #SageMaker #Visualization #Data Pipeline #Microsoft Power BI
Role description
Role: Senior AWS PySpark Developer
Location: Hybrid, South San Francisco, CA
We are seeking an experienced Sr. AWS PySpark Developer with 8-10 years of experience to design, build, and optimize our data pipelines and analytics architecture. The ideal candidate will have a strong background in data wrangling and analysis, with a deep understanding of AWS data services.
Key Responsibilities:
• Design, build, and optimize robust data pipelines and data architecture on the AWS cloud platform.
• Wrangle, explore, and analyze large datasets to identify trends, answer business questions, and pinpoint areas for improvement.
• Develop and maintain a next-generation analytics environment, providing a self-service, centralized platform for all data-centric activities.
• Formulate and implement distributed algorithms for effective data processing and trend identification.
• Configure and manage Identity and Access Management (IAM) on the AWS platform.
• Collaborate with stakeholders to understand data requirements and deliver effective solutions.
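As an illustrative sketch only (not part of the posting), the "wrangle and analyze large datasets to identify trends" responsibility typically reduces to a grouped aggregation over records. The hypothetical helper below shows that shape with the Python standard library; in PySpark the same step would be expressed as `df.groupBy("date").agg(avg("value"))`.

```python
from collections import defaultdict

def daily_averages(records):
    """Aggregate {"date", "value"} records into per-date averages.

    Hypothetical helper for illustration; a PySpark pipeline would
    express this same step as df.groupBy("date").agg(avg("value")).
    """
    sums = defaultdict(lambda: [0.0, 0])  # date -> [running total, count]
    for rec in records:
        bucket = sums[rec["date"]]
        bucket[0] += rec["value"]
        bucket[1] += 1
    return {day: total / count for day, (total, count) in sums.items()}

rows = [
    {"date": "2026-03-18", "value": 10.0},
    {"date": "2026-03-18", "value": 20.0},
    {"date": "2026-03-19", "value": 5.0},
]
print(daily_averages(rows))  # → {'2026-03-18': 15.0, '2026-03-19': 5.0}
```

The stdlib version makes the aggregation explicit; Spark's value is running this same logic partitioned across a cluster instead of in one process.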
Required Skills & Experience:
• 8-10 years of experience as a Data Engineer or Developer.
• Proven experience building and optimizing data pipelines on AWS.
• Proficiency in scripting with Python.
• Strong working knowledge of:
  • Big Data Tools: AWS Athena.
  • Relational & NoSQL Databases: AWS Redshift and PostgreSQL.
  • Data Pipeline Tools: AWS Glue, AWS Data Pipeline, or AWS Lake Formation.
  • Container Orchestration: Kubernetes, Docker, Amazon ECR/ECS/EKS.
• Experience with wrangling, exploring, and analyzing data.
• Strong organizational and problem-solving skills.
Preferred Skills:
• Experience with machine learning tools (SageMaker, TensorFlow).
• Working knowledge of stream processing (Kinesis, Spark Streaming).
• Experience with analytics and visualization tools (Tableau, Power BI).
• Knowledge of optimizing AWS Redshift performance.
Education
• Bachelor's or Master's Degree in Information Technology, Computer Science, or a relevant field.
Work Environment
This job operates in a professional office environment. This role routinely uses standard office equipment, including, but not limited to, computers, phones, and photocopiers.
Physical Demands
This position requires the frequent and repetitive use of a computer, keyboard, and mouse. Hand and finger dexterity is required.
Other Duties
Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.
EEO
Saama provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws.
This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.