

Python Developer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Python Developer with 9+ years of experience, focusing on PySpark, AWS, and Java. It is a 1+ year contract located in Pasadena, CA. Required skills include Django, Flask, and strong ETL pipeline development.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 7, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: 1099 Contractor
Security clearance: Unknown
Location detailed: Pasadena, CA
Skills detailed: #ML (Machine Learning) #Agile #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Code Reviews #Datasets #SQL (Structured Query Language) #GIT #Apache Spark #Data Processing #Lambda (AWS Lambda) #IAM (Identity and Access Management) #Scala #Data Engineering #Flask #Spark (Apache Spark) #Python #Version Control #Scrum #Data Science #PySpark #AI (Artificial Intelligence) #Java #Django #ETL (Extract, Transform, Load) #Cloud
Role description
Position: Python Developer with PySpark, AWS, and Java
Location: Pasadena, CA (local candidates only; face-to-face interview is a must)
Job type: 1+ year contract
Experience level: 9+ years, excluding internship projects
Visa: Any visa is fine (USC, GC, H1, H4 EAD, OPT EAD)
Must have: Django, Flask, PySpark, AWS, and Java (do not submit Python AI/ML, Python full-stack, or Python/ML profiles)
Job Summary:
We are seeking a highly skilled Python Developer with strong PySpark expertise and hands-on experience in AWS cloud services to join our data engineering team. The ideal candidate will focus on designing and developing scalable data processing solutions using Apache Spark on the AWS platform. Java development is a secondary skill used for legacy system integration and occasional support.
Key Responsibilities:
• Design, build, and optimize scalable ETL pipelines using PySpark
• Work with large datasets to perform data transformation, cleansing, and aggregation
• Develop and deploy data processing applications on AWS (e.g., EMR, S3, Lambda, Glue)
• Develop reusable and efficient Python code following best practices
• Collaborate with data engineers, data scientists, and product teams
• Integrate data processing workflows with other services and systems, using Java where needed
• Monitor, troubleshoot, and improve the performance of data jobs in distributed environments
• Participate in code reviews and contribute to a culture of continuous improvement
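The transformation, cleansing, and aggregation steps above can be sketched in plain Python; in PySpark the same shape maps onto `DataFrame.filter`, `withColumn`, and `groupBy().agg()`. All field and function names here are illustrative, not from the posting.

```python
# Minimal sketch of an ETL-style cleanse-then-aggregate pass, assuming a
# hypothetical record schema with "customer_id" and "amount" fields.
from collections import defaultdict

def cleanse(records):
    """Drop rows with a missing key and normalize the amount to a float."""
    for row in records:
        if row.get("customer_id") is None:
            continue  # cleansing step: discard incomplete rows
        yield {**row, "amount": float(row.get("amount") or 0)}

def aggregate(records):
    """Sum amounts per customer (the groupBy/agg step in Spark terms)."""
    totals = defaultdict(float)
    for row in records:
        totals[row["customer_id"]] += row["amount"]
    return dict(totals)

raw = [
    {"customer_id": "a", "amount": "10.5"},
    {"customer_id": None, "amount": "3.0"},  # dropped during cleansing
    {"customer_id": "a", "amount": "4.5"},
    {"customer_id": "b", "amount": None},    # normalized to 0.0
]
totals = aggregate(cleanse(raw))  # {"a": 15.0, "b": 0.0}
```

In a real PySpark job the generator functions would be replaced by DataFrame operations so the work distributes across the cluster rather than running on one machine.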
Required Skills & Qualifications:
• 8+ years of professional experience in Python development
• 5+ years of hands-on experience with Apache Spark / PySpark
• Strong understanding of distributed data processing
• Proven experience with AWS services, including EMR, Glue, S3, Lambda, IAM, and CloudWatch
• Working knowledge of Java for backend or integration purposes
• Experience with data structures, algorithms, and performance tuning in Spark
• Familiarity with SQL for querying large datasets
• Experience with version control systems such as Git
• Familiarity with Agile/Scrum development methodology
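The SQL requirement above typically means aggregation queries over large tables. A minimal sketch, run here against an in-memory SQLite table (on the job the same query shape would target Spark SQL or Glue catalog tables); the schema and values are invented for illustration.

```python
# A typical rollup query: total bytes per user, largest first.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id TEXT, bytes INTEGER)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?)",
    [("u1", 100), ("u2", 50), ("u1", 25)],
)

rows = conn.execute(
    "SELECT user_id, SUM(bytes) AS total "
    "FROM events GROUP BY user_id ORDER BY total DESC"
).fetchall()  # [("u1", 125), ("u2", 50)]
```

The same `GROUP BY` / `ORDER BY` statement runs unchanged through `spark.sql(...)` against a registered temporary view, which is where the Spark and SQL requirements overlap.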