

Lead Python Developer with SciPy & PySpark Experience
Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Python Developer with 10+ years of Python experience, specializing in SciPy and PySpark. It offers a remote contract at $70-$80/hr. Requires strong data science skills, cloud platform experience, and familiarity with Agile methodologies.
Country: United States
Currency: $ USD
Day rate: $640
Date discovered: July 12, 2025
Project duration: Unknown
Location type: Remote
Contract type: 1099 Contractor
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Spark SQL #ML (Machine Learning) #Spark (Apache Spark) #Python #Plotly #Version Control #Scala #Pandas #SciPy #Hadoop #SQL (Structured Query Language) #Datasets #Kubernetes #PySpark #Agile #BI (Business Intelligence) #GCP (Google Cloud Platform) #Data Engineering #Data Pipeline #HDFS (Hadoop Distributed File System) #DevOps #Airflow #Cloud #NumPy #AWS (Amazon Web Services) #Data Science #Azure #Scrum #Big Data #Docker #ETL (Extract, Transform, Load) #Git #Matplotlib #TensorFlow #Libraries #Data Processing
Role description
Job Role: Lead Python Developer
Location: Remote
Pay Rate: $70-$80/hr. on 1099 or $68/hr. on W2
Acceptable Work Authorization: USC, GC, GC-EAD, H4-EAD Only
Job Summary:
We are looking for an experienced Python Developer with a strong background in data science, numerical computing, and distributed data processing. The ideal candidate will have hands-on experience working with SciPy for scientific computations and PySpark for big data processing. You will work closely with data engineers, scientists, and business stakeholders to develop scalable and performant solutions.
Key Responsibilities:
• Design, develop, and deploy data pipelines using PySpark for large-scale datasets.
• Build and optimize scientific and numerical computation models using SciPy and related libraries.
• Write efficient, reusable, and scalable Python code for data processing, transformation, and analytics.
• Collaborate with cross-functional teams including data scientists, DevOps, and BI teams.
• Debug, optimize, and improve the performance of existing code and workflows.
• Implement unit tests and follow CI/CD best practices.
• Work in an Agile/Scrum environment and participate in sprint planning and reviews.
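As a rough illustration of the SciPy modeling work described above, the sketch below fits a simple exponential-decay model to synthetic data with `scipy.optimize.curve_fit`; the model, parameters, and data are hypothetical, not taken from this role:

```python
import numpy as np
from scipy import optimize

# Hypothetical model: y = a * exp(-b * x)
def model(x, a, b):
    return a * np.exp(-b * x)

# Generate synthetic observations with a fixed seed for reproducibility
rng = np.random.default_rng(0)
x = np.linspace(0, 4, 50)
y = model(x, 2.5, 1.3) + 0.05 * rng.normal(size=x.size)

# Least-squares fit starting from an initial guess of (1.0, 1.0)
params, _ = optimize.curve_fit(model, x, y, p0=(1.0, 1.0))
```

In practice the same pattern extends to any parametric model: define the function, supply data and an initial guess, and `curve_fit` returns the fitted parameters and their covariance.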
Required Skills:
• Strong proficiency in Python (10+ years).
• Solid experience with SciPy, NumPy, and related scientific Python libraries.
• Proficiency in PySpark for distributed data processing using Spark.
• Experience with DataFrames, RDDs, and Spark SQL.
• Good understanding of Big Data ecosystems such as Hadoop, Hive, or HDFS.
• Familiarity with version control tools such as Git.
• Experience working with large datasets in cloud platforms (e.g., AWS, Azure, GCP).
• Strong analytical and problem-solving skills.
Preferred Skills:
• Experience with Pandas, Matplotlib, Seaborn, or Plotly.
• Knowledge of machine learning workflows using Scikit-learn or TensorFlow.
• Exposure to workflow orchestration tools like Airflow or Prefect.
• Familiarity with Docker/Kubernetes and CI/CD pipelines.