

PySpark Developer
Featured Role | Apply directly with Data Freelance Hub
This role is for a PySpark Developer with a contract length of "unknown" and a day rate of $504. It requires 9+ years of PySpark experience, expertise in Azure Cloud, and prior work in the financial services sector.
Country: United States
Currency: $ USD
Day rate: $504
Date discovered: June 13, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Iselin, NJ
Skills detailed: #Spark (Apache Spark) #Azure Databricks #Data Cleansing #Azure Data Factory #ADF (Azure Data Factory) #Data Architecture #Data Processing #Kubernetes #Scala #Data Governance #Big Data #Databricks #Security #Databases #Data Quality #Compliance #ML (Machine Learning) #Docker #Azure #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #TensorFlow #NoSQL #Azure cloud #PySpark #Cloud
Role description
Job Description:
We are seeking a PySpark Developer to build robust data processing solutions on Azure Cloud, driving innovation within the financial services industry. The role draws on extensive PySpark experience and hands-on Azure Cloud expertise to navigate the complexities of big data, ensuring compliance with industry standards while delivering impactful results. We aim to adapt rapidly to evolving technologies and create scalable data solutions that enhance operational efficiency and decision-making.
Responsibilities:
• Develop, maintain, and optimize data processing pipelines using PySpark on Azure Cloud.
• Collaborate with cross-functional teams to understand data requirements and translate business needs into technical specifications.
• Design and implement scalable data models and data architecture to support analytical and reporting needs.
• Perform data cleansing, transformation, and enrichment processes to enhance data quality and usability.
• Monitor and troubleshoot data processing jobs, ensuring high availability and performance.
• Implement best practices for data governance and security within Azure Cloud.
• Stay up to date with emerging technologies in big data, cloud computing, and the financial services industry.
Requirements:
• Minimum of 9 years of experience in PySpark development.
• Proven experience working with Azure Cloud services (Azure Data Factory, Azure Databricks, etc.).
• Prior experience in the banking, investment banking, or financial services sector is mandatory.
• Proficient in PySpark for data processing and ETL tasks.
• Experience with SQL and NoSQL databases.
• Familiarity with data warehousing concepts and tools.
• Knowledge of Azure services related to big data and analytics.
Preferred, but not required:
• Experience with containerization technologies (e.g., Docker, Kubernetes).
• Familiarity with machine learning frameworks (e.g., MLlib, TensorFlow).