

PySpark Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a PySpark Developer; the contract length and pay rate are not listed. It requires 9+ years of PySpark experience, Azure Cloud experience, and prior work in financial services. Familiarity with SQL, NoSQL, and data warehousing is essential.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 9, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Iselin, NJ
Skills detailed: #Azure Databricks #TensorFlow #Spark (Apache Spark) #Databricks #Databases #Kubernetes #Azure cloud #Scala #Data Quality #Data Cleansing #Data Processing #Data Governance #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #ML (Machine Learning) #Security #Azure #Azure Data Factory #PySpark #NoSQL #Big Data #Data Architecture #ADF (Azure Data Factory) #Docker #Cloud
Role description
Responsibilities:
• Develop, maintain, and optimize data processing pipelines using PySpark on Azure Cloud.
• Collaborate with cross-functional teams to understand data requirements and translate business needs into technical specifications.
• Design and implement scalable data models and data architecture to support analytical and reporting needs.
• Perform data cleansing, transformation, and enrichment processes to enhance data quality and usability.
• Monitor and troubleshoot data processing jobs, ensuring high availability and performance.
• Implement best practices for data governance and security within Azure Cloud.
• Stay up to date with emerging technologies in big data, cloud computing, and the financial services industry.
Requirements:
• Minimum of 9 years of experience in PySpark development.
• Proven experience working with Azure Cloud services (Azure Data Factory, Azure Databricks, etc.).
• Prior experience in the banking, investment banking, or financial services sector is mandatory.
• Proficient in PySpark for data processing and ETL tasks.
• Experience with SQL and NoSQL databases.
• Familiarity with data warehousing concepts and tools.
• Knowledge of Azure services related to big data and analytics.
Preferred, but not required:
• Experience with containerization technologies (e.g., Docker, Kubernetes).
• Familiarity with machine learning frameworks (e.g., MLlib, TensorFlow).