PySpark Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a PySpark Developer with 8+ years of experience in PySpark and Azure Cloud, focusing on data processing solutions in the financial services industry. Contract length is unspecified; location is hybrid in Iselin, NJ.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 7, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Iselin, NJ
🧠 - Skills detailed
#Spark (Apache Spark) #SQL (Structured Query Language) #TensorFlow #Azure Data Factory #Cloud #Azure Cloud #ML (Machine Learning) #Databricks #Data Quality #Compliance #NoSQL #Azure Databricks #Docker #Security #Data Architecture #PySpark #Azure #ETL (Extract, Transform, Load) #Kubernetes #Data Cleansing #Data Processing #Scala #Databases #Big Data #ADF (Azure Data Factory) #Data Governance
Role description
Title: PySpark Developer
Location: Iselin, NJ - Hybrid

We are seeking a PySpark Developer to build robust data processing solutions on Azure Cloud using PySpark to drive innovation within the financial services industry. Our challenge is to leverage extensive PySpark experience and hands-on expertise in Azure Cloud to navigate the complexities of big data, ensuring compliance with industry standards while delivering impactful results. We aim to adapt rapidly to evolving technologies and create scalable data solutions that enhance operational efficiency and decision-making.

Responsibilities:
• Develop, maintain, and optimize data processing pipelines using PySpark on Azure Cloud.
• Collaborate with cross-functional teams to understand data requirements and translate business needs into technical specifications.
• Design and implement scalable data models and data architecture to support analytical and reporting needs.
• Perform data cleansing, transformation, and enrichment to improve data quality and usability.
• Monitor and troubleshoot data processing jobs, ensuring high availability and performance.
• Implement best practices for data governance and security within Azure Cloud.
• Stay up to date with emerging technologies in big data, cloud computing, and the financial services industry.

Experience:
• 8+ years of experience in PySpark development.
• Proven experience with Azure Cloud services (Azure Data Factory, Azure Databricks, etc.).
• Prior experience in the banking, investment banking, or financial services sector is mandatory.
• Proficiency in PySpark for data processing and ETL tasks.
• Experience with SQL and NoSQL databases.
• Familiarity with data warehousing concepts and tools.
• Knowledge of Azure services related to big data and analytics.

Nice to have:
• Experience with containerization technologies (e.g., Docker, Kubernetes).
• Familiarity with machine learning frameworks (e.g., MLlib, TensorFlow).

Thanks and Regards,
Asif Ansari
Senior Technical Recruiter
Email: Asif.a@sitwinc.com