

Strategic Staffing Solutions
Big Data Developer (Spark / Scala / Python)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer (Spark / Scala / Python) in Charlotte, NC, running April 2026 to April 2028. It requires 5+ years in big data development, strong skills in Apache Spark, Scala, and Python, and experience in financial services.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 14, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Apache Spark #Kafka (Apache Kafka) #GIT #Data Governance #Spark (Apache Spark) #Data Pipeline #Data Processing #Cloud #Scala #Big Data #ETL (Extract, Transform, Load) #Version Control #HDFS (Hadoop Distributed File System) #Datasets #Compliance #Automation #Batch #Agile #Hadoop #GitHub #HBase #Data Lake #Security #Data Engineering #Python #Data Integrity #Data Ingestion
Role description
Big Data Developer (Spark / Scala / Python)
Location: Charlotte, NC (Hybrid)
Duration: April 2026 – April 2028 (Long-term Contract)
Industry: Financial Services / Enterprise Risk Technology
Overview
We are seeking an experienced Big Data Developer to support enterprise data and risk technology initiatives within a large-scale financial services environment. This role focuses on designing, developing, and optimizing distributed data processing solutions using modern big data technologies.
The ideal candidate will have strong experience working with Apache Spark, Scala, Python, and large-scale data pipelines within the Hadoop ecosystem. You will collaborate with engineering teams to build high-performance data processing systems that support enterprise analytics, regulatory reporting, and risk management initiatives.
Key Responsibilities
Big Data Development
• Design and develop scalable big data applications using Apache Spark, Scala, and Python.
• Build and optimize large-scale distributed data processing pipelines.
• Work with large datasets to support enterprise analytics, risk technology, and data platforms.
Data Engineering & ETL
• Develop and maintain ETL / ELT pipelines for data ingestion, transformation, and processing.
• Integrate data from multiple sources within Hadoop-based data ecosystems.
• Optimize data processing workflows for performance, reliability, and scalability.
Big Data Technologies
• Work within the Hadoop ecosystem including Hive, HDFS, Kafka, and HBase.
• Support data platforms that process high-volume data workloads.
• Implement batch and streaming data processing solutions.
Automation & Scheduling
• Develop automated workflows and job scheduling using Autosys or similar scheduling tools.
• Improve operational efficiency by automating data processing and pipeline management tasks.
Development Best Practices
• Maintain source code using Git / GitHub version control.
• Follow Agile development practices and collaborate with cross-functional engineering teams.
• Participate in design discussions and contribute to data platform architecture decisions.
Data Governance & Compliance
• Support data governance, security, and regulatory compliance requirements in financial services environments.
• Ensure data integrity, accuracy, and secure handling of sensitive enterprise data.
Required Qualifications
• 5+ years of software engineering or big data development experience
• Strong experience with Apache Spark, Scala, and Python
• Experience working with large-scale distributed data systems
• Experience developing ETL pipelines and data processing frameworks
• Experience with Git or other version control systems
Preferred Qualifications
• Experience working within the Hadoop ecosystem (Hive, HDFS, Kafka, HBase)
• Experience with Autosys or other enterprise job scheduling tools
• Experience in financial services or highly regulated environments
• Experience with cloud data platforms or modern data lake architectures
Key Skills
• Apache Spark
• Scala
• Python
• Hadoop Ecosystem (Hive, HDFS, Kafka, HBase)
• ETL / Data Pipelines
• Autosys Job Scheduling
• Git / Version Control
• Distributed Data Processing
• Data Governance & Compliance
• Enterprise Data Platforms