Strategic Staffing Solutions

Big Data Developer (Spark / Scala / Python)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer (Spark/Scala/Python) in Charlotte, NC, on a contract from April 2026 to April 2028, paying $65.00 - $95.00 per hour. It requires 5+ years of experience and strong skills in Spark, Scala, Python, and ETL processes.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
760
-
🗓️ - Date
March 14, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC 28202
-
🧠 - Skills detailed
#Agile #Version Control #Apache Spark #GIT #Data Processing #Distributed Computing #Data Engineering #Data Pipeline #Data Framework #Automation #Datasets #Python #Spark (Apache Spark) #Scala #Big Data #ETL (Extract, Transform, Load)
Role description
Big Data Developer (Spark / Scala / Python)
Location: Charlotte, NC (Hybrid)
Duration: Contract – April 2026 to April 2028

Overview
We are seeking an experienced Big Data Developer to support enterprise risk technology initiatives. This role will focus on developing and maintaining large-scale data processing solutions using Spark, Scala, and Python within a distributed big data environment. The ideal candidate will have strong experience working with big data frameworks, scheduling tools, and source control platforms, and will collaborate closely with engineering teams to support complex data processing and analytics initiatives.

Key Responsibilities

Big Data Development
- Design and develop scalable big data solutions using Spark, Scala, and Python.
- Build and optimize data pipelines and distributed data processing workflows.
- Work with large datasets to support enterprise analytics and risk technology platforms.

Data Engineering & Processing
- Develop and maintain ETL and data transformation processes.
- Optimize performance for large-scale Spark-based workloads.
- Ensure reliability and scalability across distributed computing environments.

Automation & Scheduling
- Implement job scheduling and automation using Autosys or similar scheduling tools.
- Monitor and maintain data processing workflows to ensure operational stability.

Collaboration
- Work closely with engineering teams, data engineers, and technical stakeholders to deliver scalable solutions.
- Participate in design discussions and contribute to architecture decisions for data platforms.

Development Best Practices
- Maintain source control and code versioning using Git.
- Follow established development standards and collaborate through Agile processes.
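The extract-transform-load pattern named in the responsibilities above can be sketched in plain Python. This is an illustrative sketch only, not the employer's code: the record fields, thresholds, and function names are invented, and a production pipeline would use Spark DataFrames (e.g. read, filter, groupBy, write) over a cluster rather than in-memory lists.

```python
from collections import defaultdict

# Hypothetical sample records; a real pipeline would read these from a
# data lake via Spark rather than hard-coding them.
records = [
    {"account": "A-1", "exposure": 1200.0, "region": "US"},
    {"account": "A-2", "exposure": -50.0, "region": "US"},
    {"account": "A-3", "exposure": 800.0, "region": "EU"},
]

def extract(rows):
    """Extract step: yield raw records (stands in for spark.read)."""
    yield from rows

def transform(rows):
    """Transform step: drop records with non-positive exposure and flag
    large positions (stands in for DataFrame.filter / withColumn)."""
    for row in rows:
        if row["exposure"] > 0:  # filter out invalid records
            yield {**row, "large": row["exposure"] >= 1000}

def load(rows):
    """Load step: aggregate exposure per region
    (stands in for groupBy("region").sum("exposure").write)."""
    totals = defaultdict(float)
    for row in rows:
        totals[row["region"]] += row["exposure"]
    return dict(totals)

result = load(transform(extract(records)))
print(result)  # {'US': 1200.0, 'EU': 800.0}
```

Chaining generators this way mirrors Spark's lazy evaluation: nothing is computed until the final aggregation consumes the stream.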
Required Qualifications
- 5+ years of software engineering or big data development experience
- Strong experience with Apache Spark, Scala, and Python
- Experience working with large-scale data processing systems
- Experience with Autosys or similar job scheduling tools
- Experience using Git or other version control platforms

Preferred Qualifications
- Experience working in financial services or enterprise data environments
- Experience building big data pipelines or ETL frameworks
- Familiarity with distributed computing and data platform architecture

Key Skills
Apache Spark, Scala, Python, Big Data Development, ETL / Data Pipelines, Autosys Scheduling, Git Version Control, Distributed Data Processing

Pay: $65.00 - $95.00 per hour
Benefits: Dental insurance, Health insurance, Vision insurance
Work Location: In person
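For candidates unfamiliar with Autosys, job scheduling there is defined in JIL (Job Information Language). The fragment below is a hypothetical sketch of a nightly Spark batch job; the job name, machine, owner, and file paths are all invented for illustration and would differ in any real environment.

```
/* Hypothetical Autosys JIL definition for a nightly Spark ETL job. */
insert_job: risk_daily_spark_etl   job_type: CMD
command: /apps/spark/bin/spark-submit --class com.example.RiskEtl /apps/jobs/risk-etl.jar
machine: batch-host-01
owner: svc_batch
start_times: "02:00"
days_of_week: mo,tu,we,th,fr
std_out_file: /var/log/autosys/risk_daily_spark_etl.out
std_err_file: /var/log/autosys/risk_daily_spark_etl.err
alarm_if_fail: 1
```

Definitions like this are typically loaded with the `jil` command-line utility and monitored through Autosys's standard job-status tooling.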