Strategic Staffing Solutions

Big Data Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer on a 12-month W2 contract in Charlotte, NC, offering $65-70/hour. Key skills include Hadoop, ETL, SQL Server, and Spark. Requires 5+ years in ETL development and 3+ years in Big Data technologies.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
πŸ—“οΈ - Date
October 9, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Scala #Schema Design #S3 (Amazon Simple Storage Service) #Data Engineering #Big Data #Spark (Apache Spark) #Leadership #GitHub #Batch #PySpark #Data Modeling #Jenkins #SQL (Structured Query Language) #MS SQL (Microsoft SQL Server) #Data Integration #Data Pipeline #ETL (Extract, Transform, Load) #Migration #Code Reviews #Datasets #SQL Server #Hadoop #Data Quality #RDS (Amazon Relational Database Service)
Role description
STRATEGIC STAFFING SOLUTIONS (S3) HAS AN OPENING!

Strategic Staffing Solutions is currently looking for a Big Data Engineer with Hadoop and ETL experience for a W2 contract opportunity with one of our largest clients. Candidates must be willing to work on our W2 only; no C2C.

Job Title: Big Data Engineer w/ Hadoop and ETL
Role Type: W2 only
Duration: 12 months
Location: Charlotte, NC
Schedule: Hybrid
W2 hourly rate: $65-70

TOP SKILLS
The ideal candidate will have strong Hadoop and ETL pipeline creation and modeling skills.
• Hadoop (ability to operate in a Hive environment with Spark and an understanding of Spark internals)
• Experience moving data from the MS SQL Server platform to the Hadoop/Hive platform (an illustrative sketch of this kind of job appears after this description)
• SQL Server
• SQL ETL data modeling experience is essential
• Big Data
• Confluence
• Jenkins
• GitHub

PROJECT DETAILS
This contractor will join the Operational Risk Data Integration and Delivery teams, which support front-line and second-line Analytics and Reporting. The team receives requests from product owners to update data sets in CARAT and RDS, allowing the control team to conduct deeper analytics and reporting.

DUTIES
This resource will support and monitor the daily jobs that populate these data sets and will be tasked with enhancing current data sets and integrating new ones.

Key Responsibilities:
• Design, develop, and maintain scalable ETL pipelines for batch processing using Big Data technologies.
• Work with large datasets in Hadoop environments, ensuring performance, reliability, and scalability.
• Build and maintain Spark/PySpark data transformation jobs and standalone transformation processes.
• Support and optimize existing data models and pipelines in the Snapshot portal.
• Ensure data quality and pipeline reliability for the ongoing CARAT application migration.
• Perform data modeling and understand schema design for Big Data.
• Collaborate with the hiring manager, technical leads, and cross-functional teams to design and implement data engineering solutions.
• Assist in maintaining and occasionally enhancing legacy SQL Server elements as needed.
• Participate in whiteboarding sessions and problem-solving discussions as part of the interview and team collaboration process.

Required Qualifications:
• 5+ years of experience in ETL development and data pipeline construction.
• 3+ years of hands-on experience with Big Data technologies, including Hadoop, Spark, and PySpark.
• Proven experience building and optimizing ETL workflows in batch processing environments.
• Strong understanding of data transformation and performance optimization in distributed systems.
• Working knowledge of SQL Server and the ability to support legacy systems.
• Experience with snapshot-based data modeling approaches.
• Strong problem-solving and communication skills; able to clearly explain solutions and technical decisions.
• Must be available for an in-person interview, including a resume walkthrough, problem-solving, and a whiteboarding session.

Preferred Qualifications:
• Experience with DCP (Data Control Platform) is nice to have.
• Prior experience in banking or financial services environments.
• Experience working with banking systems or projects is a strong plus.

SOFT SKILLS
• Leadership skills: able to act as a lead (no direct reports); expected to understand analysis of various features, perform code reviews, support group sessions, and lead on insights.
• Strong communication skills, written and verbal.
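For context on the day-to-day work described above, here is a minimal, illustrative PySpark sketch of the kind of batch job this role involves: reading a table from SQL Server over JDBC, applying a simple transformation, and landing the result in a partitioned Hive table. This is not the client's actual pipeline; every name below (host, database, tables, columns, user) is a hypothetical placeholder.

# Illustrative only: a minimal PySpark batch job that moves a table from
# SQL Server (via JDBC) into a partitioned Hive table. Every name below
# (host, database, tables, columns, user) is a hypothetical placeholder.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("sqlserver-to-hive-batch")  # placeholder app name
    .enableHiveSupport()                 # needed to write managed Hive tables
    .getOrCreate()
)

# Read the source table from SQL Server; the JDBC driver must be on the classpath.
source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-host:1433;databaseName=risk_db")  # placeholder
    .option("dbtable", "dbo.control_events")                                   # placeholder
    .option("user", "etl_user")                                                # placeholder
    .option("password", "********")  # supply via a secrets mechanism, never a literal
    .load()
)

# Simple transformation: normalize a date column and drop rows missing a key.
transformed_df = (
    source_df
    .withColumn("event_date", F.to_date("event_date"))
    .filter(F.col("event_id").isNotNull())
)

# Land the result in a partitioned Hive table for downstream analytics and reporting.
(
    transformed_df.write
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("risk_analytics.control_events_daily")  # placeholder Hive table
)

In practice, a job like this would run as a scheduled daily batch and be monitored, with data-quality checks applied before the write, which matches the duties and responsibilities listed above.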
"Beware of scams. S3 never asks for money during its onboarding process."