PySpark Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a PySpark Developer on a 6-month contract in Charlotte, NC, offering $50-$60/hr. Requires 2+ years in Python, PySpark, Apache Airflow, and experience in the Retail industry, with a focus on data transformation and optimization.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
480
πŸ—“οΈ - Date discovered
August 12, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Deployment #Python #Apache Airflow #PostgreSQL #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Processing #Airflow #Spark (Apache Spark) #Compliance #RDBMS (Relational Database Management System) #Storage #JSON (JavaScript Object Notation) #Data Storage #Spark SQL #Kubernetes #PySpark
Role description
PySpark Developer | Hybrid | Charlotte, NC | $50-$60/hr W2

Our client, a leader in their industry, has an excellent opportunity for a PySpark Developer on a 6-month contract in Charlotte, NC, working hybrid onsite two days a week. We can facilitate W2 and corp-to-corp consultants. For our W2 consultants, we offer a benefits package that includes medical, dental, and vision coverage, a 401(k) with company matching, and life insurance.

Rate: $50-$60/hr

Responsibilities:
• Design, develop, and maintain PySpark-based data transformation pipelines for diverse formats (JSON, CSV, RDBMS, streaming) on Kubernetes/on-prem platforms.
• Optimize and tune PySpark jobs to efficiently handle medium- to large-scale data volumes.
• Develop and implement data workflows leveraging Apache Airflow.
• Support and maintain PySpark applications, ensuring reliability and performance.
• Collaborate with cross-functional teams to ensure high-quality data processing and compliance.

Requirements:
• 2+ years of experience in Python and PySpark development.
• 2+ years of experience with PySpark data transformation pipeline design and deployment on Kubernetes/on-prem platforms.
• 2+ years of experience in performance optimization for large-scale data processing.
• 2+ years of experience in designing and implementing Apache Airflow workflows.
• 2+ years of experience with Spark SQL and PostgreSQL for data storage and querying.
• Experience working in the Retail industry.

Job #: JN-082025-103194

Skills, experience, and other compensable factors will be considered when determining pay rate. The pay range provided in this posting reflects a W2 hourly rate; other employment options may be available and may result in pay outside of the provided range.
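For candidates gauging fit, the "data transformation pipelines for diverse formats" responsibility above typically involves flattening nested records (e.g., JSON) into tabular rows for a relational store. Below is a minimal, framework-agnostic sketch of that kind of shaping; the order schema and field names are hypothetical examples, not the client's actual data model, and in a real PySpark job this logic would normally be expressed with DataFrame expressions (`spark.read.json(...).select(...)`) rather than plain Python:

```python
import json

def flatten_order(raw: str) -> dict:
    """Flatten one nested retail order record (hypothetical schema) into a flat row.

    In a PySpark pipeline the same shaping would usually be DataFrame
    expressions; this plain-Python version just illustrates the transformation.
    """
    order = json.loads(raw)
    items = order["items"]
    return {
        "order_id": order["id"],
        "customer_id": order["customer"]["id"],
        # Derive aggregate columns from the nested line items.
        "total": round(sum(i["qty"] * i["price"] for i in items), 2),
        "item_count": sum(i["qty"] for i in items),
    }

# Example input record (hypothetical).
raw = '{"id": 7, "customer": {"id": 42}, "items": [{"qty": 2, "price": 9.99}, {"qty": 1, "price": 5.00}]}'
row = flatten_order(raw)
```

At Spark scale, the equivalent work is distributed across executors, which is where the job-tuning and optimization experience the requirements call for comes in.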
W2 employees of Eliassen Group who are regularly scheduled to work 30 or more hours per week are eligible for the following benefits: medical (choice of 3 plans), dental, vision, pre-tax accounts, other voluntary benefits including life and disability insurance, 401(k) with match, and sick time if required by law in the worked-in state/locality.

Please be advised: if anyone reaches out to you about an open position connected with Eliassen Group, please confirm that they have an Eliassen.com email address, and never provide personal or financial information to anyone who is not clearly associated with Eliassen Group. If you have any indication of fraudulent activity, please contact InfoSec@eliassen.com.