

Centraprise
Data Engineer (PySpark, Python) - Only W2/1099
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (PySpark, Python) in Charlotte, NC, on a 12+ month contract; the pay rate is not disclosed. It requires 3-6 years of experience, strong SQL, PySpark, and Python skills, and familiarity with AWS.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 15, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
1099 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Cloud #Agile #Distributed Computing #AWS (Amazon Web Services) #Data Pipeline #Programming #Version Control #Data Science #Code Reviews #ETL (Extract, Transform, Load) #Spark (Apache Spark) #PySpark #Python #Complex Queries #SQL (Structured Query Language) #Data Transformations #SQL Queries #Scala #GIT #Data Engineering #S3 (Amazon Simple Storage Service) #Data Quality #Lambda (AWS Lambda) #Data Extraction #Redshift
Role description
Data Engineer (PySpark, Python)
Charlotte, NC
12+ Months Contract
Job Description:
Responsibilities:
• Design, develop, and optimise large-scale data pipelines using PySpark and Python (a minimal sketch follows this list).
• Implement and adhere to best practices in object-oriented programming to build reusable, maintainable code.
• Write advanced SQL queries for data extraction, transformation, and loading (ETL).
• Collaborate closely with data scientists, analysts, and stakeholders to gather requirements and translate them into technical solutions.
• Troubleshoot data-related issues and resolve them in a timely and accurate manner.
• Leverage AWS cloud services (e.g., S3, EMR, Lambda, Glue) to build and manage cloud-native data workflows (preferred).
• Participate in code reviews, data quality checks, and performance tuning of data jobs.
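For context, below is a minimal sketch of the kind of PySpark pipeline described above. It is illustrative only: the application name, S3 paths, column names, and business rule are hypothetical and not part of this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical pipeline: aggregate completed orders into daily revenue per customer.
spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Extract: read raw order events (illustrative Parquet source on S3).
orders = spark.read.parquet("s3://example-bucket/raw/orders/")

# Transform: keep completed orders and aggregate revenue per customer per day.
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("daily_revenue"))
)

# Load: write partitioned output for downstream analytics.
(daily_revenue
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/curated/daily_revenue/"))

spark.stop()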
Required Skills & Qualifications:
• 3–6 years of relevant experience in a data engineering or backend development role.
• Strong hands-on experience with PySpark and Python, especially in designing and implementing scalable data transformations.
• Solid understanding of Object-Oriented Programming (OOP) principles and design patterns (an example pattern follows this list).
• Proficient in SQL, with the ability to write complex queries and optimise performance.
• Strong problem-solving skills and the ability to troubleshoot complex data issues independently.
• Excellent communication and collaboration skills.
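To illustrate the object-oriented, reusable-code expectation above, one common pattern is to model each transformation as a small class with a single apply method and compose them in sequence. The class and column names below are hypothetical, not taken from this posting.

from abc import ABC, abstractmethod
from pyspark.sql import DataFrame, functions as F

class Transformation(ABC):
    # Base class so each transformation stays small, composable, and unit-testable.
    @abstractmethod
    def apply(self, df: DataFrame) -> DataFrame:
        ...

class DeduplicateByKey(Transformation):
    def __init__(self, key_columns: list[str]):
        self.key_columns = key_columns

    def apply(self, df: DataFrame) -> DataFrame:
        return df.dropDuplicates(self.key_columns)

class AddLoadTimestamp(Transformation):
    def apply(self, df: DataFrame) -> DataFrame:
        return df.withColumn("load_ts", F.current_timestamp())

def run_pipeline(df: DataFrame, steps: list[Transformation]) -> DataFrame:
    # Apply each step in order; new steps can be added without touching existing ones.
    for step in steps:
        df = step.apply(df)
    return df

In practice a job might call run_pipeline(orders, [DeduplicateByKey(["order_id"]), AddLoadTimestamp()]) before writing the result.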
Preferred Qualifications (Nice to Have):
• Experience working with the AWS cloud ecosystem (S3, Glue, EMR, Redshift, Lambda, etc.); a small example follows this list.
• Exposure to data warehousing concepts, distributed computing, and performance tuning.
• Familiarity with version control systems (e.g., Git), CI/CD pipelines, and Agile methodologies.
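As one hedged example of where the AWS ecosystem fits in, the snippet below uses boto3 to list newly landed files under an S3 prefix, which could drive an incremental load; the bucket and prefix are hypothetical.

import boto3

s3 = boto3.client("s3")

def list_new_files(bucket: str, prefix: str) -> list[str]:
    # Return object keys under a prefix, e.g. to decide which partitions to process.
    keys = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            keys.append(obj["Key"])
    return keys

if __name__ == "__main__":
    for key in list_new_files("example-data-bucket", "raw/orders/2026/01/"):
        print(key)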






