

Centraprise
Sr. Data Engineer (PySpark & Python + AI Tools Exp.) - (Only W2 or 1099)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer in Charlotte, NC (Hybrid) on a 12+ month contract. It requires 6+ years of data engineering experience; proficiency in PySpark, Python, SQL, and AI tools; and AWS cloud knowledge. Pay rate: TBD.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
March 18, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
1099 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Agile #AWS (Amazon Web Services) #Data Quality #Redshift #Scala #AI (Artificial Intelligence) #Code Reviews #Programming #S3 (Amazon Simple Storage Service) #Cloud #Complex Queries #Distributed Computing #PySpark #SQL Queries #Version Control #Data Transformations #Data Extraction #Python #ETL (Extract, Transform, Load) #Data Science #SQL (Structured Query Language) #Spark (Apache Spark) #Lambda (AWS Lambda) #Git #Data Engineering #Data Pipeline
Role description
Sr. Data Engineer (PySpark & Python + AI Tools Exp.) - (Only W2 or 1099)
Charlotte, NC (Hybrid)
12+ Months Contract
Job Description:
We are seeking a Senior Data Engineer with hands-on coding experience and a strong background in Python, PySpark, and object-oriented programming.
The ideal candidate will be responsible for designing, developing, and implementing new features for our existing framework using PySpark and Python.
This position requires a deep understanding of data transformation and the ability to create standalone scripts from given business logic. Exposure to AI Tools and experience building AI applications will be an advantage.
Key Responsibilities:
• Design, develop, and optimize large-scale data pipelines using PySpark and Python (see the pipeline sketch after this list).
• Implement and adhere to best practices in object-oriented programming to build reusable, maintainable code.
• Write advanced SQL queries for data extraction, transformation, and loading (ETL).
• Collaborate closely with data scientists, analysts, and stakeholders to gather requirements and translate them into technical solutions.
• Troubleshoot data-related issues and resolve them in a timely and accurate manner.
• Leverage AWS cloud services (e.g., S3, EMR, Lambda, Glue) to build and manage cloud-native data workflows (preferred).
• Participate in code reviews, data quality checks, and performance tuning of data jobs.
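To ground the pipeline responsibility above, here is a minimal sketch of the kind of reusable, object-oriented PySpark transformation this role describes. The bucket paths, column names, and the DeduplicateAndCast class itself are hypothetical placeholders, not details of the actual framework.

from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

class DeduplicateAndCast:
    """Reusable transformation: drop duplicate rows, then normalize types."""

    def __init__(self, key_cols):
        self.key_cols = key_cols

    def apply(self, df: DataFrame) -> DataFrame:
        return (
            df.dropDuplicates(self.key_cols)
              .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
              .withColumn("event_date", F.to_date("event_ts"))
        )

if __name__ == "__main__":
    spark = SparkSession.builder.appName("orders-pipeline").getOrCreate()
    # Hypothetical S3 locations for raw input and curated output.
    raw = spark.read.parquet("s3://example-bucket/raw/orders/")
    cleaned = DeduplicateAndCast(key_cols=["order_id"]).apply(raw)
    (cleaned.write.mode("overwrite")
            .partitionBy("event_date")
            .parquet("s3://example-bucket/curated/orders/"))

Composing small transformation classes like this keeps business logic unit-testable and reusable across pipelines, which is the usual payoff of the OOP emphasis in this posting.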
Required Skills & Qualifications:
• 6+ years of relevant experience in a data engineering or backend development role.
• Strong hands-on experience with PySpark and Python, especially in designing and implementing scalable data transformations.
• Solid understanding of Object-Oriented Programming (OOP) principles and design patterns.
• Proficient in SQL, with the ability to write complex queries and optimize performance (see the window-function sketch after this list).
• Strong problem-solving skills and the ability to troubleshoot complex data issues independently.
• Excellent communication and collaboration skills.
• Hands-on experience with AI Tools.
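As an illustration of the SQL expectation above, a common "complex query" in a PySpark stack is a window-function deduplication run through Spark SQL. The table and column names below are purely hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-example").getOrCreate()

# Register a hypothetical dataset as a temp view so it can be queried with SQL.
spark.read.parquet("s3://example-bucket/curated/orders/") \
     .createOrReplaceTempView("orders")

# Keep only the most recent event per order using ROW_NUMBER().
latest = spark.sql("""
    SELECT order_id, status, amount, event_ts
    FROM (
        SELECT *,
               ROW_NUMBER() OVER (
                   PARTITION BY order_id
                   ORDER BY event_ts DESC
               ) AS rn
        FROM orders
    ) t
    WHERE rn = 1
""")
latest.show(truncate=False)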
Preferred Qualifications (Nice to Have):
• Experience working with the AWS cloud ecosystem (S3, Glue, EMR, Redshift, Lambda, etc.); see the Lambda/Glue sketch after this list.
• Exposure to data warehousing concepts, distributed computing, and performance tuning.
• Familiarity with version control systems (e.g., Git), CI/CD pipelines, and Agile methodologies.
• Exposure to AI Tools and hands-on experience building AI applications.
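As one concrete example of the preferred AWS exposure, the sketch below shows an AWS Lambda handler that starts a Glue job whenever a new object lands in S3. The Glue job name and the argument it receives are assumptions for illustration only.

import boto3

glue = boto3.client("glue")

def handler(event, context):
    # S3 put-event notifications carry the bucket and object key per record.
    for record in event.get("Records", []):
        key = record["s3"]["object"]["key"]
        glue.start_job_run(
            JobName="example-curation-job",    # hypothetical Glue job
            Arguments={"--input_key": key},    # forwarded to the job script
        )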