

Centraprise
Big Data Developer - Contract on W2 Only
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer in Charlotte, NC, on a W2 contract for 6-24 months. Key skills include Hadoop, PySpark, Kafka, and REST API development. Experience with cloud platforms and AI architectures is preferred.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 8, 2025
Duration: More than 6 months
Location: On-site
Contract: W2 Contractor
Security: Unknown
Location detailed: Charlotte, NC
Skills detailed: #Batch #REST (Representational State Transfer) #Big Data #AWS (Amazon Web Services) #Cloudera #API (Application Programming Interface) #Dremio #Hadoop #Cloud #Spark (Apache Spark) #Python #Data Engineering #PySpark #Azure #Kafka (Apache Kafka) #GCP (Google Cloud Platform) #AI (Artificial Intelligence) #Scala #REST API
Role description
Position: Big Data Developer
Location: Charlotte, NC (Onsite: 3 days a week)
Contract: 6-24 months to perm
Contract on W2 Only
Must have:
Hadoop, PySpark, and Kafka
Required Skills & Qualifications:
• Experience in Big Data Engineering.
• Strong hands-on experience with PySpark or Scala.
• Deep expertise in on-prem Hadoop distributions (Cloudera, Hortonworks, MapR) or cloud-based big data platforms.
• Proficiency in Kafka batch processing, Hive, and Dremio (see the PySpark batch-read sketch after this list).
• Solid understanding of REST API development using Python frameworks (a minimal endpoint sketch also follows below).
• Familiarity with cloud platforms (GCP, AWS, or Azure).
• Experience or exposure to AI and RAG (retrieval-augmented generation) architectures is a plus.
• Excellent problem-solving, communication, and collaboration skills.
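
The must-have list pairs PySpark with Kafka, and the skill tags include #Batch, so a short illustration of a batch (non-streaming) Kafka read in PySpark may help candidates gauge the work. This is a minimal sketch, not the client's actual pipeline: it assumes Spark 3.x with the spark-sql-kafka connector on the classpath, and the broker address and topic name are placeholders.

```python
# Minimal sketch: batch (non-streaming) read of a Kafka topic with PySpark.
# Assumes Spark 3.x launched with the spark-sql-kafka connector, e.g.:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:3.5.0 kafka_batch.py
# The broker address and topic name below are placeholders, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-batch-example").getOrCreate()

df = (
    spark.read.format("kafka")                            # batch source (read, not readStream)
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                        # placeholder topic
    .option("startingOffsets", "earliest")                # bounded offset range makes
    .option("endingOffsets", "latest")                    # this a finite batch DataFrame
    .load()
)

# Kafka delivers key/value as binary; cast to strings before parsing further.
messages = df.select(
    col("key").cast("string").alias("key"),
    col("value").cast("string").alias("value"),
    col("topic"), col("partition"), col("offset"),
)
messages.show(truncate=False)

spark.stop()
```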
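The requirements also call out REST API development with Python frameworks without naming one. Below is a minimal endpoint sketch using Flask, which is an assumption on our part (FastAPI or Django REST Framework would fit the requirement equally well); the /jobs resource and in-memory store are purely illustrative.

```python
# Minimal sketch of a REST API in Flask; the /jobs resource is illustrative,
# not part of the posting. Requires: pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory store standing in for a real data backend.
JOBS = {1: {"title": "Big Data Developer", "location": "Charlotte, NC"}}

@app.get("/jobs/<int:job_id>")
def get_job(job_id):
    job = JOBS.get(job_id)
    if job is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(job)

@app.post("/jobs")
def create_job():
    payload = request.get_json(force=True)  # expects a JSON request body
    job_id = max(JOBS) + 1
    JOBS[job_id] = payload
    return jsonify({"id": job_id}), 201

if __name__ == "__main__":
    app.run(debug=True)  # dev server on http://localhost:5000
```

Run it with `python app.py` and exercise it with, for example, `curl http://localhost:5000/jobs/1`.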