

DATAEXL INFORMATION LLC
Big Data Developer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer in Charlotte, NC, with a contract length of 6-24 months. Compensation is on a W2 basis. Key skills include Hadoop, PySpark, Kafka, and REST API development. On-site work is required three days a week.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
October 11, 2025
Duration
More than 6 months
-
Location
On-site
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Charlotte, NC
-
Skills detailed
#GCP (Google Cloud Platform) #PySpark #Cloudera #Big Data #AWS (Amazon Web Services) #API (Application Programming Interface) #Cloud #Data Engineering #Azure #AI (Artificial Intelligence) #Dremio #Security #Python #FastAPI #Django #Scala #Spark (Apache Spark) #Batch #Data Pipeline #Data Ingestion #Hadoop #Kafka (Apache Kafka) #REST API #REST (Representational State Transfer) #ETL (Extract, Transform, Load) #Flask #Deployment #Data Science #ML (Machine Learning)
Role description
No C2C
No 1099
No H1B and OPT
Position title: Big Data Developer (W2 only)
Location: Charlotte, NC (3 days onsite per week)
Contract: 6-24 months, contract-to-perm
Mode of interview: Face-to-face
Must have: Hadoop, PySpark, and Kafka
Key Responsibilities:
• Design and implement scalable data ingestion and transformation pipelines using PySpark or Scala, Hadoop, Hive, and Dremio (an illustrative PySpark sketch follows this list).
• Build and manage Kafka batch pipelines for reliable data streaming and integration (see the Kafka batch-read sketch after this list).
• Work with on-prem Hadoop ecosystems (Cloudera, Hortonworks, MapR) or cloud-native big data platforms.
• Develop and maintain RESTful APIs using Python (FastAPI, Flask, or Django) to expose data and services (a minimal FastAPI sketch follows this list).
• Collaborate with data scientists, ML engineers, and platform teams to ensure seamless data flow and system performance.
• Monitor, troubleshoot, and optimize production data pipelines and services.
• Ensure security, scalability, and reliability across all data engineering components.
• (Optional but valuable) Contribute to the design and deployment of AI-driven RAG systems for enterprise use cases.
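
For illustration, here is a minimal sketch of the kind of PySpark batch ingestion/transformation job described above. All paths, column names, and table names are assumptions for the example, not details taken from this posting.

from pyspark.sql import SparkSession, functions as F

# Minimal PySpark batch ingestion/transformation sketch (assumed paths and names).
spark = (
    SparkSession.builder
    .appName("orders_ingest")      # hypothetical job name
    .enableHiveSupport()           # allow writing to a Hive-managed table
    .getOrCreate()
)

# Read raw landing-zone data (assumed to be Parquet on HDFS).
raw = spark.read.parquet("hdfs:///data/landing/orders/")

# Basic cleansing and enrichment (assumed columns order_id and amount).
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
       .withColumn("ingest_date", F.current_date())
)

# Persist to a partitioned Hive table for downstream tools such as Dremio.
(
    clean.write
         .mode("overwrite")
         .partitionBy("ingest_date")
         .saveAsTable("analytics.orders_clean")
)

spark.stop()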
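
The Kafka responsibility calls out batch pipelines specifically; one common way to do a bounded (batch) read from Kafka is Spark's kafka source with explicit starting and ending offsets. The broker address, topic name, message schema, and sink path below are assumptions.

from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka_batch_ingest").getOrCreate()

# Assumed JSON message schema.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("payload", StringType()),
    StructField("amount", DoubleType()),
])

# Bounded (batch) read: consume whatever is currently in the topic, then stop.
# Requires the spark-sql-kafka connector on the classpath.
events = (
    spark.read.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")  # assumed broker
         .option("subscribe", "orders.events")                # assumed topic
         .option("startingOffsets", "earliest")
         .option("endingOffsets", "latest")
         .load()
)

# Kafka values arrive as bytes; decode and parse the JSON payload.
parsed = (
    events.select(F.col("value").cast("string").alias("json"))
          .select(F.from_json("json", event_schema).alias("e"))
          .select("e.*")
)

parsed.write.mode("append").parquet("hdfs:///data/staged/orders_events/")  # assumed sink
spark.stop()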
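
Finally, a minimal FastAPI sketch of the REST layer that exposes pipeline output. The endpoint, model fields, and in-memory store are placeholders; a real implementation would presumably query Hive or Dremio instead.

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-service")   # hypothetical service name

class Order(BaseModel):
    order_id: str
    amount: float

# Stand-in for a real lookup against Hive/Dremio; replace with actual query logic.
_FAKE_STORE = {"A-1": Order(order_id="A-1", amount=42.0)}

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    order = _FAKE_STORE.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order

# Run locally with: uvicorn app:app --reload   (assuming this file is saved as app.py)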
Required Skills & Qualifications:
• Proven experience in Big Data Engineering.
• Strong hands-on experience with PySpark or Scala.
• Deep expertise in on-prem Hadoop distributions (Cloudera, Hortonworks, MapR) or cloud-based big data platforms.
• Proficiency in Kafka batch processing, Hive, and Dremio.
• Solid understanding of REST API development using Python frameworks.
• Familiarity with cloud platforms (GCP, AWS, or Azure).
• Experience or exposure to AI and RAG architectures is a plus.
• Excellent problem-solving, communication, and collaboration skills.






