

DATAEXL INFORMATION LLC
Big Data Developer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer in Charlotte, NC, on a 6-to-24-month W2-only contract. Key skills include Hadoop, PySpark, and Kafka; experience in Big Data Engineering and REST API development is required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 9, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Scala #Data Engineering #Big Data #Spark (Apache Spark) #Deployment #REST (Representational State Transfer) #Azure #API (Application Programming Interface) #REST API #Django #Batch #GCP (Google Cloud Platform) #FastAPI #AI (Artificial Intelligence) #Data Science #Python #PySpark #Cloud #Dremio #Security #Kafka (Apache Kafka) #Data Pipeline #Data Ingestion #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #ML (Machine Learning) #Flask #Cloudera #Hadoop
Role description
No C2C
No 1099
No H1B or OPT
Position title: Big Data Developer (W2 only)
Location: Charlotte, NC (3 days onsite per week)
Contract: 6-24 months, contract-to-perm
Mode of interview: Face-to-face
Must have: Hadoop, PySpark, and Kafka
Key Responsibilities:
• Design and implement scalable data ingestion and transformation pipelines using PySpark or Scala, Hadoop, Hive, and Dremio.
• Build and manage Kafka batch pipelines for reliable data streaming and integration (a minimal sketch follows this list).
• Work with on-prem Hadoop ecosystems (Cloudera, Hortonworks, MapR) or cloud-native big data platforms.
• Develop and maintain RESTful APIs using Python (FastAPI, Flask, or Django) to expose data and services.
• Collaborate with data scientists, ML engineers, and platform teams to ensure seamless data flow and system performance.
• Monitor, troubleshoot, and optimize production data pipelines and services.
• Ensure security, scalability, and reliability across all data engineering components.
• (Optional but valuable) Contribute to the design and deployment of AI-driven RAG systems for enterprise use cases.
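For illustration only, not part of the posting: a minimal PySpark sketch of the batch Kafka-to-Hive pattern the first two bullets describe. The broker address, topic, message schema, and target table are placeholder assumptions, and a batch Kafka read also requires the spark-sql-kafka package on the cluster.

# Hypothetical sketch: batch-read a Kafka topic with PySpark and land it in Hive.
# All names (broker, topic, schema, table) are placeholders, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = (
    SparkSession.builder
    .appName("kafka-batch-ingest")   # illustrative app name
    .enableHiveSupport()             # assumes a Hive-enabled Hadoop cluster
    .getOrCreate()
)

# spark.read (not readStream) pulls a bounded slice of the topic, which is
# the "Kafka batch pipeline" pattern named above.
raw = (
    spark.read.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .option("startingOffsets", "earliest")
    .option("endingOffsets", "latest")
    .load()
)

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

# Kafka values arrive as bytes: cast to string, parse the JSON, flatten it.
orders = (
    raw.select(from_json(col("value").cast("string"), schema).alias("o"))
       .select("o.*")
       .filter(col("amount") > 0)
)

# Append into a Hive table for downstream consumers (table name is illustrative).
orders.write.mode("append").saveAsTable("staging.orders")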
Required Skills & Qualifications:
• Experience in Big Data Engineering.
• Strong hands-on experience with PySpark or Scala.
• Deep expertise in on-prem Hadoop distributions (Cloudera, Hortonworks, MapR) or cloud-based big data platforms.
• Proficiency in Kafka batch processing, Hive, and Dremio.
• Solid understanding of REST API development using Python frameworks (see the sketch after this list).
• Familiarity with cloud platforms (GCP, AWS, or Azure).
• Experience or exposure to AI and RAG architectures is a plus.
• Excellent problem-solving, communication, and collaboration skills.
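For illustration only, not part of the posting: a minimal FastAPI sketch of the REST-over-data pattern the API bullet refers to. The endpoint path, model, and the in-memory store standing in for a Hive/Dremio lookup are all placeholder assumptions.

# Hypothetical sketch: a small FastAPI service exposing data over REST.
# The in-memory dict stands in for a real Hive/Dremio/database query.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="data-service")  # illustrative service name

class Order(BaseModel):
    order_id: str
    amount: float

_STORE = {"42": Order(order_id="42", amount=19.99)}  # placeholder data

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    # Return one record, or a 404 when the id is unknown.
    order = _STORE.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order

# Run locally with: uvicorn app:app --reload   (assumes the file is app.py
# and uvicorn is installed)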