

Big Data Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer on contract in Reston, VA. Requires 7+ years in Java/Python/Scala, 3+ years in Hadoop technologies, 2+ years in AWS, and experience with Kafka. Strong SQL and data analysis skills are essential.
Country
United States
Currency
$ USD
-
Day rate
-
Date discovered
August 15, 2025
Project duration
Unknown
-
Location type
Hybrid
-
Contract type
Unknown
-
Security clearance
Unknown
-
Location detailed
Reston, VA
-
Skills detailed
#AWS (Amazon Web Services) #Big Data #Java #Python #S3 (Amazon Simple Storage Service) #Data Integration #Hadoop #Programming #SQL (Structured Query Language) #Data Analysis #Data Ingestion #Spark (Apache Spark) #Scala #Sqoop (Apache Sqoop) #Lambda (AWS Lambda) #Redshift #Kafka (Apache Kafka)
Role description
Title: Big Data Developer
Location: Reston, VA (hybrid; in office once a week)
Term: Contract
Job Description
• 7+ years of strong programming background in Java, Python, or Scala
• At least 3 years of experience on data integration projects using Hadoop MapReduce, Sqoop, Oozie, Hive, Spark, and other related Big Data technologies
• At least 2 years of experience on AWS, preferably leveraging services such as Lambda, S3, Redshift, and Glue
• Working experience building Kafka-based data ingestion/retrieval programs
• Experience tuning Hadoop, Spark, and Hive parameters for optimal performance
• Strong SQL query writing and data analysis skills