

Java AWS Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Java AWS Developer; the contract length and pay rate are unspecified. It requires 7+ years with Java/Python/Scala, 3+ years in Data Integration with Hadoop, and 2+ years with AWS services.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
September 3, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Reston, VA
Skills detailed
#Automated Testing #Kafka (Apache Kafka) #AWS (Amazon Web Services) #Java #Spark (Apache Spark) #Hadoop #Data Analysis #Cloudera #Shell Scripting #Scala #Data Integration #Programming #S3 (Amazon Simple Storage Service) #Data Ingestion #Lambda (AWS Lambda) #Scripting #SQL (Structured Query Language) #Data Engineering #Big Data #Redshift #Cloud #Python #Sqoop (Apache Sqoop)
Role description
Job Description:
• Hybrid: in person at the Reston office roughly once a week.
• This position requires strong Java programming skills.
• 7+ years of strong programming background in Java/Python/Scala
• 3+ years of experience on Data Integration projects using Hadoop MapReduce, Sqoop, Oozie, Hive, Spark, and related Big Data technologies
• 2+ years of experience with AWS, preferably leveraging services such as Lambda, S3, Redshift, and Glue (a minimal S3 sketch follows the nice-to-have list below)
• Working experience building Kafka-based data ingestion/retrieval programs (see the producer sketch after this list)
• Experience tuning Hadoop/Spark/Hive parameters for optimal performance
• Strong SQL query writing and data analysis skills
• Solid shell scripting experience
• Rigor in high code quality, automated testing, and other engineering best practices; ability to write reusable code components
• Overall, this is a Big Data engineer role requiring solid AWS experience.
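For context on the Kafka requirement above, a minimal sketch of a Kafka-based ingestion producer in Java, assuming the standard kafka-clients library; the broker address, topic name, and sample payload are hypothetical placeholders, not details from this posting.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class IngestProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder broker address; a real job would read this from configuration.
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources closes the producer, which also flushes buffered records.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // In a real ingestion program, records would come from an upstream source (files, DB, API).
            producer.send(new ProducerRecord<>("ingest-topic", "record-key", "{\"status\":\"ok\"}"));
        }
    }
}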
Skills nice to have:
• Healthcare experience
• Cloudera Developer certification
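For the AWS requirement, a comparable minimal sketch using the AWS SDK for Java v2 to land an object in S3; the region, bucket name, and object key are hypothetical, and credentials are assumed to come from the SDK's default provider chain.

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class S3UploadSketch {
    public static void main(String[] args) {
        // Placeholder region; real values would come from deployment config.
        try (S3Client s3 = S3Client.builder().region(Region.US_EAST_1).build()) {
            PutObjectRequest request = PutObjectRequest.builder()
                    .bucket("example-ingest-bucket")   // hypothetical bucket name
                    .key("landing/records.json")       // hypothetical object key
                    .build();
            // Writes a small JSON payload; larger ingestion jobs would stream or multipart-upload.
            s3.putObject(request, RequestBody.fromString("{\"status\":\"ok\"}"));
        }
    }
}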