Hadoop Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Hadoop Developer with 8+ years of data engineering experience, focusing on Apache Spark, PySpark, and AWS technologies. Key skills include SQL, CI/CD practices, and cloud data warehouses. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
September 20, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Maryland, United States
-
🧠 - Skills detailed
#AWS Glue #Cloud #Athena #GIT #SQL (Structured Query Language) #S3 (Amazon Simple Storage Service) #Hadoop #Spark (Apache Spark) #Data Modeling #Programming #Data Warehouse #PySpark #AWS (Amazon Web Services) #Data Engineering #Apache Spark #Terraform #Snowflake #IAM (Identity and Access Management) #Redshift #AWS S3 (Amazon Simple Storage Service) #Data Governance #Airflow
Role description
General Experience: The proposed candidate must have at least eight (8) years of programming experience in software development or maintenance.
1. Eight-plus (8+) years of experience in data engineering, with strong expertise in Apache Spark and PySpark.
2. Hands-on experience with AWS Glue (v2 or v3), AWS S3, AWS Lake Formation, and Athena.
3. Strong knowledge of SQL, partitioning strategies, and data modeling for analytical systems.
4. Experience with CI/CD practices, Git, and infrastructure-as-code (Terraform, CloudFormation).
5. Experience with Redshift, Snowflake, or similar cloud data warehouses.
6. Familiarity with Airflow, Step Functions, or other orchestration tools.
7. Knowledge of data governance frameworks, IAM permissions, and Lake Formation fine-grained access controls.
8. Exposure to Glue Studio, Glue DataBrew, or AWS DataZone.
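As an illustration of the "partitioning strategies" skill the requirements call for, here is a minimal plain-Python sketch (bucket and table names are hypothetical, chosen for illustration only) of building the Hive-style `year=/month=/day=` S3 key layout that AWS Glue crawlers and Athena recognize as table partitions:

```python
from datetime import date

def partition_path(bucket: str, table: str, d: date) -> str:
    """Build a Hive-style S3 partition path (year=/month=/day=),
    the key layout Glue and Athena treat as table partitions."""
    return (
        f"s3://{bucket}/{table}/"
        f"year={d.year}/month={d.month:02d}/day={d.day:02d}/"
    )

# Hypothetical bucket and table names, for illustration only.
print(partition_path("analytics-lake", "events", date(2025, 9, 20)))
# s3://analytics-lake/events/year=2025/month=09/day=20/
```

Writing data under keys like this lets Athena prune partitions in a `WHERE year = ... AND month = ...` filter instead of scanning the whole table.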