Big Data Developer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer with a contract length of "unknown," offering a day rate of $376 USD. Key skills include 4+ years in Big Data Engineering; proficiency in Hadoop, Hive, PySpark, Python, and AWS S3; and data modeling experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
376
-
🗓️ - Date discovered
September 10, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Chandler, AZ
-
🧠 - Skills detailed
#Shell Scripting #Unix #AWS S3 (Amazon Simple Storage Service) #Dremio #AWS (Amazon Web Services) #Python #Data Pipeline #Automation #PySpark #Data Modeling #S3 (Amazon Simple Storage Service) #Cloud #Storage #Database Design #Hadoop #MySQL #Scripting #GCP (Google Cloud Platform) #Spark (Apache Spark) #Scala #Big Data #Data Engineering
Role description
Job Description: Client is hiring a Big Data Engineer to design, build, and maintain scalable data pipelines. The role involves data modeling, pipeline automation, and integration with cloud and reporting tools.
Must Have Skills:
• 4+ years in Big Data Engineering
• Hadoop, Hive, PySpark, Python
• AWS S3 (object storage, integration)
• Data modeling & database design (MySQL or similar)
• Autosys job scheduling
• Unix/Shell scripting, CI/CD pipelines
• PowerBI, Dremio
Nice to Have:
• GCP cloud data engineering exposure
• Financial services domain experience
Soft Skills:
• Proactive and accountable
• Strong problem-solving and troubleshooting skills
• Clear communication of technical work