

Big Data Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer; the contract length and pay rate are not specified. Key skills include 4+ years in Big Data Engineering; proficiency in Hadoop, PySpark, Python, and AWS S3; and data modeling experience.
Country
United States
Currency
$ USD
Day rate
376
Date discovered
September 10, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Chandler, AZ
Skills detailed
#Shell Scripting #Unix #AWS S3 (Amazon Simple Storage Service) #Dremio #AWS (Amazon Web Services) #Python #Data Pipeline #Automation #PySpark #Data Modeling #S3 (Amazon Simple Storage Service) #Cloud #Storage #Database Design #Hadoop #MySQL #Scripting #GCP (Google Cloud Platform) #Spark (Apache Spark) #Scala #Big Data #Data Engineering
Role description
Job Description:
Client is hiring a Big Data Engineer to design, build, and maintain scalable data pipelines. The role involves data modeling, pipeline automation, and integration with cloud and reporting tools.
Must Have Skills:
• 4+ years in Big Data Engineering
• Hadoop, Hive, PySpark, Python
• AWS S3 (object storage, integration)
• Data modeling & database design (MySQL or similar)
• Autosys job scheduling
• Unix/Shell scripting, CI/CD pipelines
• Power BI, Dremio
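For context on the stack listed above, here is a minimal, illustrative PySpark sketch of the kind of pipeline the description refers to: reading a Hive table from the Hadoop warehouse, applying a simple aggregation as a modeling step, and landing the result in AWS S3 as Parquet. All table, column, and bucket names below are hypothetical placeholders, not the client's actual data model.

# Illustrative sketch only; table, column, and bucket names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily_orders_pipeline")  # hypothetical job name
    .enableHiveSupport()               # read from the Hadoop/Hive warehouse
    .getOrCreate()
)

# Pull one daily partition of a Hive table
orders = spark.table("warehouse.orders").where(F.col("ds") == "2025-09-10")

# Simple modeling step: one row per customer per day
daily = (
    orders.groupBy("customer_id", "ds")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.count("*").alias("order_count"),
    )
)

# Land the result in S3 as partitioned Parquet (bucket is a placeholder)
daily.write.mode("overwrite").partitionBy("ds").parquet(
    "s3a://example-bucket/curated/daily_orders/"
)

spark.stop()

In practice a job like this would be parameterized and scheduled (for example via Autosys, per the listing) and promoted through a CI/CD pipeline rather than run with hard-coded dates and paths.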
Nice to Have:
• GCP cloud data engineering exposure
• Financial services domain experience
Soft Skills:
• Proactive and accountable
• Strong problem-solving and troubleshooting skills
• Clear communication of technical work