

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unspecified. Key skills include AWS S3, Hadoop, Hive, PySpark, Python, and Autosys. Financial services experience is a plus.
Country: United States
Currency: $ USD
Day rate: 424
Date discovered: September 12, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Chandler, AZ
Skills detailed:
#Unix #"ETL (Extract #Transform #Load)" #Data Pipeline #Scripting #Shell Scripting #Hadoop #MySQL #Database Design #Storage #AWS S3 (Amazon Simple Storage Service) #Python #Security #AWS (Amazon Web Services) #GCP (Google Cloud Platform) #S3 (Amazon Simple Storage Service) #Cloud #Data Engineering #Spark (Apache Spark) #PySpark #Dremio
Role description
JOB DESCRIPTION:
Minimum 4 years of hands-on experience with:
• Building data pipelines using the big-data stack (Hadoop, Hive, PySpark, Python); see the PySpark sketch after this list
• Amazon AWS S3 – object storage, security, and data-service integration with S3; see the boto3 sketch after this list
• Data modeling and database design
• Job scheduler – Autosys
• Power BI, Dremio
• Unix/shell scripting, CI/CD pipelines
• Exposure to GCP cloud data engineering is a plus
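For illustration, here is a minimal sketch of the kind of Hive-to-S3 pipeline the stack above implies. It assumes a Spark cluster with Hive support and s3a connectivity already configured; the database, table, bucket, and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("example-etl")
    .enableHiveSupport()  # read source tables from the Hive metastore
    .getOrCreate()
)

# Extract: read a raw Hive table (hypothetical name)
raw = spark.table("raw_db.transactions")

# Transform: apply business logic -- filter, derive a date, aggregate
daily = (
    raw.filter(F.col("status") == "SETTLED")
       .withColumn("trade_date", F.to_date("settle_ts"))
       .groupBy("trade_date", "account_id")
       .agg(F.sum("amount").alias("daily_total"))
)

# Load: write partitioned Parquet to S3 (hypothetical bucket)
daily.write.mode("overwrite").partitionBy("trade_date").parquet(
    "s3a://example-bucket/curated/daily_totals/"
)

spark.stop()
```

Partitioning the output by date keeps downstream reads (and any Autosys-scheduled reruns of a single day) cheap, since Spark can prune to the affected partitions.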
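And a minimal sketch of the S3 object-storage and security integration, assuming boto3 with AWS credentials configured; the bucket and key names are hypothetical.

```python
import boto3

s3 = boto3.client("s3")

# Upload with server-side encryption (SSE-KMS) -- one common S3 security control
s3.put_object(
    Bucket="example-bucket",
    Key="curated/daily_totals/manifest.json",
    Body=b'{"rows": 1024}',
    ServerSideEncryption="aws:kms",
)

# Verify the object landed and inspect its encryption metadata
head = s3.head_object(
    Bucket="example-bucket",
    Key="curated/daily_totals/manifest.json",
)
print(head["ServerSideEncryption"])  # "aws:kms"
```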
Manager Notes:
- The contractors need to be proactive; they can't wait to be told what to do
- Must be accountable, in addition to having the technical skills
- The tech stack mentioned above is the set of technologies used to build the data pipelines
- They need to model and design the data, build pipelines, apply logic to transform the data, and troubleshoot
- They should have a strong understanding of Autosys and experience implementing it
- Ability to automate using Spark, Python, and Hadoop/Hive
- Should have a fundamental background in database design (MySQL or any standard database)
- Exposure to cloud data engineering is a big plus, but not required
- Financial services experience is a plus but not required; having domain knowledge is helpful
Technical Assessment
- We need a clear understanding of the candidate's technical work experience; they need to be able to describe the work they have done
- Overall problem solving: given a problem, how efficiently does their thought process drive toward a solution?