Data Engineer - AWS/Hadoop/Python - W2 Only & Onsite Only

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 12-month contract in Chandler, AZ, on a W2 basis only. Key skills include AWS, Hadoop, Python, data pipeline construction, and database design. Financial services experience is a plus. Onsite work is required three days a week.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 7, 2025
πŸ•’ - Project duration
More than 6 months
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
W2 Contractor
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Chandler, AZ
-
🧠 - Skills detailed
#Dremio #GCP (Google Cloud Platform) #Spark (Apache Spark) #Shell Scripting #Cloud #Storage #S3 (Amazon Simple Storage Service) #Database Design #AWS S3 (Amazon Simple Storage Service) #Python #AWS (Amazon Web Services) #Security #MySQL #Data Pipeline #Data Engineering #PySpark #ETL (Extract, Transform, Load) #Hadoop #Scripting #Unix
Role description
Data Engineer - AWS/Hadoop/Python
Location: Chandler, AZ (3 days onsite)
Duration: 12 months+
Position is STRICTLY W2 ONLY; no sponsorship is provided.
Minimum 4 years of hands-on experience with:
• Building data pipelines using a big-data stack (Hadoop, Hive, PySpark, Python)
• Amazon AWS S3 object storage, security, and data service integration with S3
• Data modeling and database design
• Autosys job scheduler
• PowerBI, Dremio
• Unix/shell scripting, CI/CD pipelines
• Exposure to Google Cloud Platform data engineering is a plus
Manager Notes:
• Contractors need to be proactive; they can't wait to be told what to do
• Must be accountable, in addition to having the technical skills
• The tech stack listed comprises the technologies being used to build data pipelines
• They need to model and design the data, build pipelines, apply logic to transform the data, and troubleshoot
• They should have a strong understanding of Autosys and experience implementing it
• Ability to automate using Spark, Python, and Hadoop/Hive
• Should have a fundamental background in database design (MySQL or any standard database)
• Exposure to cloud data engineering is a big plus, but not required
• Financial services experience is a plus but not required; domain knowledge is helpful
Technical Assessment:
• We need a clear understanding of the candidate's technical work experience; they must be able to describe the work they have done
• Overall problem solving: given a problem, how efficiently does their thought process drive toward a solution?
Location: Chandler, AZ is the only option; 3 days a week in office