

BIG DATA ENGINEER (contract)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Engineer on a 24-month contract in Chandler, AZ, offering a pay rate of "TBD". Key skills include building data pipelines using Hadoop, PySpark, and AWS S3. Financial services experience preferred.
Country
United States
Currency
Unknown
-
Day rate
-
Date discovered
August 29, 2025
Project duration
More than 6 months
-
Location type
Hybrid
-
Contract type
W2 Contractor
-
Security clearance
Unknown
-
Location detailed
Chandler, AZ 85286
-
Skills detailed
#ETL (Extract, Transform, Load) #AWS S3 (Amazon Simple Storage Service) #Storage #Shell Scripting #Automation #Security #Hadoop #AWS (Amazon Web Services) #Compliance #MySQL #Unix #Spark (Apache Spark) #Dremio #Python #GCP (Google Cloud Platform) #Scripting #Data Pipeline #PySpark #Cloud #Database Design #Big Data #S3 (Amazon Simple Storage Service) #Data Engineering
Role description
Title: Big Data Engineer
Location: 2600 S Price Rd Chandler, AZ
Duration: 24 months
Work Engagement: W2
Work Schedule: 3 days in office/2 days remote
Benefits on offer for this contract position: Health Insurance, Life insurance, 401K and Voluntary Benefits
Summary:
In this contingent resource assignment, you may: Consult on or participate in moderately complex initiatives and deliverables within Software Engineering and contribute to large-scale planning related to Software Engineering deliverables. Review and analyze moderately complex Software Engineering challenges that require an in-depth evaluation of variable factors. Contribute to the resolution of moderately complex issues and consult with others to meet Software Engineering deliverables while leveraging solid understanding of the function, policies, procedures, and compliance requirements. Collaborate with client personnel in Software Engineering.
Responsibilities:
Utilize the tech stack below to build data pipelines
Model and design the data
Build data pipelines
Apply transformation logic to the data and troubleshoot issues
Implement and automate data processes
Qualifications:
Applicants must be authorized to work for ANY employer in the U.S. This position is not eligible for visa sponsorship.
Experience building data pipelines and automation using big-data stack (Hadoop, Hive, PySpark, Python)
Amazon AWS S3: object storage, security, data service integration with S3
Strong understanding and implementation of Autosys
Experience with PowerBI & Dremio
Experience with Unix/shell scripting and CI/CD pipelines
Fundamental background in database design (MySQL or any standard database)
Exposure to cloud data engineering (GCP or other platforms) (preferred)
Previous experience in financial services (preferred)
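To illustrate the pipeline work this role involves (extract, transform, load, with troubleshooting of bad records), here is a minimal sketch in plain Python. The field names and transformation rules are hypothetical examples, not taken from the job description; in the actual role this kind of logic would typically run in PySpark against Hadoop/Hive data and S3 storage.

```python
# Minimal ETL sketch. The record layout ("account", "amount") and the
# cleaning rule below are hypothetical illustrations of pipeline logic.

def extract(rows):
    """Parse raw comma-separated lines into dicts (extract step)."""
    return [dict(zip(("account", "amount"), line.split(","))) for line in rows]

def transform(records):
    """Apply business logic: cast amounts to float, skip malformed rows."""
    out = []
    for rec in records:
        try:
            rec["amount"] = float(rec["amount"])
        except ValueError:
            continue  # troubleshoot: drop rows whose amount fails to parse
        out.append(rec)
    return out

def load(records):
    """Stand-in for writing to a target store such as S3 or a Hive table."""
    return {rec["account"]: rec["amount"] for rec in records}

raw = ["A1,100.5", "A2,oops", "A3,42"]
result = load(transform(extract(raw)))
print(result)  # {'A1': 100.5, 'A3': 42.0}
```

In PySpark the same shape appears as DataFrame reads, column transformations, and writes; the stdlib version above just keeps the steps visible without requiring a Spark cluster.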