

BIG DATA ENGINEER (contract)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Engineer with a 24-month contract located in Chandler, AZ. Key skills include Hadoop, PySpark, AWS S3, and PowerBI. Financial services experience is preferred. Work schedule is hybrid: 3 days in-office, 2 remote.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 3, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Chandler, AZ
Skills detailed: #S3 (Amazon Simple Storage Service) #Scripting #Security #Cloud #Data Pipeline #AWS (Amazon Web Services) #GCP (Google Cloud Platform) #Python #Shell Scripting #Data Engineering #AWS S3 (Amazon Simple Storage Service) #Automation #Big Data #Spark (Apache Spark) #Dremio #ETL (Extract, Transform, Load) #Storage #Database Design #Hadoop #Unix #MySQL #PySpark #Compliance
Role description
Title: Big Data Engineer
Location: 2600 S Price Rd, Chandler, AZ
Duration: 24 months
Work Engagement: W2
Work Schedule: 3 days in office/2 days remote
Benefits on offer for this contract position: Health Insurance, Life insurance, 401K and Voluntary Benefits
Summary:
In this contingent resource assignment, you may:
• Consult on or participate in moderately complex initiatives and deliverables within Software Engineering, and contribute to large-scale planning related to Software Engineering deliverables.
• Review and analyze moderately complex Software Engineering challenges that require an in-depth evaluation of variable factors.
• Contribute to the resolution of moderately complex issues, consulting with others to meet Software Engineering deliverables while leveraging a solid understanding of the function, policies, procedures, and compliance requirements.
• Collaborate with client personnel in Software Engineering.
Responsibilities:
• Utilize the tech stack below to build data pipelines
• Model and design the data
• Build data pipelines
• Apply transformation logic to the data and troubleshoot issues
• Implement and automate data processes
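The build-transform-troubleshoot loop described above can be sketched in plain Python. This is an illustrative sketch only, not part of the posting: the role's actual stack is Hadoop/PySpark on AWS S3, and every name here (`clean_record`, `run_pipeline`, the field names) is hypothetical.

```python
def clean_record(record):
    """Transform step: normalize one raw record (trim/uppercase the
    account id, cast the amount to a number)."""
    return {
        "account": record["account"].strip().upper(),
        "amount": float(record["amount"]),
    }

def run_pipeline(raw_records):
    """Apply the transform to each record; malformed rows are skipped
    rather than failing the whole run (the troubleshooting hook)."""
    cleaned = []
    for rec in raw_records:
        try:
            cleaned.append(clean_record(rec))
        except (KeyError, ValueError):
            continue  # bad row: missing field or non-numeric amount
    return cleaned

# Hypothetical input: one good record, one malformed record.
raw = [
    {"account": " acc-1 ", "amount": "10.5"},
    {"account": "acc-2", "amount": "not-a-number"},
]
print(run_pipeline(raw))  # only the cleaned first record survives
```

In a real PySpark pipeline the same shape would appear as DataFrame transformations over S3-backed data, with a scheduler such as Autosys driving the runs.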
Qualifications:
• Applicants must be authorized to work for ANY employer in the U.S. This position is not eligible for visa sponsorship.
• Experience building data pipelines and automation using a big-data stack (Hadoop, Hive, PySpark, Python)
• Amazon AWS S3: object storage, security, and data service integration with S3
• Strong understanding and implementation experience with Autosys
• Experience with PowerBI and Dremio
• Experience with Unix/shell scripting and CI/CD pipelines
• Fundamental background in database design (MySQL or any standard database)
• Exposure to cloud data engineering (GCP or other platforms) (preferred)
• Previous experience in financial services (preferred)