

Senior Data Engineer – Big Data & Cloud
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer – Big Data & Cloud in Chandler, AZ (Hybrid). It is a 24-month contract at $55/hr W2. Requires 4+ years in data engineering, expertise in Hadoop, AWS S3, Python, and data visualization tools.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
440
🗓️ - Date discovered
August 12, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Chandler, AZ
🧠 - Skills detailed
#Deployment #Python #AWS S3 (Amazon Simple Storage Service) #Storage #Shell Scripting #Data Modeling #GCP (Google Cloud Platform) #Scripting #Unix #Spark (Apache Spark) #Automation #Databases #AWS (Amazon Web Services) #Data Engineering #Hadoop #Visualization #ETL (Extract, Transform, Load) #Scala #Big Data #Data Pipeline #Dremio #Database Design #Security #MySQL #PySpark #Cloud #S3 (Amazon Simple Storage Service)
Role description
Job Title: Senior Data Engineer – Big Data & Cloud
Location: Chandler, AZ (Hybrid – 3 days onsite)
Duration: 24-month contract
Pay: $55/hr W2 ONLY, NO C2C
Overview:
We are seeking a proactive and highly skilled Senior Data Engineer to join our team as part of a pilot program. This role is focused on building robust, scalable data pipelines using a modern big data tech stack, ensuring efficient data modeling, transformation, and integration. The ideal candidate will be self-driven, accountable, and able to troubleshoot complex data engineering challenges without waiting for direction.
Responsibilities:
• Design, model, and build data pipelines using Hadoop, Hive, PySpark, and Python (see the sketch after this list).
• Integrate and manage data within AWS S3, including security and data service integration.
• Design and implement database models (MySQL or similar relational databases).
• Develop and maintain job schedules using Autosys.
• Build data visualizations and reporting dashboards using Power BI and Dremio.
• Write and optimize Unix/Shell scripts.
• Support CI/CD pipeline development and deployment processes.
• Troubleshoot and resolve data pipeline and transformation issues.
• Collaborate cross-functionally to ensure data solutions align with business needs.
• Apply automation and optimization to data processes for improved efficiency.
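To give a flavor of the day-to-day pipeline work, below is a minimal PySpark sketch: read a Hive table, apply a simple aggregation, and write partitioned Parquet to S3. The table, column, and bucket names are hypothetical placeholders, not details of this role's actual environment.

# Minimal PySpark pipeline sketch: Hive table -> transformation -> Parquet on S3.
# All table, column, and bucket names below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-transactions-pipeline")   # hypothetical job name
    .enableHiveSupport()                      # needed to read managed Hive tables
    .getOrCreate()
)

# Read from a (hypothetical) Hive source table.
raw = spark.table("finance_db.transactions_raw")

# Example transformation: keep settled rows, aggregate per account per day.
daily = (
    raw.filter(F.col("status") == "SETTLED")
       .groupBy("account_id", F.to_date("settled_ts").alias("settle_date"))
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("txn_count"))
)

# Write partitioned Parquet to S3 (bucket and prefix are placeholders).
(
    daily.write
         .mode("overwrite")
         .partitionBy("settle_date")
         .parquet("s3a://example-data-lake/curated/daily_transactions/")
)

spark.stop()

In practice a job like this would be scheduled through Autosys and promoted through the team's CI/CD process; the sketch only covers the transformation step itself.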
Required Qualifications:
• Minimum 4 years of hands-on experience in data engineering.
• Strong expertise in big data tools (Hadoop, Hive, PySpark, Python).
• Solid understanding of AWS S3 object storage, security, and integrations (a brief example follows this list).
• Proficiency in database design and data modeling.
• Proven experience with Autosys job scheduling.
• Experience with data visualization tools such as Power BI and Dremio.
• Proficiency in Unix/Shell scripting and CI/CD pipelines.
• Strong problem-solving skills and ability to work independently with minimal supervision.
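As a small illustration of the S3 security and integration side, here is a hedged Python sketch using boto3: upload a curated file with server-side encryption enabled, then verify the object's encryption setting. The bucket name, object key, and local file name are hypothetical, and the exact encryption policy would depend on the team's standards.

# Sketch: upload a curated file to S3 with SSE-KMS and confirm the setting.
# Bucket, key, and local file names are illustrative placeholders.
import boto3

s3 = boto3.client("s3")

bucket = "example-data-lake"                              # hypothetical bucket
key = "curated/daily_transactions/part-0000.parquet"      # hypothetical key

# Upload with SSE-KMS so the object is encrypted at rest.
s3.upload_file(
    Filename="daily_transactions.parquet",                # assumed local file
    Bucket=bucket,
    Key=key,
    ExtraArgs={"ServerSideEncryption": "aws:kms"},
)

# Confirm the object reports the expected encryption setting.
head = s3.head_object(Bucket=bucket, Key=key)
assert head.get("ServerSideEncryption") == "aws:kms", "object is not KMS-encrypted"
print(f"Uploaded and verified: s3://{bucket}/{key}")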
Preferred Qualifications:
• Exposure to Google Cloud Platform (GCP) data engineering.
• Financial services industry experience or domain knowledge.
What We’re Looking For:
• A self-starter who proactively identifies problems and drives solutions.
• A team player who is accountable for deliverables and timelines.
• A strong communicator who can explain technical concepts to both technical and non-technical audiences.
• Someone who thrives in a fast-paced, evolving environment.