AWS Data Engineer (LOCAL CANDIDATES ONLY)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer on a 12+ month contract paying $81.50 per hour, located in Torrance, CA (hybrid). It requires 5+ years of experience with AWS Glue/EMR, Redshift, PySpark, and ETL processes. Local candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
648
πŸ—“οΈ - Date discovered
September 14, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
Torrance, CA
🧠 - Skills detailed
#Data Warehouse #S3 (Amazon Simple Storage Service) #Data Analysis #Compliance #AWS Glue #Spark (Apache Spark) #Lambda (AWS Lambda) #Datasets #Security #Data Mart #Apache Iceberg #Data Governance #Data Engineering #Cloud #Databases #Apache Spark #ETL (Extract, Transform, Load) #Documentation #Data Security #Snowflake #PySpark #Monitoring #BI (Business Intelligence) #Python #Database Design #Data Quality #Scala #Schema Design #AWS (Amazon Web Services) #Computer Science #Programming #RDS (Amazon Relational Database Service) #AWS EMR (Amazon Elastic MapReduce) #Agile #Data Integration #Data Lake #ELB (Elastic Load Balancing) #Data Pipeline #Redshift #Athena #Data Processing
Role description
FOR IMMEDIATE DETAILS about this position, please contact MICHELL CASEY at (949) 860-4715 or MCasey@calance.com

=======================================================
• • We will NOT accept 3rd Party (C2C) Contractors • •
=======================================================

JOB DETAILS:
Position: AWS Data Engineer
JOB REF#: 43802 - JIH5JP00003851
Duration: 12+ Months (On-Going Contract)
Location: HYBRID - Torrance, CA 90501
Pay Rate: $81.50 per hour (W2 Only)

• • We will not accept candidates willing to relocate • •
• • Must work ONSITE 4 days per week (HYBRID) - LOCAL CANDIDATES ONLY!! • •

The IT Data Integration Engineer / AWS Data Engineer is tasked with the design, development, and management of data integration processes to ensure seamless data flow and accessibility across the organization. This role is pivotal in integrating data from diverse sources, transforming it to meet business requirements, and loading it into target systems such as data warehouses or data lakes. The aim is to support the CX business in data-driven decision-making by providing high-quality, consistent, and accessible data.

RESPONSIBILITIES INCLUDE:

Develop and Maintain Data Integration Solutions:
• Design and implement data integration workflows using AWS Glue/EMR, Lambda, and Redshift
• Demonstrate proficiency in PySpark, Apache Spark, and Python for processing large datasets
• Ensure data is accurately and efficiently extracted, transformed, and loaded into target systems

Ensure Data Quality and Integrity:
• Validate and cleanse data to maintain high data quality
• Implement monitoring, validation, and error-handling mechanisms within data pipelines

Optimize Data Integration Processes:
• Improve the performance, scalability, and cost-efficiency of data integration workflows on AWS cloud infrastructure to meet SLAs
• Identify and resolve performance bottlenecks, fine-tune queries, and optimize data processing to enhance Redshift's performance
• Regularly review and refine integration processes to improve efficiency

Support Business Intelligence and Analytics:
• Translate business requirements into technical specifications and coded data pipelines
• Ensure timely availability of integrated data for business intelligence and analytics
• Collaborate with data analysts and business stakeholders to meet their data requirements

Maintain Documentation and Compliance:
• Document all data integration processes, workflows, and technical & system specifications
• Ensure compliance with data governance policies, industry standards, and regulatory requirements

MANDATORY TECHNICAL SKILLS REQUIRED:
• 5+ years of AWS Glue or AWS EMR experience building pipelines
• 4+ years of Redshift and Redshift Spectrum experience
• 1+ years of Apache Iceberg experience
• 1+ years of AWS ECS/ELB experience

REQUIRED SKILLS/EXPERIENCE:
• • MUST be able to interview ONSITE • •
• 4+ years of experience in data engineering, database design, and ETL processes
• 5+ years of experience with programming languages such as PySpark and Python
• 5+ years of experience with AWS tools and technologies (S3, EMR, Glue, Athena, Redshift, Postgres, RDS, Lambda, PySpark)
• 3+ years of experience working with databases, data marts, and data warehouses
• Solid knowledge of data analysis and data warehousing concepts (star/snowflake schema design, dimensional modeling, and reporting enablement)
• Proven experience in ETL development, system integration, and CI/CD implementation
• Experience with complex database objects used to move changed data across multiple environments
• Solid understanding of data security, privacy, and compliance
• Excellent problem-solving and communication skills; able to collaborate effectively with multi-functional teams
• Participate in agile development processes, including sprint planning, stand-ups, and retrospectives
• Provide technical guidance and mentorship to junior developers
• Attention to detail and a commitment to data quality
• Continuous learning mindset to keep up with evolving technologies and best practices in data engineering

EDUCATION:
Bachelor's degree in computer science, information technology, or a related field. A master's degree can be advantageous.

==========================================
Calance Consultant Benefits Offerings:
• EPO/PPO Medical Plans
• HMO/PPO Dental programs
• Vision - VSP (Vision Service Plan)
• 401K Retirement vesting program (VOYA)
• Paid Bi-Weekly/Direct Deposit
• Flex Spending Plan
• Voluntary Life, AD&D, STD or LTD plans
==========================================