

AWS Data Engineer ( 3851 )
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer in Torrance, CA, offering an 18+ month contract at W2 pay. Requires a Bachelor's degree, 4-6+ years in data engineering, and 5+ years with AWS tools, PySpark, and Python.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 3, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Torrance, CA
Skills detailed
#AWS (Amazon Web Services) #Apache Spark #Spark (Apache Spark) #Data Processing #"ETL (Extract, Transform, Load)" #PySpark #Database Design #Data Integration #Athena #RDS (Amazon Relational Database Service) #Programming #S3 (Amazon Simple Storage Service) #Computer Science #Lambda (AWS Lambda) #Datasets #Data Engineering #Redshift #AWS Glue #Python
Role description
A client of Sharp Decisions is looking for an AWS Data Engineer. This role is HYBRID in Torrance, CA, with an initial contract of 18+ months, W2 only.
Daily Tasks Performed:
• Design and implement data integration workflows using AWS Glue/EMR, Lambda, and Redshift
• Demonstrate proficiency in PySpark, Apache Spark, and Python for processing large datasets
• Ensure data is accurately and efficiently extracted, transformed, and loaded into target systems.
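The extract-transform-load flow in the duties above can be sketched in plain Python for illustration; on the job this would run as a PySpark job on Glue or EMR reading from S3, but the shape is the same. All data, field names, and helpers here are hypothetical, not taken from the posting.

```python
import csv
import io

# Hypothetical raw CSV, standing in for data extracted from S3
RAW = """order_id,customer_id,amount
o1,c1,10.5
o2,c1,4.5
o3,c2,
"""

def extract(text):
    """Parse CSV rows into dicts (in production: read from S3 via Glue/EMR)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Cast amounts, drop malformed rows, and aggregate spend per customer."""
    totals = {}
    for row in rows:
        if not row["amount"]:
            continue  # skip rows with a missing amount
        totals[row["customer_id"]] = totals.get(row["customer_id"], 0.0) + float(row["amount"])
    return totals

def load(totals):
    """Stand-in for loading into Redshift/RDS: return the rows to be written."""
    return sorted(totals.items())

result = load(transform(extract(RAW)))
```

The PySpark equivalent would express the transform step as DataFrame operations (`withColumn` cast, `dropna`, `groupBy().agg()`) instead of a Python loop.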
Qualifications:
• Bachelor's degree in computer science, information technology, or a related field. A master's degree can be advantageous.
• 4-6+ years of experience in data engineering, database design, and ETL processes
• 5+ years of experience in programming languages such as PySpark and Python
• 5+ years of experience with AWS tools and technologies (S3, EMR, Glue, Athena, Redshift, Postgres, RDS, Lambda, PySpark)