

Excelon Solutions
AWS Data Engineer/ Architect
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer/Architect with a contract length of "unknown" and a pay rate of "unknown." Candidates should have 8+ years of experience in data warehousing, 4+ years in AWS, and proficiency in Python, PySpark, and SQL.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Torrance, CA
-
🧠 - Skills detailed
#AWS Glue #Amazon Redshift #Data Engineering #Redshift #Data Processing #Snowflake #Data Analysis #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Airflow #AWS (Amazon Web Services) #Schema Design #Python #SQL (Structured Query Language) #Dimensional Modelling #Lambda (AWS Lambda) #Cloud #PySpark
Role description
• Overall experience of 8+ years in data warehousing and/or data mesh implementations, with 4+ years in the AWS cloud ecosystem.
• Strong experience with distributed data processing (Spark, AWS Glue, EMR, or equivalent).
• Hands-on expertise with data modelling, ETL pipelines, and performance optimization.
• Strong hands-on expertise in building and optimizing ETL pipelines into Amazon Redshift.
• Proficiency in Python, PySpark, and SQL; familiarity with Iceberg tables preferred.
• Solid background in Data Analysis and Data Warehousing concepts (star/snowflake schema design, dimensional modelling, and reporting enablement).
• Orchestration experience with Airflow, Step Functions, and Lambda.
• Experience with Redshift performance tuning, schema design, and workload management.
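For candidates unfamiliar with the star-schema design mentioned in the requirements, a minimal sketch: one fact table joined to surrounding dimension tables. This uses SQLite purely for illustration; all table and column names here are invented, not part of the role.

```python
import sqlite3

# Minimal star schema: a fact table keyed to two dimension tables.
# Names (dim_customer, fact_sales, etc.) are hypothetical examples.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_name TEXT
);
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,
    calendar_date TEXT
);
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    amount REAL
);
""")

cur.execute("INSERT INTO dim_customer VALUES (1, 'Acme Corp')")
cur.execute("INSERT INTO dim_date VALUES (20260415, '2026-04-15')")
cur.execute("INSERT INTO fact_sales VALUES (1, 1, 20260415, 99.50)")

# Reporting enablement: a typical query joins the fact to its dimensions.
row = cur.execute("""
SELECT c.customer_name, d.calendar_date, f.amount
FROM fact_sales f
JOIN dim_customer c ON f.customer_key = c.customer_key
JOIN dim_date d ON f.date_key = d.date_key
""").fetchone()
print(row)  # ('Acme Corp', '2026-04-15', 99.5)
```

In production this pattern maps onto Redshift, where distribution and sort keys on the fact table drive the performance tuning the role calls for.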
