AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 16, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Malvern, PA
-
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Security #Athena #Spark (Apache Spark) #Python #Data Pipeline #Cloud #Apache Spark #IAM (Identity and Access Management) #Data Lake #Presto #ETL (Extract, Transform, Load) #Programming #Data Analysis #DevOps #Schema Design #Lambda (AWS Lambda) #EC2 #Leadership #Data Access #Data Processing #Scala #AWS (Amazon Web Services) #Data Engineering #Redshift #Data Modeling
Role description
JOB DESCRIPTION: We are seeking a highly skilled Senior Data Engineer with deep expertise in AWS services and data engineering frameworks. The ideal candidate will play a pivotal role in designing, building, and optimizing scalable data pipelines and cloud-native solutions. This role demands a strong understanding of cloud architecture and data modeling, along with excellent communication skills for collaborating across cross-functional teams.

Key Responsibilities:
• Architect and implement robust data pipelines using AWS services such as Glue, EC2, ECS, Lambda, Step Functions, IAM, Athena, Presto, S3, and Redshift
• Collaborate with stakeholders to define source-to-target mappings and contribute to the development of target data models
• Enable scalable data engineering solutions using Python, Apache Spark, and other modern frameworks
• Optimize data workflows and ensure high performance, reliability, and security across cloud environments
• Provide technical leadership and mentorship to junior engineers and data analysts
• Communicate effectively with technical and non-technical stakeholders to translate business requirements into technical solutions
• Work with tools like Hue and Dremio (preferred) to enhance data accessibility and usability

Required Skills & Qualifications:
• Proven hands-on experience with AWS data services and cloud infrastructure
• Strong programming skills in Python and Spark for data processing and transformation
• Experience designing and implementing ETL/ELT pipelines and data lake architectures
• Familiarity with data modeling concepts and the ability to contribute to schema design
• Knowledge of Dremio or similar query engines is a plus
• Excellent problem-solving and analytical skills
• Strong communication and collaboration abilities

Preferred Qualifications:
• AWS certifications (e.g., AWS Certified Data Analytics, Solutions Architect)
• Experience with CI/CD pipelines and DevOps practices in cloud environments
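To give candidates a feel for the Lambda-based pipeline work described above, here is a minimal, self-contained sketch of an event-driven extraction step. It is illustrative only, not part of the posting: the handler name and the simple return shape are assumptions, though the nested `Records[].s3.bucket.name` / `object.key` fields follow the standard S3 event notification structure.

```python
import json
import urllib.parse

def handler(event, context):
    """Illustrative AWS Lambda-style handler: collect the S3 objects
    referenced by an S3 event notification payload.

    Hypothetical example for this posting; in a real pipeline this
    step would typically trigger a Glue job or Step Functions workflow.
    """
    objects = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        bucket = s3["bucket"]["name"]
        # S3 event payloads URL-encode the object key (e.g. spaces as '+').
        key = urllib.parse.unquote_plus(s3["object"]["key"])
        objects.append(f"s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps(objects)}

# Example invocation with a hand-built event payload:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "demo-bucket"},
                "object": {"key": "raw%20data/file.csv"}}}
    ]
}
print(handler(sample_event, None)["body"])
```

Decoding the key before use matters because keys containing spaces or special characters arrive percent-encoded in the event and would otherwise produce "object not found" errors on lookup.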