

AWS Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer; the contract length and pay rate are not specified. Responsibilities include designing ETL/ELT pipelines, data ingestion, and optimizing data lakes using AWS services. Key skills required: Python, AWS, and data engineering experience.
Country
United States
Currency
$ USD
Day rate
Unknown
Date discovered
September 11, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Atlanta, GA
Skills detailed
#Python #ETL (Extract, Transform, Load) #Scala #DataScience #Lambda (AWS Lambda) #DataProcessing #S3 (Amazon Simple Storage Service) #DataAnalysis #DataWarehouse #DataLake #Redshift #AWS (Amazon Web Services) #DataPipeline #DataIngestion #DataEngineering #DataQuality
Role description
• Design, develop, and maintain scalable ETL/ELT data pipelines in AWS.
• Implement data ingestion from a variety of structured and unstructured data sources.
• Build and optimize data lakes and data warehouses using AWS services (e.g., S3, Redshift, Glue, EMR, Lambda).
• Write high-quality, efficient Python code for data processing and transformation.
• Develop and maintain data models and schemas to support analytics and reporting.
• Work closely with data analysts, data scientists, and business stakeholders to understand requirements and deliver data solutions.
• Monitor and troubleshoot data pipelines to ensure accuracy, reliability, and performance.
• Implement data quality and validation checks (a brief illustrative sketch follows this list).
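As a hedged illustration of the kind of pipeline these responsibilities describe (a sketch only, not part of the posting): a minimal Lambda-style Python ETL step that extracts a CSV from S3, applies a transformation, runs a basic data-quality check, and loads the result back to S3. The bucket names, event shape, and required-field list are assumptions for illustration.

```python
import csv
import io

import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Hypothetical bucket names -- placeholders, not from the posting.
SOURCE_BUCKET = "raw-zone"
TARGET_BUCKET = "curated-zone"


def extract(key: str) -> list[dict]:
    """Read a CSV object from the raw zone into a list of row dicts."""
    obj = s3.get_object(Bucket=SOURCE_BUCKET, Key=key)
    text = obj["Body"].read().decode("utf-8")
    return list(csv.DictReader(io.StringIO(text)))


def transform(rows: list[dict]) -> list[dict]:
    """Example transformation: normalize column names and trim whitespace."""
    return [
        {k.strip().lower(): v.strip() for k, v in row.items()}
        for row in rows
    ]


def validate(rows: list[dict], required: tuple[str, ...]) -> None:
    """Simple data-quality check: required fields must be present and non-empty."""
    for i, row in enumerate(rows):
        for col in required:
            if not row.get(col):
                raise ValueError(f"row {i}: missing required field {col!r}")


def load(rows: list[dict], key: str) -> None:
    """Write the transformed rows to the curated zone as CSV."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0].keys()))
    writer.writeheader()
    writer.writerows(rows)
    s3.put_object(Bucket=TARGET_BUCKET, Key=key, Body=buf.getvalue().encode("utf-8"))


def handler(event, context):
    """Lambda-style entry point: process the one object named in the event."""
    key = event["key"]  # assumed event shape, for illustration only
    rows = transform(extract(key))
    validate(rows, required=("id",))
    load(rows, key)
```

In practice the transform and load stages would more likely target Glue, EMR, or a Redshift COPY rather than plain CSV-to-S3, but the extract/transform/validate/load structure above is the shape of work the bullets outline.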