

AWS Data Engineer 3851
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer with a contract length of "unknown," offering a pay rate of "$XX/hour." Key skills include AWS tools, PySpark, and ETL processes. A bachelor's degree and 4-6+ years of data engineering experience are required.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 10, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Torrance, CA
Skills detailed
#Compliance #Security #Data Processing #Monitoring #Snowflake #AWS (Amazon Web Services) #Lambda (AWS Lambda) #Data Pipeline #Python #Datasets #"ETL (Extract, Transform, Load)" #PySpark #Data Quality #Data Lake #Data Mart #Data Governance #Data Warehouse #Cloud #S3 (Amazon Simple Storage Service) #Database Design #Redshift #Documentation #SQL (Structured Query Language) #Databases #Data Analysis #Athena #Programming #Computer Science #RDS (Amazon Relational Database Service) #Data Integration #Agile #AWS Glue #Data Security #Schema Design #Spark (Apache Spark) #Scala #BI (Business Intelligence) #Data Engineering
Role description
Daily Tasks Performed
Develop and Maintain Data Integration Solutions:
• Design and implement data integration workflows using AWS Glue, EMR, Lambda, and Redshift
• Use PySpark, Spark, and Python to process large datasets
• Ensure data is extracted, transformed, and loaded into target systems
• Build ETL pipelines using Iceberg (a minimal Glue/PySpark sketch follows this list)
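For orientation, here is a minimal sketch of the kind of Glue job these bullets describe: a PySpark script that extracts raw files from S3, applies light transforms, and appends into an Iceberg table in the Glue Data Catalog. The bucket, database, and table names are placeholders, and the job is assumed to already be configured for Iceberg; this is an illustration, not part of the posting.

```python
# Minimal AWS Glue PySpark job sketch: extract from S3, transform, load to Iceberg.
# All bucket, database, and table names below are placeholders, not from the posting.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read raw CSV files landed in the data lake (path is illustrative).
orders = (
    spark.read.option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Transform: cast types, drop obviously bad rows, add a load timestamp.
clean = (
    orders.withColumn("order_total", F.col("order_total").cast("double"))
    .filter(F.col("order_id").isNotNull())
    .withColumn("load_ts", F.current_timestamp())
)

# Load: append into an existing Iceberg table registered in the Glue Data Catalog
# (assumes the job's Spark conf wires up the "glue_catalog" Iceberg catalog).
clean.writeTo("glue_catalog.analytics.orders").append()

job.commit()
```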
Ensure Data Quality And Integrity
• Validate and cleanse data
• Ensure data quality and integrity by implementing monitoring, validation, and error-handling mechanisms within data pipelines (see the validation sketch after this list)
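As one hedged example of such a mechanism, the sketch below filters records against simple row-level rules, quarantines rejects for later investigation, and fails the run when the reject rate crosses a threshold. The paths, column names, and the 5% threshold are assumptions made for illustration.

```python
# Illustrative PySpark validation step: route bad records to a quarantine path
# and fail the pipeline if the reject rate breaches a threshold. Paths, column
# names, and the 5% threshold are assumptions, not requirements from the posting.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F

def validate_orders(df: DataFrame, quarantine_path: str, max_reject_rate: float = 0.05) -> DataFrame:
    # Row-level quality rules; a real pipeline would likely load these from config.
    rules = (
        F.col("order_id").isNotNull()
        & F.col("order_total").isNotNull()
        & (F.col("order_total") >= 0)
    )
    valid = df.filter(rules)
    rejected = df.filter(~rules)

    total = df.count()
    bad = rejected.count()
    if total and bad:
        # Persist rejects for investigation instead of silently dropping them.
        rejected.write.mode("append").parquet(quarantine_path)
    if total and bad / total > max_reject_rate:
        # Surface the failure so the orchestrator can alert rather than load bad data.
        raise ValueError(f"Reject rate {bad / total:.1%} exceeds {max_reject_rate:.0%}")
    return valid

if __name__ == "__main__":
    spark = SparkSession.builder.appName("order-validation").getOrCreate()
    raw = spark.read.parquet("s3://example-raw-bucket/orders/")  # placeholder path
    good = validate_orders(raw, "s3://example-quarantine-bucket/orders/")
    good.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")
```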
Optimize Data Integration Processes
• Enhance the performance and scalability of data integration workflows on AWS cloud infrastructure to meet SLAs
• Apply data analysis and data warehousing concepts (star/snowflake schema design, dimensional modeling, and reporting enablement; a dimensional-modeling sketch follows this list)
• Resolve performance bottlenecks
• Optimize data processing to enhance Redshift's performance
• Refine integration processes
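To ground the dimensional-modeling bullet, here is a hypothetical PySpark step that derives a date dimension and a sales fact from a curated dataset, the shape a star schema in Redshift or a data mart would consume. Table and column names are illustrative, and the DISTKEY/SORTKEY note in the comments is general Redshift practice rather than a stated requirement.

```python
# Hypothetical dimensional-modeling step: build a date dimension and a sales fact
# from curated orders. All paths and column names are placeholders for the sketch.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-star-schema").getOrCreate()
orders = spark.read.parquet("s3://example-curated-bucket/orders/")  # placeholder path

# Date dimension: one row per calendar day, keyed by an integer surrogate (yyyyMMdd).
dim_date = (
    orders.select(F.to_date("order_ts").alias("order_date"))
    .distinct()
    .withColumn("date_key", F.date_format("order_date", "yyyyMMdd").cast("int"))
    .withColumn("year", F.year("order_date"))
    .withColumn("month", F.month("order_date"))
    .withColumn("day_of_week", F.dayofweek("order_date"))
)

# Fact table: narrow numeric measures plus foreign keys into the dimensions.
fact_sales = (
    orders.withColumn("date_key", F.date_format(F.to_date("order_ts"), "yyyyMMdd").cast("int"))
    .select("order_id", "customer_id", "date_key", "order_total")
)

# In Redshift these would typically land with a DISTKEY on the join key and a
# SORTKEY on date_key; here they are simply written back to S3 for loading.
dim_date.write.mode("overwrite").parquet("s3://example-mart-bucket/dim_date/")
fact_sales.write.mode("overwrite").parquet("s3://example-mart-bucket/fact_sales/")
```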
Support Business Intelligence And Analytics
• Translate business requirements into technical specifications and coded data pipelines
• Ensure data is integrated and available for business intelligence and analytics
• Meet data requirements
Maintain Documentation And Compliance
• Document all data integration processes, workflows, and technical & system specifications
• Ensure compliance with data governance policies, industry standards, and regulatory requirements
What will this person be working on
• Design, development, and management of data integration processes
• Integrating data from diverse sources, transforming it to meet business requirements, and loading it into target systems such as data warehouses or data lakes
Position Success Criteria (Desired) - 'WANTS'
• Bachelor's degree in computer science, information technology, or a related field; a master's degree can be advantageous
• 4-6+ years of experience in data engineering, database design, and ETL processes
• Experience with Iceberg
• 5+ years of experience with programming languages such as PySpark, Python, and SQL
• 5+ years of experience with AWS tools and technologies (S3, EMR, Glue, Athena, Redshift, Postgres, RDS, Lambda, PySpark)
• 3+ years of experience working with databases, data marts, and data warehouses
• ETL development, system integration, and CI/CD implementation
• Experience building complex database objects to move changed data across multiple environments
• Solid understanding of data security, privacy, and compliance
• Participate in agile development processes, including sprint planning, stand-ups, and retrospectives
• Provide technical guidance and mentorship to junior developers