

100% Remote -- (***12+ Years Only***) Senior AWS Data Engineer - Contract
Featured Role | Apply direct with Data Freelance Hub
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 9, 2025
Project duration
More than 6 months
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
United States
Skills detailed
#Automation #Data Processing #Snowflake #AWS (Amazon Web Services) #Data Warehouse #Spark (Apache Spark) #Scala #Data Modeling #Python #Data Integrity #Compliance #DynamoDB #Data Pipeline #Terraform #AWS Glue #ETL (Extract, Transform, Load) #Storage #Schema Design #PySpark #S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #Batch #Data Science #Data Storage #IAM (Identity and Access Management) #Security #Data Engineering
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Accion Labs, is seeking the following. Apply via Dice today!
100% Remote -- Senior AWS Data Engineer - 6-month contract
Experience / Minimum Requirements:
12+ years of experience in IT, with strong hands-on expertise in AWS Glue, PySpark, Python, and Snowflake (see the illustrative Glue job sketch after this list).
Deep understanding of Spark architecture, distributed processing, and performance tuning techniques.
Strong grasp of data modeling, schema design, and data warehouse concepts.
Experience with the AWS data ecosystem, including S3, Lambda, and the Glue Data Catalog.
Proficiency in Python (PySpark) for data transformation and automation tasks.
Familiarity with CI/CD practices and infrastructure-as-code tools such as Terraform is a plus.
Excellent communication and problem-solving skills, with the ability to work independently and in a team environment.
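For illustration only (not part of the client's requirements): a minimal sketch of the kind of AWS Glue PySpark job this stack implies, reading raw CSV from S3, applying a simple cleansing step, and writing partitioned Parquet back to S3. The bucket paths, column names, and job name are hypothetical, and the awsglue modules are only available inside the Glue runtime.

```python
# Illustrative sketch only -- a minimal AWS Glue PySpark job.
# Buckets, columns, and job parameters are hypothetical placeholders.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV from a (hypothetical) landing bucket; in practice this could
# also come from a Glue Data Catalog table.
raw = spark.read.option("header", "true").csv("s3://example-landing-bucket/orders/")

# Basic cleansing: deduplicate on the business key and stamp the load time.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .withColumn("load_ts", F.current_timestamp())
)

# Write partitioned Parquet to a curated zone (assumes an order_date column exists).
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)

job.commit()
```

The same skeleton extends naturally to streaming or incremental loads; only the read source and write mode change.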
Responsibilities:
Collaborate with cross-functional teams including Data Scientists, Analysts, and Engineers to gather data requirements and build scalable data solutions.
Design, develop, and maintain complex ETL pipelines using AWS Glue and PySpark, ensuring efficient data processing across batch and streaming workloads.
Integrate and manage data storage and retrieval using AWS DynamoDB and Snowflake, optimizing for performance and scalability (see the illustrative Snowflake write sketch after this list).
Ensure data integrity, quality, and security across data pipelines, applying best practices for encryption, IAM, and compliance.
Monitor and troubleshoot pipeline issues, continuously optimizing for cost and performance across AWS services.
Stay current with advancements in AWS Glue, PySpark, and data infrastructure tools, and recommend improvements where applicable.
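Also for illustration only: a hedged sketch of loading a curated DataFrame into Snowflake with the Spark-Snowflake connector, one common way to implement the Snowflake integration described above (the connector JARs must be attached to the Glue/Spark job; DynamoDB writes would typically go through boto3 instead). The account, warehouse, database, table, and credentials shown are placeholders; real credentials would normally be fetched from AWS Secrets Manager.

```python
# Illustrative sketch only -- appending a curated DataFrame to a Snowflake table
# via the Spark-Snowflake connector. All connection values are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("snowflake_write_sketch").getOrCreate()

# Stand-in for the curated output of an upstream Glue/PySpark transformation.
curated = spark.createDataFrame(
    [("1001", "2025-09-01", 49.99)],
    ["order_id", "order_date", "amount"],
)

sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",   # hypothetical account
    "sfUser": "ETL_SERVICE_USER",
    "sfPassword": "REPLACE_WITH_SECRETS_MANAGER_LOOKUP",  # never hard-code in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "CURATED",
    "sfWarehouse": "ETL_WH",
}

# Append rows to the target table; the connector pushes the write down to Snowflake.
(
    curated.write.format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS")
    .mode("append")
    .save()
)
```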