

Direct Client Position - Sr. AWS Data Engineer - Remote
Featured Role | Apply direct with Data Freelance Hub
This is a remote contract position for a Sr. AWS Data Engineer at a competitive pay rate. It requires 8+ years of experience with AWS Glue, PySpark, DynamoDB, and Snowflake, along with strong data modeling and ETL pipeline development skills.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 6, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #AWS (Amazon Web Services) #Terraform #Scala #AWS Glue #PySpark #Security #Snowflake #ETL (Extract, Transform, Load) #Storage #Data Processing #Data Pipeline #S3 (Amazon Simple Storage Service) #Data Modeling #Data Engineering #DynamoDB #Spark (Apache Spark) #IAM (Identity and Access Management) #Data Warehouse #Data Integrity #Python #Batch #Data Science #Lambda (AWS Lambda) #Compliance #Automation #Data Storage #Schema Design
Role description
Role: Sr. AWS Data Engineer
Location: Remote
Type of position: Contract
Job description:
Responsibilities:
• Collaborate with cross-functional teams, including Data Scientists, Analysts, and Engineers, to gather data requirements and build scalable data solutions.
• Design, develop, and maintain complex ETL pipelines using AWS Glue and PySpark, ensuring efficient data processing across batch and streaming workloads (see the sketch after this list).
• Integrate and manage data storage and retrieval using AWS DynamoDB and Snowflake, optimizing for performance and scalability.
• Ensure data integrity, quality, and security across data pipelines, applying best practices for encryption, IAM, and compliance.
• Monitor and troubleshoot pipeline issues, continuously optimizing for cost and performance across AWS services.
• Stay current with advancements in AWS Glue, PySpark, and data infrastructure tools, and recommend improvements where applicable.
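To make the Glue/PySpark responsibility concrete, here is a minimal sketch of the kind of Glue ETL job the role describes. It is illustrative only: the database, table, column, and S3 path names (raw_db, orders, order_id, order_ts, s3://example-curated-bucket/...) are assumptions, not details from this posting, and the script runs only inside an AWS Glue job environment.

```python
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrapping
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog
# ("raw_db" / "orders" are hypothetical names)
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="orders"
)

# Basic PySpark cleanup: deduplicate and derive a date column for partitioning
orders_df = (
    orders.toDF()
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
)

# Write the curated output back to S3 as Parquet (hypothetical bucket/path)
glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(orders_df, glue_context, "curated_orders"),
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```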
Experience / Minimum Requirements:
• 8+ years of experience as a Data Engineer, with strong hands-on expertise in AWS Glue, PySpark, AWS DynamoDB, and Snowflake (a minimal DynamoDB load sketch follows this list).
• Deep understanding of Spark architecture, distributed processing, and performance tuning techniques.
• Strong grasp of data modeling, schema design, and data warehouse concepts.
• Experience with the AWS data ecosystem, including S3, Lambda, and the Glue Data Catalog.
• Proficiency in Python (PySpark) for data transformation and automation tasks.
• Familiarity with CI/CD practices and infrastructure-as-code tools such as Terraform is a plus.
• Excellent communication and problem-solving skills, with the ability to work independently and in a team environment.
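Since the role pairs Glue pipelines with DynamoDB as a serving store, the short sketch below shows one common pattern for loading curated records into DynamoDB with boto3's batch writer. The table name, region, and item shape are assumptions for illustration, not requirements from this posting.

```python
import boto3

# Hypothetical table and region; batch_writer() batches put_item calls and
# retries unprocessed items automatically.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
table = dynamodb.Table("curated_orders")

# Example curated records (numeric totals kept as strings because the
# DynamoDB resource API does not accept Python floats)
curated_records = [
    {"order_id": "1001", "customer_id": "C-17", "order_date": "2025-08-01", "total": "129.99"},
    {"order_id": "1002", "customer_id": "C-42", "order_date": "2025-08-02", "total": "54.50"},
]

with table.batch_writer() as batch:
    for record in curated_records:
        batch.put_item(Item=record)
```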