

Kastech Software Solutions Group
Lead AWS Data Engineer with DynamoDB
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead AWS Data Engineer with DynamoDB in Miami, FL, on a contract of unspecified length at a $560/day rate. It requires 12+ years of experience, expertise in Python, AWS Glue, and DynamoDB, and strong SQL skills.
Country: United States
Currency: $ USD
Day rate: $560
Date: March 3, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Miami, FL
Skills detailed: #"ETL (Extract, Transform, Load)" #Lambda (AWS Lambda) #Data Processing #DynamoDB #AWS (Amazon Web Services) #AWS Glue #GIT #PySpark #Data Modeling #API (Application Programming Interface) #POSTMAN #Python #Infrastructure as Code (IaC) #Data Engineering #Data Pipeline #S3 (Amazon Simple Storage Service) #Version Control #Cloud #Spark (Apache Spark) #Automation #Triggers #IAM (Identity and Access Management) #Terraform #SQL (Structured Query Language)
Role description
Role: Lead AWS Data Engineer with DynamoDB experience
Location: Miami, FL
Experience: 12+ years
Job Description:
Hands-on data engineering experience with a proven track record of building production data pipelines
Expert-level proficiency in Python for data processing, ETL development, and automation (PySpark)
Deep experience with AWS Glue, including job development, crawlers, triggers, and workflow orchestration
Strong expertise in DynamoDB data modeling, capacity planning, streams, and performance optimization
Solid understanding of the AWS ecosystem: S3, Lambda, Step Functions, CloudWatch, IAM, KMS, Kinesis
Proficiency with Postman for API development, testing, and collaboration
Strong SQL skills and understanding of data warehousing principles
Experience with version control (Git), CI/CD tools, and infrastructure as code (CloudFormation/Terraform)
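As a rough illustration of the DynamoDB data-modeling skill the role lists, a minimal single-table key scheme in Python might look like the sketch below. All entity names and key formats here are hypothetical, not taken from the posting:

```python
# Hypothetical single-table DynamoDB key design.
# Related items share a partition key (pk); the sort key (sk)
# orders items so a Query with begins_with("ORDER#") can fetch
# a customer's orders in timestamp order.

def order_keys(customer_id: str, order_id: str, placed_at: str) -> dict:
    """Build pk/sk for an order item belonging to one customer."""
    return {
        "pk": f"CUSTOMER#{customer_id}",
        "sk": f"ORDER#{placed_at}#{order_id}",
    }

def customer_profile_keys(customer_id: str) -> dict:
    """The profile item lives in the same partition as the orders."""
    return {"pk": f"CUSTOMER#{customer_id}", "sk": "PROFILE"}
```

Putting the profile and its orders in one partition lets a single Query retrieve a customer's full record set, which is the core trade-off behind the single-table modeling mentioned in the requirements.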
