

Precision Technologies
Lead AWS Data Engineer with DynamoDB Exp || Min 12+ Years
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead AWS Data Engineer with 12+ years of experience, located in Miami, FL. Key skills include Python, AWS Glue, DynamoDB, SQL, and CI/CD tooling. On-site work is required; the pay rate is unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 3, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Miami, FL
-
🧠 - Skills detailed
#Triggers #Version Control #AWS Glue #IAM (Identity and Access Management) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Infrastructure as Code (IaC) #Data Pipeline #Data Processing #DynamoDB #Data Engineering #Git #Data Modeling #SQL (Structured Query Language) #Lambda (AWS Lambda) #Automation #Terraform #Python #PySpark #Postman #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Cloud #API (Application Programming Interface)
Role description
Role: Lead AWS Data Engineer with DynamoDB Experience
Location: Miami, FL
Experience: 12+ years
Job Description
• Hands-on data engineering experience with a proven track record of building production data pipelines
• Expert-level proficiency in Python for data processing, ETL development, and automation (PySpark)
• Deep experience with AWS Glue, including job development, crawlers, triggers, and workflow orchestration
• Strong expertise in DynamoDB data modeling, capacity planning, streams, and performance optimization
• Solid understanding of the AWS ecosystem: S3, Lambda, Step Functions, CloudWatch, IAM, KMS, Kinesis
• Proficiency with Postman for API development, testing, and collaboration
• Strong SQL skills and an understanding of data warehousing principles
• Experience with version control (Git), CI/CD tools, and infrastructure as code (CloudFormation/Terraform)
