Voto Consulting LLC

AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer in Atlanta, GA (Hybrid) on a 6+ month contract, requiring expertise in ETL/ELT on AWS, SQL, Python, and PySpark. Key skills include data modeling and familiarity with CI/CD. Local candidates preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 4, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Atlanta, GA
🧠 - Skills detailed
#IAM (Identity and Access Management) #Databases #Compliance #Python #Data Quality #Schema Design #Athena #AWS (Amazon Web Services) #Data Engineering #SQS (Simple Queue Service) #SNS (Simple Notification Service) #Lambda (AWS Lambda) #DynamoDB #Datasets #DevOps #SQL (Structured Query Language) #Redshift #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #Terraform #Data Lake #ETL (Extract, Transform, Load) #Security #Data Ingestion #PySpark #Batch #Data Modeling #Cloud #Data Science #RDS (Amazon Relational Database Service)
Role description
Job Title: AWS Data Engineer
Location: Atlanta, GA (Hybrid – 3 days in office)
Target Start Date / Urgency to Fill Role: As early as possible
Duration: 6+ months contract
Note: Local consultants are needed and must be willing to attend an in-person interview if asked. Relocation is accepted.

Must-Haves:
• ETL/ELT on AWS, data ingestion frameworks, data lakes
• SQL, Python, PySpark, and CI/CD
• Data modeling, schema design, and performance tuning

Key Responsibilities
• Design, develop, and maintain batch and streaming ETL/ELT pipelines using AWS services (Glue, Lambda, Step Functions, etc.); a sketch of one such pipeline follows this section.
• Implement data ingestion frameworks from diverse sources (APIs, databases, streaming platforms).
• Ensure data quality, governance, and security across all pipelines.
• Build and optimize data lakes and warehouses leveraging Amazon S3, Redshift, Athena, and Lake Formation.
• Collaborate with data scientists, analysts, and business stakeholders to deliver reliable datasets.
• Monitor and troubleshoot data workflows, ensuring high availability and performance.
• Stay current with emerging AWS technologies and recommend improvements.

Required Skills & Qualifications
• Strong experience with AWS cloud services: S3, Glue, Redshift, EMR, Kinesis, Lambda, DynamoDB, RDS, Athena, SNS, SQS.
• Proficiency in SQL, Python, and PySpark.
• Knowledge of data modeling, schema design, and performance tuning.
• Familiarity with CI/CD pipelines and DevOps tooling (Terraform, CloudFormation).
• Experience with security best practices (IAM, encryption, compliance).
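A minimal sketch of the kind of batch ETL/ELT pipeline described above, assuming a hypothetical orders dataset: it reads raw JSON from an S3 raw zone, applies light data-quality transforms, and writes date-partitioned Parquet to a curated data-lake zone. The bucket names, paths, and column names are illustrative placeholders, not details from this posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-batch-etl").getOrCreate()

# Hypothetical raw-zone input; in an AWS Glue job this would typically
# be resolved from the Glue Data Catalog rather than a hard-coded path.
raw = spark.read.json("s3://example-raw-zone/orders/2026/02/")

cleaned = (
    raw.dropDuplicates(["order_id"])                     # basic data-quality step
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))  # partition key
       .filter(F.col("amount") > 0)                      # drop invalid rows
)

# Write to the curated zone, partitioned by date so Athena (or Redshift
# Spectrum) can prune partitions at query time.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-zone/orders/"))
```

In practice, logic like this would normally run inside an AWS Glue job (via a GlueContext) or on EMR, with CI/CD and Terraform/CloudFormation provisioning the surrounding resources.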