NLB Services

AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer on a long-term contract in Atlanta, GA (hybrid, 3 days/week). Key skills include AWS services, SQL, Python, and PySpark. Experience with ETL/ELT pipelines and data governance is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 20, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Alpharetta, GA
-
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Data Lake #DevOps #Athena #SNS (Simple Notification Service) #AWS (Amazon Web Services) #PySpark #Spark (Apache Spark) #Datasets #Batch #Cloud #Data Quality #RDS (Amazon Relational Database Service) #SQL (Structured Query Language) #Schema Design #Security #Compliance #Data Ingestion #ETL (Extract, Transform, Load) #Data Science #Data Engineering #Databases #Redshift #Terraform #Python #SQS (Simple Queue Service) #Data Modeling #IAM (Identity and Access Management) #Lambda (AWS Lambda) #DynamoDB
Role description
Job Role: AWS Data Engineer
Location: Atlanta, GA - Hybrid (3 days per week onsite)
Job Type: Long-Term Contract

Key Responsibilities:
· Design, develop, and maintain batch and streaming ETL/ELT pipelines using AWS services (Glue, Lambda, Step Functions, etc.).
· Implement data ingestion frameworks from diverse sources (APIs, databases, streaming platforms).
· Ensure data quality, governance, and security across all pipelines.
· Build and optimize data lakes and warehouses leveraging Amazon S3, Redshift, Athena, and Lake Formation.
· Collaborate with data scientists, analysts, and business stakeholders to deliver reliable datasets.
· Monitor and troubleshoot data workflows, ensuring high availability and performance.
· Stay current with emerging AWS technologies and recommend improvements.

Required Skills & Qualifications:
· Strong experience with AWS cloud services: S3, Glue, Redshift, EMR, Kinesis, Lambda, DynamoDB, RDS, Athena, SNS, SQS.
· Proficiency in SQL, Python, and PySpark.
· Knowledge of data modeling, schema design, and performance tuning.
· Familiarity with CI/CD pipelines and DevOps tooling (Terraform, CloudFormation).
· Experience with security best practices (IAM, encryption, compliance).
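As a rough illustration of the batch ETL pattern this role centers on, here is a minimal stdlib-only Python sketch. It is not the employer's codebase: the in-memory CSV stands in for an S3 source, the JSON-lines output stands in for a data-lake write, and all function names are hypothetical. A real pipeline would use AWS Glue/PySpark, but the extract-transform-load shape and the data-quality check are the same idea.

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse raw CSV rows (stand-in for reading a file from S3)."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: cast types and drop rows failing a basic data-quality check."""
    clean = []
    for row in rows:
        try:
            row["amount"] = float(row["amount"])
        except (KeyError, ValueError):
            continue  # a production pipeline would quarantine bad records instead
        clean.append(row)
    return clean

def load(rows: list[dict]) -> str:
    """Load: serialize to JSON lines (stand-in for writing to a data lake)."""
    return "\n".join(json.dumps(r) for r in rows)

# One batch run: the malformed middle row is filtered out by the quality check.
raw = "id,amount\n1,10.5\n2,not-a-number\n3,7.0\n"
result = load(transform(extract(raw)))
```

In a Glue job, `extract`/`load` would be Spark reads and writes and `transform` would be DataFrame operations, but interviewers for roles like this often ask candidates to reason about exactly this separation of stages.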