Extend Information Systems Inc.

AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer in Atlanta, GA, on a long-term contract. Required skills include AWS services (S3, Glue, Redshift), SQL, Python, and PySpark. Experience with data ingestion, DevOps practices, and security best practices is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#Data Quality #Terraform #PySpark #Redshift #Spark (Apache Spark) #DevOps #SQL (Structured Query Language) #Data Lake #Lambda (AWS Lambda) #ETL (Extract, Transform, Load) #S3 (Amazon Simple Storage Service) #Batch #SNS (Simple Notification Service) #AWS (Amazon Web Services) #Security #Data Engineering #Athena #Datasets #IAM (Identity and Access Management) #Schema Design #Python #SQS (Simple Queue Service) #Databases #DynamoDB #Data Science #RDS (Amazon Relational Database Service) #Compliance #Cloud #Data Ingestion #Data Modeling
Role description
Job Title: AWS Data Engineer
Location: Atlanta, GA
Duration: Long-Term Contract
Job Description:
Key Responsibilities:
• Design, develop, and maintain batch and streaming ETL/ELT pipelines using AWS services (Glue, Lambda, Step Functions, etc.).
• Implement data ingestion frameworks from diverse sources (APIs, databases, streaming platforms).
• Ensure data quality, governance, and security across all pipelines.
• Build and optimize data lakes and warehouses leveraging Amazon S3, Redshift, Athena, and Lake Formation.
• Collaborate with data scientists, analysts, and business stakeholders to deliver reliable datasets.
• Monitor and troubleshoot data workflows, ensuring high availability and performance.
• Stay current with emerging AWS technologies and recommend improvements.
Required Skills & Qualifications:
• Strong experience with AWS cloud services: S3, Glue, Redshift, EMR, Kinesis, Lambda, DynamoDB, RDS, Athena, SNS, SQS.
• Proficiency in SQL, Python, and PySpark.
• Knowledge of data modeling, schema design, and performance tuning.
• Familiarity with CI/CD pipelines and DevOps tools (Terraform, CloudFormation).
• Experience with security best practices (IAM, encryption, compliance).
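To give a feel for the "ensure data quality across all pipelines" responsibility above, here is a minimal, hedged sketch in plain Python of the kind of validation gate a Glue or Lambda step might apply before loading records downstream. All function and field names are illustrative, not taken from the posting or any AWS API.

```python
# Hypothetical data-quality gate: split a batch of records into valid and
# rejected sets before loading, as an ETL step might do ahead of a Redshift
# COPY or an S3 write. Field names ("id", "event_time") are assumptions.

def validate_records(records, required_fields=("id", "event_time")):
    """Return (valid, rejected) lists based on required, non-empty fields."""
    valid, rejected = [], []
    for rec in records:
        if all(rec.get(f) not in (None, "") for f in required_fields):
            valid.append(rec)
        else:
            rejected.append(rec)
    return valid, rejected

batch = [
    {"id": 1, "event_time": "2026-05-09T00:00:00Z", "value": 42},
    {"id": 2, "event_time": "", "value": 7},  # missing event_time -> rejected
]
good, bad = validate_records(batch)
```

In a real pipeline the rejected records would typically be routed to a quarantine location (for example an S3 dead-letter prefix) and surfaced via SNS, rather than silently dropped.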