Insight Global

AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior AWS Data Engineer on a long-term contract, offering a competitive pay rate. Requires 12+ years of experience, including 6+ years in Data Engineering and 4+ years building AWS data pipelines. Skills in ETL, PySpark, and advanced SQL are essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
544
-
🗓️ - Date
April 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Data Engineering #Scala #Data Architecture #AWS (Amazon Web Services) #PySpark #Cloud #AWS Glue #Ab Initio #SQL Queries #Data Profiling #Data Analysis #Spark (Apache Spark) #AI (Artificial Intelligence) #Programming #Python #Talend #ETL (Extract, Transform, Load) #Datasets #SQL (Structured Query Language) #Data Pipeline
Role description
We’re hiring a Senior AWS Data Engineer to support a large-scale data modernization initiative. This role focuses on unifying and transforming data from legacy systems into scalable, cloud-based solutions using AWS. The ideal candidate is highly experienced, hands-on, and comfortable working independently in complex data environments.

What You’ll Do
• Design, build, and maintain AWS-based data pipelines to support enterprise data initiatives
• Develop and optimize ETL processes to integrate data from multiple legacy systems
• Use PySpark and Python to process and transform large datasets
• Write and optimize advanced SQL queries for data analysis, profiling, and validation
• Perform deep data analysis to understand source data, relationships, and quality issues
• Partner with stakeholders to gather and manage technical requirements
• Test, troubleshoot, and enhance existing data models and pipelines
• Contribute ideas around AI-driven approaches (including agents) to improve data workflows

Required Skills & Experience
• 12+ years of overall professional experience
• 6+ years of hands-on Data Engineering experience
• 4+ years of experience building data pipelines in AWS
• Strong background in ETL tools and frameworks (e.g., Ab Initio, Talend, AWS Glue)
• Hands-on experience with PySpark
• Strong object-oriented programming skills (Python preferred)
• Advanced SQL expertise, including:
  • Complex joins and queries
  • Data profiling, summarization, and validation
• Strong understanding of data relationships, structure, and quality
• Proven ability to work independently and solve complex data problems

Nice to Have
• Experience applying AI concepts or agents within data engineering workflows
• Background in large enterprise or financial environments

Why Join
• Work on high-impact, enterprise-scale data initiatives
• Long-term project stability
• Opportunity to shape and improve modern cloud data architecture
• Collaborative team with autonomy and ownership
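To give candidates a feel for the SQL data-profiling and validation work the role describes, here is a minimal, self-contained sketch using Python's standard-library sqlite3. The `customers` table, its columns, and all rows are hypothetical sample data invented for illustration; the actual project would run comparable queries against legacy source systems via PySpark or AWS Glue.

```python
import sqlite3

# Hypothetical sample data: a tiny "customers" table with a null email,
# a null state, and a duplicate email, so the profiling query has
# something to surface. None of this reflects real project data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER, email TEXT, state TEXT);
    INSERT INTO customers VALUES
        (1, 'a@example.com', 'NC'),
        (2, 'b@example.com', NULL),
        (3, 'b@example.com', 'NC'),
        (4, NULL,            'SC');
""")

# A single profiling query: total rows, distinct non-null emails,
# and per-column null counts -- the kind of summarization used to
# assess source-data quality before building a pipeline.
row = conn.execute("""
    SELECT COUNT(*)                                AS total_rows,
           COUNT(DISTINCT email)                   AS distinct_emails,
           SUM(CASE WHEN email IS NULL THEN 1 END) AS null_emails,
           SUM(CASE WHEN state IS NULL THEN 1 END) AS null_states
    FROM customers
""").fetchone()

total_rows, distinct_emails, null_emails, null_states = row
print(total_rows, distinct_emails, null_emails, null_states)  # 4 2 1 1
```

Note that `COUNT(DISTINCT email)` ignores NULLs, so the null email is counted separately; comparing total rows against distinct non-null values is a quick way to spot both missing data and duplicates in one pass.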