

Insight Global
AWS Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AWS Data Engineer on a long-term contract, offering a competitive pay rate. Requires 12+ years of experience, including 6+ years in Data Engineering and 4+ years building AWS data pipelines. Skills in ETL, PySpark, and advanced SQL are essential.
Country
United States
Currency
$ USD
-
Day rate
544
-
Date
April 15, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Charlotte, NC
-
Skills detailed
#Data Engineering #Scala #Data Architecture #AWS (Amazon Web Services) #PySpark #Cloud #AWS Glue #Ab Initio #SQL Queries #Data Profiling #Data Analysis #Spark (Apache Spark) #AI (Artificial Intelligence) #Programming #Python #Talend #ETL (Extract, Transform, Load) #Datasets #SQL (Structured Query Language) #Data Pipeline
Role description
We're hiring a Senior AWS Data Engineer to support a large-scale data modernization initiative. This role focuses on unifying and transforming data from legacy systems into scalable, cloud-based solutions on AWS. The ideal candidate is highly experienced, hands-on, and comfortable working independently in complex data environments.
What You'll Do
• Design, build, and maintain AWS-based data pipelines to support enterprise data initiatives
• Develop and optimize ETL processes to integrate data from multiple legacy systems
• Use PySpark and Python to process and transform large datasets
• Write and optimize advanced SQL queries for data analysis, profiling, and validation
• Perform deep data analysis to understand source data, relationships, and quality issues
• Partner with stakeholders to gather and manage technical requirements
• Test, troubleshoot, and enhance existing data models and pipelines
• Contribute ideas around AI-driven approaches (including agents) to improve data workflows
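To give a flavor of the data profiling and validation work described above, here is a minimal, hypothetical sketch using Python's built-in sqlite3. The posting names no specific database or schema; the `customers` table, its columns, and the sample rows are invented for illustration only:

```python
import sqlite3

# Hypothetical sample data: a tiny "customers" table with one null email
# and one duplicated id, so the profiling query has something to find.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [(1, "a@x.com"), (2, None), (3, "a@x.com"), (3, "b@x.com")],
)

# Profiling query: row count, null count, and duplicate-key count in one pass.
total_rows, null_emails, duplicate_ids = conn.execute("""
    SELECT COUNT(*)                                AS total_rows,
           SUM(CASE WHEN email IS NULL THEN 1 END) AS null_emails,
           COUNT(*) - COUNT(DISTINCT id)           AS duplicate_ids
    FROM customers
""").fetchone()

print(total_rows, null_emails, duplicate_ids)  # 4 1 1
```

In production this kind of check would typically run against the source system or a staged extract (e.g., via PySpark or AWS Glue) rather than SQLite; the query pattern is the same.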
Required Skills & Experience
• 12+ years of overall professional experience
• 6+ years of hands-on Data Engineering experience
• 4+ years of experience building data pipelines in AWS
• Strong background in ETL tools and frameworks (e.g., Ab Initio, Talend, AWS Glue)
• Hands-on experience with PySpark
• Strong object-oriented programming skills (Python preferred)
• Advanced SQL expertise, including:
  • Complex joins and queries
  • Data profiling, summarization, and validation
• Strong understanding of data relationships, structure, and quality
• Proven ability to work independently and solve complex data problems
Nice to Have
• Experience applying AI concepts or agents within data engineering workflows
• Background in large enterprise or financial environments
Why Join
• Work on high-impact, enterprise-scale data initiatives
• Long-term project stability
• Opportunity to shape and improve modern cloud data architecture
• Collaborative team with autonomy and ownership
