

KTek Resourcing
AWS Data Engineer - Charlotte, NC (Hybrid)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer in Charlotte, NC (Hybrid); the contract length and pay rate are unspecified. It requires 5+ years in Data Engineering, strong AWS skills, Python, PySpark, and experience with data lakehouse architecture.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 29, 2025
-
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Data Quality #Data Warehouse #Aurora #PySpark #Terraform #Data Engineering #Lambda (AWS Lambda) #Compliance #Apache Iceberg #Data Lake #Spark (Apache Spark) #Redshift #Data Transformations #Infrastructure as Code (IaC) #Data Security #Datasets #Data Ingestion #Python #Scala #AWS Glue #Security #Data Science #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #Data Lakehouse
Role description
Details of the job position:
Role: AWS Data Engineer
Location: Charlotte, NC (Hybrid)
Key Responsibilities:
• Design, build, and maintain scalable ETL pipelines and data ingestion frameworks using AWS Glue, Lambda, EMR, and PySpark (a minimal PySpark sketch follows this list).
• Develop and optimize data lakes and data warehouses using Redshift, Aurora, and Iceberg.
• Implement data partitioning, compression, and schema evolution strategies for Iceberg tables.
• Collaborate with data scientists, analysts, and business teams to ensure data quality, governance, and availability.
• Automate infrastructure provisioning and data workflows using Terraform and CI/CD pipelines.
• Optimize AWS services for performance, reliability, and cost efficiency.
• Integrate various data sources, including structured and unstructured datasets, into the centralized data platform.
• Ensure compliance with data security and governance standards across environments.
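For candidates gauging fit, here is a minimal PySpark sketch of the ETL pattern described above. The bucket names, columns, and "orders" dataset are hypothetical illustrations, not KTek's actual pipeline.

# A minimal PySpark ETL sketch; all paths and column names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Extract: read raw JSON landed in a (hypothetical) S3 ingestion prefix.
raw = spark.read.json("s3://example-raw-bucket/orders/2025/10/")

# Transform: deduplicate, enforce types, and derive the partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: append partitioned Parquet to the curated zone of the data lake.
(clean.write
      .mode("append")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))

Inside an AWS Glue job, the same logic would typically run through a GlueContext and DynamicFrames; plain Spark DataFrames are used here only to keep the sketch self-contained.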
Required Skills & Experience:
• 5+ years of experience in Data Engineering or related fields.
• Strong hands-on experience with AWS Glue, Lambda, EMR, Redshift, and Aurora.
• Proficiency in Python and PySpark for data transformations and analytics.
• Experience implementing data lakehouse architecture using Apache Iceberg (an Iceberg sketch follows this list).
• Working knowledge of Terraform for IaC (Infrastructure as Code).
• Strong understanding of data warehousing, ETL, and ELT concepts.
• Experience in performance tuning of AWS data services and Spark jobs.
• Excellent problem-solving and communication skills.
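For illustration, a minimal sketch of the Iceberg partitioning, compression, and schema-evolution work named above, driven from PySpark. It assumes a Spark session already configured with an Iceberg catalog named "glue_catalog"; the sales.orders table and its columns are hypothetical.

# A minimal Iceberg table-management sketch; catalog and table names are assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg_maintenance").getOrCreate()

# Partitioning and compression are declared once at table creation. days()
# is Iceberg's hidden-partitioning transform, so queries filter on order_ts
# directly without knowing the physical partition layout.
spark.sql("""
    CREATE TABLE IF NOT EXISTS glue_catalog.sales.orders (
        order_id BIGINT,
        amount   DECIMAL(12,2),
        order_ts TIMESTAMP
    )
    USING iceberg
    PARTITIONED BY (days(order_ts))
    TBLPROPERTIES ('write.parquet.compression-codec' = 'zstd')
""")

# Schema evolution: adding a column is a metadata-only change in Iceberg,
# so no existing data files are rewritten.
spark.sql("ALTER TABLE glue_catalog.sales.orders ADD COLUMNS (channel STRING)")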





