

Appex Innovation
AWS Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer in Fort Mill, SC, offering $60.00 - $65.00 per hour for a 12+ month contract. Key skills include Python, AWS Glue, and Kafka. Experience in real-time streaming and data quality frameworks is essential.
Country: United States
Currency: $ USD
Day rate: 520
Date: March 5, 2026
Duration: Unknown
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: Fort Mill, SC 29707
Skills detailed: #PySpark #Data Quality #Terraform #Cloud #Data Engineering #AWS Glue #AWS (Amazon Web Services) #Data Pipeline #Kafka (Apache Kafka) #Python #Spark (Apache Spark) #Anomaly Detection #Infrastructure as Code (IaC) #Lambda (AWS Lambda) #Scala #Monitoring #AWS Lambda #Batch #Data Processing #DevOps #ETL (Extract, Transform, Load) #Data Integrity #SQL (Structured Query Language) #Data Lake
Role description
We have an opportunity with our partner for an AWS Data Engineer in Fort Mill, SC; this is a hybrid role.
Job details:
Job Title: Sr. Data Engineer (Mid-Senior Level) - AWS & Streaming
Experience Level: 12+ Years
Location: Fort Mill, SC (3 days hybrid)
Note: Only W2 candidates submitted directly by the vendor are accepted; no layered (third-party) submissions will be accepted for this role.
Role Summary: We are seeking a Mid-Senior Data Engineer with strong expertise in AWS-based data engineering, real-time streaming technologies, and enterprise-grade data quality frameworks. The ideal candidate will design, build, and optimize scalable batch and streaming data pipelines, implement robust data validation and monitoring processes, and support mission-critical analytics platforms.
Key Responsibilities:
Develop and maintain scalable ETL/ELT pipelines using AWS Glue, PySpark, and Python (see the first sketch after this list)
Build event-driven workflows using AWS Lambda
Design and manage real-time streaming solutions using Kafka, KSQL, and Apache Flink
Implement and enforce comprehensive data quality frameworks, including validation, profiling, monitoring, and reconciliation (see the second sketch after this list)
Optimize data processing performance, scalability, reliability, and cost in cloud environments
Collaborate with cross-functional teams to deliver reliable, production-grade data platforms and ensure data integrity across the pipeline
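To make the ETL/ELT bullet above concrete, here is a minimal sketch of a Glue-style PySpark job. It is illustrative only: the bucket paths, column names, and job structure are hypothetical assumptions, not details taken from this posting.
```python
# Minimal sketch of an AWS Glue PySpark job: read raw JSON from S3, drop
# obviously bad rows, stamp a load date, and write partitioned Parquet.
# All paths and column names below are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events (hypothetical path) and apply a simple transform.
raw = spark.read.json("s3://example-raw-bucket/events/")
clean = (
    raw.filter(F.col("event_id").isNotNull())    # basic validation rule
       .withColumn("load_date", F.current_date())
)

# Write partitioned Parquet for downstream analytics (hypothetical path).
clean.write.mode("append").partitionBy("load_date").parquet(
    "s3://example-curated-bucket/events/"
)
job.commit()
```
In a real Glue deployment this script would be registered as a Glue job and parameterized through job arguments rather than hard-coded paths.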
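Likewise, for the data quality bullet, here is a rough sketch of the kind of validation, profiling, and reconciliation checks such a framework might run. The rules, paths, and column names are assumptions for illustration.
```python
# Sketch of simple data quality checks in PySpark: a validation rule, a
# lightweight profile, and a raw-vs-curated reconciliation. Paths and
# column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

curated = spark.read.parquet("s3://example-curated-bucket/events/")

# Validation: the required key column must never be null.
null_ids = curated.filter(F.col("event_id").isNull()).count()

# Profiling: basic shape and range stats for monitoring dashboards.
profile = curated.select(
    F.count("*").alias("rows"),
    F.countDistinct("event_id").alias("distinct_ids"),
    F.min("amount").alias("min_amount"),
    F.max("amount").alias("max_amount"),
).collect()[0]

# Reconciliation: curated row count should match the raw landing zone.
raw_rows = spark.read.json("s3://example-raw-bucket/events/").count()

if null_ids > 0 or profile["rows"] != raw_rows:
    raise ValueError(
        f"DQ failure: {null_ids} null event_ids, "
        f"{profile['rows']} curated rows vs {raw_rows} raw rows"
    )
```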
Required Skills:
Strong hands-on experience with Python and PySpark
Proven expertise in AWS Glue, Lambda, and other cloud-native data services
Solid experience with the Kafka ecosystem (topics, partitions, consumer groups, streaming patterns; a consumer sketch follows this list)
Demonstrated experience building and supporting data quality frameworks (validation rules, reconciliation checks, profiling, anomaly detection)
Strong understanding of distributed data processing and scalable architecture patterns
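To give a flavor of the Kafka skills listed above, the sketch below consumes a topic with the kafka-python client, making the topic, consumer-group, and partition concepts visible. The broker address, topic name, and group id are made-up examples.
```python
# Sketch of a Kafka consumer using kafka-python. Topic, broker, and
# group id are hypothetical; within a consumer group, the topic's
# partitions are divided among the group's members.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example-events",                      # topic (hypothetical)
    bootstrap_servers=["localhost:9092"],  # broker (hypothetical)
    group_id="example-etl-consumers",      # consumer group for offset tracking
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    # Each record exposes its topic, partition, and offset, which is how
    # work is split and progress is tracked across a consumer group.
    print(message.topic, message.partition, message.offset, message.value)
```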
Good-to-Have Skills:
Experience with Apache Flink for real-time stream processing and stateful computations (see the sketch after this list)
Knowledge of KSQL or other streaming SQL engines
Exposure to CI/CD pipelines, IaC (Terraform/CloudFormation), and DevOps practices
Familiarity with data lake/lakehouse architectures and table formats such as Iceberg, Delta, or Hudi
Experience working in enterprise or financial data environments
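For the Flink item above, here is a toy PyFlink DataStream job showing a keyed, stateful aggregation. It uses an in-memory collection in place of a real Kafka source; names and data are invented for illustration.
```python
# Toy PyFlink job: key a stream by user and keep a running sum, the
# simplest form of stateful stream computation. A production job would
# read from a Kafka connector instead of from_collection.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

events = env.from_collection([("user_a", 1), ("user_b", 3), ("user_a", 2)])

(events
    .key_by(lambda e: e[0])                    # partition state per user
    .reduce(lambda a, b: (a[0], a[1] + b[1]))  # running sum per key
    .print())

env.execute("toy_stateful_sum")
```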
Pay: $60.00 - $65.00 per hour
Application Question(s):
Mandatory:
• Mention your visa status and current location.
Experience:
Python: 10 years (Required)
AWS: 8 years (Required)
Work Location: Hybrid remote in Fort Mill, SC 29707






