

TestingXperts
AWS Data Engineer (Apache Spark, Iceberg, Redshift) – Remote | Independent Visa Only
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer with expertise in Apache Spark, Iceberg, and Redshift. The contract is remote, requiring strong skills in big data systems, AWS services, and data pipeline design. Experience with large-scale data platforms is preferred.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 14, 2026
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #AWS (Amazon Web Services) #Scala #Redshift #BigQuery #Big Data #Batch #Shell Scripting #SNS (Simple Notification Service) #dbt (data build tool) #Data Engineering #PySpark #SQS (Simple Queue Service) #Data Pipeline #Apache Iceberg #Cloud #Scripting #GCP (Google Cloud Platform) #Lambda (AWS Lambda) #Data Lake #Hadoop #Spark (Apache Spark) #Airflow #Python #Bash #AWS Glue #Apache Spark #Data Processing #EC2 #Amazon Redshift
Role description
Job Title: AWS Data Engineer
Location: Remote
Job Summary
We are looking for an experienced AWS Data Engineer with strong expertise in big data distributed systems and Spark-based data processing. The ideal candidate should have hands-on experience building scalable data pipelines on AWS and working with modern data lake technologies.
Must-Have Skills & Experience
• Strong experience with Big Data distributed systems such as:
  • Apache Iceberg
  • Hadoop
  • Hudi
  • Amazon Redshift
• Deep expertise in Apache Spark (batch and/or streaming)
• Hands-on experience with:
  • Shell scripting (Bash)
  • Python
  • PySpark
• Solid experience with AWS services, including:
  • AWS Glue
  • Lambda
  • Airflow
  • Lake Formation
  • EKS
  • EC2
  • SQS
  • SNS
• Strong understanding of data pipeline design, performance optimization, and troubleshooting
Preferred / Nice-to-Have Skills
• Experience with Google Cloud Platform (GCP)
• Hands-on exposure to BigQuery
• Experience working with dbt
• Familiarity with real-time / streaming data processing
Additional Notes
• This role requires hands-on engineering expertise; generic Data Engineer profiles will not be considered
• Candidates with experience in large-scale data platforms and cloud-native architectures are highly preferred






