

Qualis1 Inc.
Senior Data Engineer - AWS & Python
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer - AWS & Python in Malvern, PA, with a contract length of "unknown" and a pay rate of "unknown." Requires 8+ years of experience, including 5+ in Data Engineering, strong AWS and Python expertise, and knowledge of data security practices.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 2, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Malvern, PA
-
🧠 - Skills detailed
#Deployment #Libraries #Scala #SNS (Simple Notification Service) #Security #Data Security #Data Modeling #Data Engineering #Athena #DynamoDB #Kafka (Apache Kafka) #Monitoring #Python #S3 (Amazon Simple Storage Service) #Data Pipeline #Spark (Apache Spark) #Data Science #Compliance #GIT #SQL (Structured Query Language) #SQS (Simple Queue Service) #Data Transformations #Data Quality #AWS (Amazon Web Services) #Redshift #AWS S3 (Amazon Simple Storage Service) #Observability #Java #Datadog #Datasets #Cloud #Lambda (AWS Lambda) #Airflow #Data Processing #Data Lake #Terraform #PySpark #ETL (Extract, Transform, Load) #IAM (Identity and Access Management)
Role description
Role Name - Senior Data Engineer - AWS & Python
City - Malvern, PA
Role Description -
Build and maintain event-driven data pipelines using AWS services such as Kinesis, MSK/Kafka, Lambda, Step Functions, SQS/SNS, and Glue/EMR.
Develop ETL/ELT workflows using Python and PySpark, ensuring performance, scalability, and cost efficiency.
Implement and optimize Spark-based data transformations, partitioning strategies, and data processing frameworks.
Design and manage data lake and warehouse structures using S3, Glue Catalog, Athena, and/or Redshift.
Build streaming solutions with checkpointing, stateful transformations, idempotency, and schema evolution.
Ensure high standards of data quality, observability, monitoring, and alerting (CloudWatch, Datadog, etc.).
Implement data security best practices including IAM, encryption (KMS), networking, and governance.
Create reusable frameworks, internal libraries, and CI/CD pipelines for automated deployments.
Collaborate with data scientists, analysts, and business teams to deliver well-modeled, reliable datasets.
Lead design reviews, mentor junior engineers, and contribute to engineering best practices.
Required Qualifications
8+ years of overall experience.
5+ years of professional experience in Data Engineering.
Experience working with Java is an advantage.
Strong expertise in Python and PySpark for large-scale data processing.
Advanced hands-on experience with AWS (S3, Glue, EMR, Lambda, Step Functions, Kinesis/MSK, DynamoDB, Athena, Redshift).
Deep experience building event-driven and streaming data pipelines.
Strong SQL experience for analytical and ETL workloads.
Hands-on experience with workflow orchestration tools such as Airflow or Step Functions.
Experience with CI/CD, Git, and Infrastructure-as-Code (Terraform or CloudFormation).
Strong understanding of distributed systems, Spark performance tuning, data modeling, and cloud cost optimization.
Knowledge of data security, encryption, networking, and compliance best practices in cloud environments.






