

AceStack
Data Engineer (Python + AWS) - Malvern, PA - Contract
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Python + AWS) in Malvern, PA, offering a 6-12 month contract at a competitive pay rate. Candidates must have 3-8+ years of experience, strong Python and AWS skills, and expertise in ETL/ELT processes.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: March 11, 2026
Duration: More than 6 months
Location: On-site
Contract: Fixed Term
Security: Unknown
Location detailed: Malvern, PA
Skills detailed: #S3 (Amazon Simple Storage Service) #Cloud #AWS (Amazon Web Services) #Version Control #Airflow #PySpark #AWS S3 (Amazon Simple Storage Service) #Data Catalog #SQL (Structured Query Language) #Data Processing #Scala #Python #Redshift #Programming #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Athena #Data Lake #Docker #Data Governance #AWS Lambda #Data Science #Data Engineering #Kafka (Apache Kafka) #Spark (Apache Spark) #Data Pipeline #AWS EMR (Amazon Elastic MapReduce) #AWS Glue #Kubernetes #Apache Spark #Databricks #GIT #Computer Science
Role description
Job Description - Data Engineer (Python + AWS)
Location: Malvern, PA (local to PA only)
Employment Type: 6-12 Month Contract
Job Summary
We are seeking a highly skilled Data Engineer with strong experience in Python and AWS to design, develop, and maintain scalable data pipelines and data infrastructure. The ideal candidate will be responsible for building robust ETL/ELT processes, optimizing data workflows, and enabling data-driven decision-making across the organization.
The candidate should have hands-on experience with AWS data services, Python-based data processing, and modern data engineering tools.
Required Skills
Technical Skills
• Strong programming experience in Python for data processing.
• Experience with AWS Cloud Services, such as:
  • AWS S3
  • AWS Glue
  • AWS Lambda
  • AWS Redshift
  • AWS Athena
  • AWS EMR
• Hands-on experience with ETL/ELT pipeline development.
• Experience working with large-scale data processing frameworks (Spark/PySpark preferred).
• Knowledge of SQL and database optimization.
• Experience with data warehousing concepts and dimensional modeling.
• Familiarity with workflow orchestration tools such as Airflow.
• Experience with CI/CD and version control tools (Git).
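As a rough illustration of the ETL/ELT work this role describes, here is a minimal Python sketch using only the standard library. The function name `run_etl`, the CSV columns, and the `orders` table are all hypothetical; in the actual role, the extract and load steps would target AWS services such as S3 and Redshift rather than an in-memory SQLite database.

```python
import csv
import io
import sqlite3

def run_etl(csv_text: str, conn: sqlite3.Connection) -> int:
    """Extract rows from CSV text, transform them, and load into SQLite.

    Hypothetical local stand-in for an S3 -> Redshift pipeline.
    Returns the number of rows loaded.
    """
    # Extract: parse the CSV payload into dicts.
    rows = list(csv.DictReader(io.StringIO(csv_text)))

    # Transform: normalize customer names and convert dollar amounts to cents.
    cleaned = [
        (r["order_id"],
         r["customer"].strip().lower(),
         int(round(float(r["amount"]) * 100)))
        for r in rows
    ]

    # Load: idempotent upsert keyed on order_id, so reruns don't duplicate rows.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id TEXT PRIMARY KEY, customer TEXT, amount_cents INTEGER)"
    )
    conn.executemany("INSERT OR REPLACE INTO orders VALUES (?, ?, ?)", cleaned)
    conn.commit()
    return len(cleaned)

if __name__ == "__main__":
    sample = "order_id,customer,amount\n1,Alice ,19.99\n2,BOB,5.00\n"
    conn = sqlite3.connect(":memory:")
    print(run_etl(sample, conn))  # prints 2
```

The upsert-by-primary-key pattern is one common way to keep a pipeline safe to re-run after partial failures, a property usually expected of the "robust" pipelines mentioned in the summary above.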
Preferred Skills
• Experience with Databricks or Apache Spark.
• Experience with data lake architecture.
• Knowledge of containerization (Docker, Kubernetes).
• Experience with streaming technologies like Kafka or Kinesis.
• Exposure to data governance and data catalog tools.
Qualifications
• Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
• 3-8+ years of experience in Data Engineering or related roles.
• Hands-on experience building cloud-based data platforms.






