

Data Engineer (ETL, AWS, DMS, Data Pipelines, Cloud)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (ETL, AWS, DMS, Data Pipelines, Cloud) in Princeton, NJ, on a contract basis. Requires 3-7 years of ETL and data engineering experience, proficiency in SQL/Python/Scala, and expertise in AWS services.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
June 5, 2025
Project duration
Unknown
Location type
On-site
Contract type
Unknown
Security clearance
Unknown
Location detailed
Princeton, NJ
Skills detailed
#Spark (Apache Spark) #Apache Spark #SQL (Structured Query Language) #Data Modeling #AWS DMS (AWS Database Migration Service) #NoSQL #Python #Terraform #ETL (Extract, Transform, Load) #Cloud #Redshift #Data Lake #S3 (Amazon Simple Storage Service) #DynamoDB #Kafka (Apache Kafka) #Infrastructure as Code (IaC) #Scala #PostgreSQL #MySQL #Data Manipulation #Computer Science #Data Engineering #Airflow #AWS (Amazon Web Services) #Data Pipeline #Lambda (AWS Lambda) #Databases #DMS (Data Migration Service)
Role description
Title: AWS Data Engineer (ETL, AWS, DMS, Data Pipelines, Cloud)
Location: Princeton, NJ (Onsite)
Job Type: Contract
Skills
ETL
AWS
DMS
Data pipelines
Cloud
Job Description:
• Bachelor's/Master's degree in Computer Science, Data Engineering, or a related field.
• 3-7 years of experience in ETL development and data engineering.
• Strong proficiency in SQL, Python, or Scala for data manipulation.
• Expertise in AWS cloud services (DMS, Glue, Redshift, S3, Lambda, Step Functions, EMR).
• Experience working with relational and NoSQL databases (PostgreSQL, MySQL, DynamoDB, etc.).
• Hands-on experience with Apache Spark, Kafka, or Airflow is a plus.
• Knowledge of data warehousing concepts and data lake architectures.
• Strong understanding of data modeling, performance tuning, and query optimization.
• Experience with CI/CD pipelines and Infrastructure as Code (Terraform, CloudFormation) is a plus.
Thanks
Aatmesh
aatmesh.singh@ampstek.com