

LTIMindtree
AWS Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer on a contract basis in Phoenix, AZ (Hybrid). Key skills include Apache Spark, Java, Python, AWS services, and Snowflake. AWS certifications are preferred. Experience in data engineering and ML/AI pipelines is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 24, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#DevOps #Java #Snowflake #Big Data #Data Quality #Python #Batch #Scala #ETL (Extract, Transform, Load) #AWS EMR (Amazon Elastic MapReduce) #DynamoDB #AWS Glue #Data Lake #SQL (Structured Query Language) #Spark (Apache Spark) #AWS S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #AWS Lambda #Data Science #Cloud #ML (Machine Learning) #Compliance #Apache Spark #Kubernetes #AI (Artificial Intelligence) #Docker #Lambda (AWS Lambda) #AWS (Amazon Web Services) #Jenkins #Data Engineering #Data Pipeline #S3 (Amazon Simple Storage Service) #Athena
Role description
🔹 Job Details
Job Title : AWS Data Engineer
Job Location : Phoenix, AZ (Hybrid)
Job Type : Contract
Client : LTIMindtree
🔹 Role Overview
We are seeking a Specialist - Data Engineering with deep expertise in AWS cloud services and big data technologies to lead the development of scalable data pipelines and architectures.
Skills Required:
Apache Spark
Java
Python
AWS EMR
AWS Glue
AWS Lambda
AWS S3
Scala
Snowflake
ANSI-SQL
AWS Step Functions
Python for Data
Nice-to-have skills (not mandatory):
AWS Athena, DynamoDB, Lake Formation
Docker, Kubernetes
Kafka, Flink
DevOps tools like Jenkins, AWS CodePipeline
Experience with ML/AI pipelines
AWS Certifications (Data Analytics Specialty or Solutions Architect Associate)
Responsibilities:
Design and build ETL/ELT pipelines for batch and streaming data
Architect scalable data lakes and warehouses on AWS
Collaborate with data scientists and analysts to deliver data models
Ensure data quality, governance, and compliance
Optimize data retrieval and reporting
Mentor junior engineers
Troubleshoot pipeline failures and improve systems
Support ML/AI feature engineering and inference pipelines
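For candidates unfamiliar with the ETL/ELT pattern named in the responsibilities above, it can be sketched in a few lines. This is an illustrative stand-in only: in the actual role the extract step would read from sources such as S3 or Kafka, the transform would run on Spark or Glue, and the load would write to Snowflake or a data lake; plain Python with made-up field names is used here for clarity.

```python
# Minimal sketch of a batch ETL pipeline: extract raw records,
# transform them (with a simple data-quality rule), then load the result.
# Field names ("id", "amount") are hypothetical examples.

def extract(rows):
    """Extract: yield raw records (stand-in for reading from S3/Kafka)."""
    yield from rows

def transform(record):
    """Transform: normalize fields; drop rows that fail a quality check."""
    if record.get("amount") is None:
        return None  # data-quality rule: reject records with no amount
    return {"id": record["id"], "amount": round(float(record["amount"]), 2)}

def load(rows):
    """Load: collect cleaned output (stand-in for writing to Snowflake/S3)."""
    return [r for r in (transform(r) for r in extract(rows)) if r is not None]

raw = [{"id": 1, "amount": "19.991"}, {"id": 2, "amount": None}]
print(load(raw))  # → [{'id': 1, 'amount': 19.99}] — the invalid row is dropped
```

The same extract/transform/load shape carries over directly to a PySpark or Glue job, where each stage becomes a DataFrame read, a set of column transformations, and a write to the target warehouse.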






