

Ampstek
AWS Data Engineer || Only USC and Green Card Required
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer in Santa Clara, CA, for 12+ months at a competitive pay rate. Requires 7+ years of data engineering experience, expertise in AWS services, big data tech, and strong programming skills in Python and SQL. Only US Citizens and Green Card holders are eligible.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 11, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Santa Clara, CA
-
🧠 - Skills detailed
#DevOps #Datasets #PostgreSQL #S3 (Amazon Simple Storage Service) #Data Quality #Cloud #IAM (Identity and Access Management) #Batch #AWS (Amazon Web Services) #MySQL #Data Management #PySpark #Data Lifecycle #Data Catalog #SQL (Structured Query Language) #SQL Server #Scala #Data Modeling #Python #DynamoDB #Metadata #Redshift #Security #Programming #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Big Data #NoSQL #RDBMS (Relational Database Management System) #Snowflake #Data Ingestion #Hadoop #Athena #Infrastructure as Code (IaC) #Storage #Data Lake #Distributed Computing #Terraform #GitHub #Data Warehouse #Data Engineering #Spark (Apache Spark) #Data Pipeline #Logging #Databases #ML (Machine Learning) #MongoDB #Compliance
Role description
Position: AWS Data Engineer
Location: Santa Clara, CA (Onsite)
Duration: 12+ Months
• Only US Citizens and Green Card holders are eligible
• Job Description:
• 7+ years of experience in data engineering or related fields.
• Strong hands-on experience with:
o AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, Athena.
o Big Data tech: Spark/PySpark, Hadoop, Hive.
o Programming: Python, SQL, Scala (optional).
o Databases: SQL Server, PostgreSQL, MySQL, NoSQL (DynamoDB, MongoDB).
• Experience with CI/CD, DevOps, and IaC tools.
• Strong understanding of data modeling, warehousing, and distributed computing.
• Data Pipeline & ETL Development
o Design, build, and maintain scalable ETL/ELT pipelines using AWS services (Glue, Lambda, EMR, Step Functions).
o Develop batch and real-time data ingestion processes from diverse sources (APIs, RDBMS, streaming platforms).
o Optimize data workflows for performance, scalability, and cost-efficiency.
• Data Platform Engineering
o Architect and implement data lakes and data warehouses using S3, Redshift, Lake Formation, Athena.
o Manage data modeling (star/snowflake schemas) and design optimized storage layers.
o Implement data cataloging, metadata management, and data lifecycle policies.
• Big Data & Analytics
o Work with big data tools such as Spark, Hadoop, Hive, and PySpark.
o Support analytics and machine learning teams by providing high-quality, curated datasets.
• Cloud Infrastructure & DevOps
o Build CI/CD pipelines for data engineering (CodePipeline, CodeBuild, GitHub Actions).
o Write IaC using Terraform or AWS CloudFormation.
o Monitor, troubleshoot, and optimize workloads using CloudWatch and distributed logging.
• Data Quality & Governance
o Implement data validation frameworks and automated quality checks.
o Ensure compliance with security, privacy, and governance standards (IAM, KMS, encryption).
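To give a flavor of the pipeline and data-quality responsibilities above, here is a minimal batch ETL sketch in plain Python. All record names and helpers are hypothetical illustrations; a production version of this role's work would use Glue/PySpark reading from S3 rather than in-memory lists.

```python
from dataclasses import dataclass

# Hypothetical raw order rows, standing in for data landed from an API or RDBMS.
RAW_ORDERS = [
    {"order_id": "1", "amount": "19.99", "region": "us-west"},
    {"order_id": "2", "amount": "5.00", "region": "us-east"},
    {"order_id": "3", "amount": "-1.00", "region": "us-west"},  # fails the quality check
]

@dataclass
class Order:
    order_id: int
    amount: float
    region: str

def extract(rows):
    """Extract step: in practice a Glue DynamicFrame read from S3 or a Kinesis consumer."""
    return list(rows)

def transform(rows):
    """Transform step: cast types and normalize fields."""
    return [Order(int(r["order_id"]), float(r["amount"]), r["region"].lower()) for r in rows]

def validate(orders):
    """Automated quality check: split records into (good, rejected) by simple rules."""
    good = [o for o in orders if o.amount >= 0]
    rejected = [o for o in orders if o.amount < 0]
    return good, rejected

def load(orders, sink):
    """Load step: bucket by region, analogous to partitioning Parquet in a data lake."""
    for o in orders:
        sink.setdefault(o.region, []).append(o)
    return sink

def run_pipeline(rows):
    """Run extract -> transform -> validate -> load; return the lake and rejected rows."""
    good, rejected = validate(transform(extract(rows)))
    return load(good, {}), rejected
```

In a real deployment the validation layer would typically live in a dedicated framework such as Deequ or Great Expectations, and rejected records would be routed to a quarantine location for review rather than dropped.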
Thank You
Aakash Dubey
Talent Acquisition Lead
Aakash.dubey@ampstek.com






