

Ampstek
Need USC/GC Only :: Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Santa Clara, CA, on a contract basis. Requires 8+ years of experience, solid AWS knowledge, and hands-on skills in AWS services, big data tech, Python, SQL, and CI/CD tools.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 11, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Santa Clara County, CA
-
🧠 - Skills detailed
#DevOps #Datasets #PostgreSQL #S3 (Amazon Simple Storage Service) #Data Quality #Cloud #IAM (Identity and Access Management) #Batch #AWS (Amazon Web Services) #MySQL #Data Management #PySpark #Data Lifecycle #Data Catalog #SQL (Structured Query Language) #SQL Server #Scala #Data Modeling #Python #DynamoDB #Metadata #Redshift #Security #Programming #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Big Data #NoSQL #RDBMS (Relational Database Management System) #Snowflake #Data Ingestion #Hadoop #Athena #Infrastructure as Code (IaC) #Storage #Data Lake #Distributed Computing #Terraform #GitHub #Data Warehouse #Data Engineering #Spark (Apache Spark) #Data Pipeline #Logging #Databases #ML (Machine Learning) #MongoDB #Compliance
Role description
Position: Data Engineer
Location: Santa Clara, CA
Duration: Contract
Job Description ::
Total Experience :: 8+ Years
Relevant Experience :: Must have a solid understanding of AWS and have worked for Amazon in a similar role.
Mandatory skills :: AWS Data Platform
• 7+ years of experience in data engineering or related fields.
• Strong hands-on experience with:
o AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, Athena.
o Big Data tech: Spark/PySpark, Hadoop, Hive.
o Programming: Python, SQL, Scala (optional).
o Databases: SQL Server, PostgreSQL, MySQL, NoSQL (DynamoDB, MongoDB).
• Experience with CI/CD, DevOps, and IaC tools.
• Strong understanding of data modeling, warehousing, and distributed computing.
• Data Pipeline & ETL Development
o Design, build, and maintain scalable ETL/ELT pipelines using AWS services (Glue, Lambda, EMR, Step Functions).
o Develop batch and real-time data ingestion processes from diverse sources (APIs, RDBMS, streaming platforms).
o Optimize data workflows for performance, scalability, and cost-efficiency.
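For candidates unfamiliar with Glue's programming model, here is a minimal sketch of the kind of PySpark Glue job this group describes. The database, table, and bucket names are hypothetical placeholders, not details from this posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap; JOB_NAME is supplied by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog (placeholder names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Light transform: rename and cast columns before landing in the curated zone.
mapped = ApplyMapping.apply(
    frame=orders,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("amount", "string", "amount", "double"),
    ],
)

# Write Parquet to S3 (placeholder bucket/path).
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)
job.commit()
```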
• Data Platform Engineering
o Architect and implement data lakes and data warehouses using S3, Redshift, Lake Formation, Athena.
o Manage data modeling (star/snowflake schemas) and design optimized storage layers.
o Implement data cataloging, metadata management, and data lifecycle policies.
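As a rough illustration of the warehousing side, the sketch below uses boto3 to run an Athena CTAS statement that materializes a star-schema fact table as Parquet. All database, table, bucket, and region names are assumptions for the example.

```python
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Hypothetical CTAS: build a fact table from a staging table joined to a
# date dimension, stored as Parquet in a placeholder S3 location.
ctas = """
CREATE TABLE analytics.fact_orders
WITH (format = 'PARQUET',
      external_location = 's3://example-curated-bucket/fact_orders/')
AS
SELECT o.order_id, o.customer_id, d.date_key, o.amount
FROM staging.raw_orders o
JOIN analytics.dim_date d ON d.date_value = o.order_date
"""

response = athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
print(response["QueryExecutionId"])
```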
• Big Data & Analytics
o Work with big data tools such as Spark, Hadoop, Hive, and PySpark.
o Support analytics and machine learning teams by providing high-quality, curated datasets.
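A minimal PySpark sketch of publishing a curated dataset for downstream analytics and ML consumers; the paths and column names are illustrative only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curated-orders").getOrCreate()

# Read the raw layer, then deduplicate, filter, and derive a partition column.
raw = spark.read.parquet("s3://example-raw-bucket/orders/")
curated = (
    raw.dropDuplicates(["order_id"])
    .filter(F.col("amount") > 0)
    .withColumn("order_date", F.to_date("order_ts"))
)

# Publish as partitioned Parquet so consumers can prune by date.
curated.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```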
• Cloud Infrastructure & DevOps
o Build CI/CD pipelines for data engineering (CodePipeline, CodeBuild, GitHub Actions).
o Write IaC using Terraform or AWS CloudFormation.
o Monitor, troubleshoot, and optimize workloads using CloudWatch and distributed logging.
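On the monitoring point, a small boto3 sketch that pushes a custom CloudWatch metric from a pipeline run, which an alarm or dashboard could then track; the namespace, dimension, and value are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Emit one custom metric data point for this (hypothetical) pipeline run.
cloudwatch.put_metric_data(
    Namespace="DataPipelines",
    MetricData=[
        {
            "MetricName": "RowsProcessed",
            "Dimensions": [{"Name": "Pipeline", "Value": "orders_etl"}],
            "Value": 125000.0,
            "Unit": "Count",
        }
    ],
)
```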
• Data Quality & Governance
o Implement data validation frameworks and automated quality checks.
o Ensure compliance with security, privacy, and governance standards (IAM, KMS, encryption).
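Finally, a sketch of the kind of automated data-quality gate the last group describes: a PySpark check that fails fast on null or duplicate keys. The path and column names are assumptions for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-curated-bucket/orders/")

# Two basic validation rules: the primary key must be non-null and unique.
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicates = df.count() - df.dropDuplicates(["order_id"]).count()

if null_keys or duplicates:
    raise ValueError(
        f"Data quality failure: {null_keys} null keys, {duplicates} duplicates"
    )
```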






