

BURGEON IT SERVICES
AWS Data Engineer (Ex-Amazon Preferred) Santa Clara, CA Only on W2
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer (Ex-Amazon Preferred) in Santa Clara, CA, with a contract length of "unknown." The pay rate is "unknown." The role requires 10+ years of experience, expertise in AWS services and big data technologies, and prior Amazon experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Santa Clara, CA
-
🧠 - Skills detailed
#GitHub #SQL (Structured Query Language) #Data Governance #Security #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Data Processing #Data Quality #Scala #Terraform #Infrastructure as Code (IaC) #AWS (Amazon Web Services) #Snowflake #DynamoDB #Hadoop #Redshift #Spark (Apache Spark) #Data Pipeline #Data Warehouse #PySpark #Data Engineering #IAM (Identity and Access Management) #Monitoring #Programming #Big Data #Python #Batch #Data Ingestion #Data Lake #SQL Server #PostgreSQL #Data Modeling #MySQL #MongoDB #Lambda (AWS Lambda) #Cloud #Databases #DevOps #Athena
Role description
AWS Data Engineer (Ex-Amazon Preferred)
Location: Santa Clara, CA (Hybrid – Amazon Office, Redmond)
Experience: 10+ Years
Please share resumes with me at pranay@burgeonits.com
Role Overview
We are looking for an experienced AWS Data Engineer to design, build, and optimize scalable data pipelines, data lakes, and data warehouses on AWS.
Key Skills Required
• AWS Services: Glue, S3, Redshift, EMR, Lambda, Kinesis, Athena
• Big Data: Spark, PySpark, Hadoop, Hive
• Programming: Python, SQL (Scala is a plus)
• Databases: SQL Server, PostgreSQL, MySQL, DynamoDB, MongoDB
• Data Engineering: ETL/ELT pipelines, batch & real-time data processing
• Data Modeling: Star/Snowflake schema, data warehousing concepts
• DevOps & IaC: CI/CD, Terraform / CloudFormation, CodePipeline, GitHub Actions
• Monitoring & Security: CloudWatch, IAM, KMS, data governance & quality
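To illustrate the data-modeling skill listed above, here is a minimal star-schema sketch: one fact table joined to two dimension tables. It uses Python's stdlib sqlite3 purely for illustration (a real warehouse would be Redshift or Snowflake), and all table and column names are hypothetical, not part of this posting.

```python
import sqlite3

# Minimal star schema: a fact table keyed to two dimension tables.
# All table/column names are hypothetical, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, full_date TEXT);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE fact_sales (
    date_key    INTEGER REFERENCES dim_date(date_key),
    product_key INTEGER REFERENCES dim_product(product_key),
    amount      REAL
);
""")

cur.execute("INSERT INTO dim_date VALUES (20260321, '2026-03-21')")
cur.execute("INSERT INTO dim_product VALUES (1, 'widget')")
cur.execute("INSERT INTO fact_sales VALUES (20260321, 1, 9.99)")

# A typical analytical query joins the fact table back to its dimensions.
row = cur.execute("""
    SELECT d.full_date, p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d    ON f.date_key = d.date_key
    JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY d.full_date, p.name
""").fetchone()
print(row)  # ('2026-03-21', 'widget', 9.99)
```

The same shape carries over to a snowflake schema, which simply normalizes the dimension tables further.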
Responsibilities
• Build and maintain scalable ETL/ELT pipelines
• Develop real-time & batch data ingestion
• Design data lakes & warehouses on AWS
• Optimize performance, scalability, and cost
• Ensure data quality, security, and governance
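The pipeline and data-quality responsibilities above can be sketched as a tiny batch ETL in plain Python. This is a stdlib-only illustration, not the employer's stack: the inline CSV, field names, and quality rule are all hypothetical; a production pipeline would extract from S3 or a source database and load into Redshift or a data lake.

```python
import csv
import io

# Hypothetical raw extract; in practice this would come from S3 or a source DB.
raw = "order_id,amount\n1,10.5\n2,\n3,4.25\n"

def extract(text):
    """Extract: parse the raw CSV into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: enforce a simple data-quality rule and cast types."""
    clean = []
    for r in rows:
        if not r["amount"]:  # quality rule: amount must be present
            continue
        clean.append({"order_id": int(r["order_id"]),
                      "amount": float(r["amount"])})
    return clean

def load(rows):
    """Load: aggregate here; a real pipeline would write to a warehouse."""
    return sum(r["amount"] for r in rows)

total = load(transform(extract(raw)))
print(total)  # 14.75 (row 2 dropped by the quality check)
```

Real-time ingestion follows the same extract/transform/load shape, just driven by a stream (e.g., Kinesis) rather than a batch file.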
Mandatory
• Prior Amazon experience in a similar role






