

Ampstek
Data Engineer - Only W2
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Redmond, WA, with a contract length of "unknown" and a pay rate of "unknown." Requires 7+ years of experience, strong AWS and Big Data skills, and expertise in ETL/ELT pipeline development.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: March 11, 2026
Duration: Unknown
Location: On-site
Contract: W2 Contractor
Security: Unknown
Location detailed: Redmond, WA
Skills detailed:
#DevOps #Datasets #PostgreSQL #S3 (Amazon Simple Storage Service) #Data Quality #Cloud #IAM (Identity and Access Management) #Batch #AWS (Amazon Web Services) #MySQL #Data Management #PySpark #Data Lifecycle #Data Catalog #SQL (Structured Query Language) #SQL Server #Scala #Data Modeling #Python #DynamoDB #Metadata #Redshift #Security #Programming #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Big Data #NoSQL #RDBMS (Relational Database Management System) #Snowflake #Data Ingestion #Hadoop #Athena #Infrastructure as Code (IaC) #Storage #Data Lake #Distributed Computing #Terraform #GitHub #Data Warehouse #Data Engineering #Spark (Apache Spark) #Data Pipeline #Logging #Databases #ML (Machine Learning) #MongoDB #Compliance
Role description
Job Title: Data Engineer
Location: Redmond, WA
7+ years of experience in data engineering or related fields.
Strong hands-on experience with:
AWS services: Glue, S3, Redshift, EMR, Lambda, Kinesis, Athena.
Big Data tech: Spark/PySpark, Hadoop, Hive.
Programming: Python, SQL, Scala (optional).
Databases: SQL Server, PostgreSQL, MySQL, NoSQL (DynamoDB, MongoDB).
Experience with CI/CD, DevOps, and IaC tools.
Strong understanding of data modeling, warehousing, and distributed computing.
Data Pipeline & ETL Development
Design, build, and maintain scalable ETL/ELT pipelines using AWS services (Glue, Lambda, EMR, Step Functions); a minimal orchestration sketch follows this list.
Develop batch and real-time data ingestion processes from diverse sources (APIs, RDBMS, streaming platforms).
Optimize data workflows for performance, scalability, and cost-efficiency.
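As a rough illustration of the pipeline work above, here is a minimal sketch of a Lambda-style handler that triggers a pre-existing AWS Glue ETL job with boto3. The job name, bucket paths, and job arguments are illustrative placeholders, not details from this posting.

# Minimal sketch: a Lambda-style handler that starts a run of an existing
# AWS Glue ETL job via boto3. Job name and S3 paths are placeholders.
import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Start a run of a pre-defined Glue job; Glue provisions the Spark capacity.
    response = glue.start_job_run(
        JobName="raw-to-curated-etl",  # hypothetical job name
        Arguments={
            "--source_path": "s3://example-raw-bucket/orders/",      # placeholder
            "--target_path": "s3://example-curated-bucket/orders/",  # placeholder
        },
    )
    return {"JobRunId": response["JobRunId"]}

In practice the same call is often wrapped in a Step Functions state so retries, timeouts, and failure notifications are handled declaratively rather than in handler code.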
Data Platform Engineering
Architect and implement data lakes and data warehouses using S3, Redshift, Lake Formation, Athena.
Manage data modeling (star/snowflake schemas) and design optimized storage layers.
Implement data cataloging, metadata management, and data lifecycle policies; a minimal cataloging sketch follows.
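A minimal sketch of the cataloging bullet above: registering a curated S3 dataset in the AWS Glue Data Catalog with boto3 so it is queryable from Athena or Redshift Spectrum. The database name, table schema, and S3 location are assumed placeholders.

# Minimal sketch: register a Parquet dataset on S3 as an external table in the
# Glue Data Catalog. Names, columns, and paths are illustrative placeholders.
import boto3

glue = boto3.client("glue")

glue.create_table(
    DatabaseName="curated",  # hypothetical catalog database
    TableInput={
        "Name": "orders",
        "TableType": "EXTERNAL_TABLE",
        "Parameters": {"classification": "parquet"},
        "PartitionKeys": [{"Name": "ingest_date", "Type": "string"}],
        "StorageDescriptor": {
            "Location": "s3://example-curated-bucket/orders/",
            "InputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat",
            "OutputFormat": "org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat",
            "SerdeInfo": {
                "SerializationLibrary": "org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe"
            },
            "Columns": [
                {"Name": "order_id", "Type": "string"},
                {"Name": "amount", "Type": "double"},
            ],
        },
    },
)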
Big Data & Analytics
Work with big data tools such as Spark, Hadoop, Hive, and PySpark.
Support analytics and machine learning teams by providing high-quality, curated datasets; a minimal curation sketch follows.
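A minimal PySpark sketch of producing a curated, partitioned dataset for downstream analytics and ML consumers. The input path, column names, and partitioning scheme are illustrative assumptions, not requirements from this posting.

# Minimal sketch: read raw JSON events, apply basic cleanup, and write a
# curated Parquet dataset partitioned by ingest date. Paths are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate-orders").getOrCreate()

raw = spark.read.json("s3://example-raw-bucket/orders/")

curated = (
    raw.dropDuplicates(["order_id"])                       # drop replayed events
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("ingest_date", F.to_date("order_ts"))
       .filter(F.col("amount") >= 0)                       # basic sanity filter
)

(curated.write
        .mode("overwrite")
        .partitionBy("ingest_date")
        .parquet("s3://example-curated-bucket/orders/"))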
Cloud Infrastructure & DevOps
Build CI/CD pipelines for data engineering (CodePipeline, CodeBuild, GitHub Actions).
Write IaC using Terraform or AWS CloudFormation.
Monitor, troubleshoot, and optimize workloads using CloudWatch and distributed logging; a minimal monitoring sketch follows.
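For the monitoring bullet, one common pattern is a custom CloudWatch metric per pipeline run plus an alarm on failures. Below is a minimal boto3 sketch, assuming an SNS topic for alerts already exists; the namespace, metric, pipeline name, and topic ARN are placeholders.

# Minimal sketch: publish a per-run pipeline metric and alarm on failures.
import boto3

cloudwatch = boto3.client("cloudwatch")

# Emit one data point per pipeline run (0 = success, 1 = failure).
cloudwatch.put_metric_data(
    Namespace="DataPlatform/Pipelines",  # hypothetical custom namespace
    MetricData=[{
        "MetricName": "FailedRuns",
        "Dimensions": [{"Name": "Pipeline", "Value": "orders-etl"}],
        "Value": 0,
        "Unit": "Count",
    }],
)

# Alarm if any failure is recorded within a 5-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="orders-etl-failures",
    Namespace="DataPlatform/Pipelines",
    MetricName="FailedRuns",
    Dimensions=[{"Name": "Pipeline", "Value": "orders-etl"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-west-2:123456789012:data-alerts"],  # placeholder ARN
)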
Data Quality & Governance
Implement data validation frameworks and automated quality checks; a minimal validation sketch appears below.
Ensure compliance with security, privacy, and governance standards (IAM, KMS, encryption).
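A minimal sketch of an automated quality gate in PySpark, not a specific framework required by this role; the dataset path, key column, and rules are illustrative assumptions.

# Minimal sketch: post-load data-quality checks that fail the pipeline when
# basic expectations are violated. Path and rules are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3://example-curated-bucket/orders/")

failures = []

# Expect at least one row from the latest load.
if df.count() == 0:
    failures.append("dataset is empty")

# Primary key must be non-null and unique.
if df.filter(F.col("order_id").isNull()).count() > 0:
    failures.append("null order_id values")
if df.count() != df.select("order_id").distinct().count():
    failures.append("duplicate order_id values")

# Amounts must be non-negative.
if df.filter(F.col("amount") < 0).count() > 0:
    failures.append("negative amounts")

if failures:
    raise RuntimeError("Data quality checks failed: " + "; ".join(failures))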
Thanks
Andrew
ampstekandrew@gmail.com






