

Aptonet Inc
AWS Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer in Seattle, WA; the contract length and pay rate are unknown. It requires 5+ years of data engineering experience, proficiency in AWS services, SQL, and Python or Scala, and utility industry data expertise.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: April 29, 2026
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Yes
Location detailed: Seattle, WA
Skills detailed: #Databricks #Lambda (AWS Lambda) #Data Lifecycle #Security #S3 (Amazon Simple Storage Service) #Indexing #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #SQL Queries #Data Pipeline #Scala #Terraform #Data Engineering #Data Warehouse #DevOps #Data Architecture #Data Science #Data Storage #Batch #Data Cleansing #ML (Machine Learning) #Storage #IoT (Internet of Things) #Schema Design #Data Lake #Data Ingestion #Python #Redshift #BI (Business Intelligence) #Infrastructure as Code (IaC) #Databases #Computer Science #AWS (Amazon Web Services)
Role description
Title: AWS Data Engineer
Location: Seattle, WA
Role Summary
Seeking an AWS Data Engineer to design, build, and optimize large-scale data pipelines and analytics solutions on AWS. This role is responsible for the end-to-end data lifecycle, including data ingestion, transformation, storage, and delivery for analytics, machine learning, and operational systems.
Key Responsibilities
• Design, build, and optimize ETL/ELT workflows that ingest data from multiple sources, using AWS services such as S3, Redshift, Lake Formation, Glue, and Lambda.
• Implement data cleansing, enrichment, and standardization processes.
• Develop and automate batch and real-time streaming data pipelines using Kinesis, MSK, Lambda, Glue, EMR, and Step Functions.
• Ensure pipelines are optimized for scalability, performance, and fault tolerance.
• Optimize SQL queries, data models, and pipeline performance.
• Design and implement data architecture across data lakes, data warehouses, and lakehouses.
• Optimize data storage strategies, including partitioning, indexing, and schema design.
• Integrate data from diverse sources such as databases, APIs, IoT devices, and third-party systems.
• Collaborate with data scientists, analysts, and BI developers to deliver structured data.
• Document data assets and processes for discoverability.
• Train internal staff to maintain infrastructure and pipelines.
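To make the cleansing and standardization responsibility concrete, here is a minimal, illustrative Python sketch of the kind of row-level cleansing step the role describes. The field names (meter_id, reading_kwh, read_at) are hypothetical examples chosen to match the utility-data context; they are not part of the role description.

```python
from datetime import datetime, timezone

def standardize_reading(raw: dict):
    """Cleanse and standardize one raw meter reading; return None to reject it."""
    try:
        kwh = float(raw["reading_kwh"])
    except (KeyError, TypeError, ValueError):
        return None  # unparseable reading: reject the row
    if kwh < 0:
        return None  # negative consumption: reject the row
    return {
        "meter_id": str(raw["meter_id"]).strip().upper(),  # normalize identifiers
        "reading_kwh": round(kwh, 3),                      # fixed precision
        "read_at": datetime.fromisoformat(raw["read_at"])  # standardize on UTC
                           .astimezone(timezone.utc).isoformat(),
    }

rows = [
    {"meter_id": " m-001 ", "reading_kwh": "12.3456",
     "read_at": "2026-04-29T10:00:00+00:00"},
    {"meter_id": "m-002", "reading_kwh": "bad",
     "read_at": "2026-04-29T10:00:00+00:00"},
]
clean = [r for r in (standardize_reading(x) for x in rows) if r is not None]
```

In a real pipeline this logic would typically run inside a Glue job or Lambda handler rather than as a standalone script.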
Required Technical Skills
• Strong experience with AWS services: S3, Redshift, Lake Formation, Glue, Lambda, Kinesis, MSK, EMR, Step Functions.
• Proficiency in SQL and Python or Scala for data transformation and processing.
• Hands-on experience with Databricks on AWS.
• Working knowledge of DevOps and CI/CD practices.
• Experience designing data pipelines for batch and streaming architectures.
• Experience with utility industry data (meter data, customer data, grid/asset data, work management, outage data).
• Familiarity with IEC CIM standards and utility integration frameworks.
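The partitioning and schema-design points above often come down to laying out data-lake objects so queries scan only the partitions they need. As an illustration only, this sketch builds Hive-style date-partitioned S3 keys of the form commonly consumed by Glue and Athena; the bucket and prefix names are made up for the example.

```python
from datetime import date

def partition_key(prefix: str, event_date: date) -> str:
    # Hive-style dt=YYYY-MM-DD partitions let engines prune by date predicate.
    return f"{prefix}/dt={event_date.isoformat()}/part-0000.parquet"

key = partition_key("s3://example-lake/readings", date(2026, 4, 29))
# -> "s3://example-lake/readings/dt=2026-04-29/part-0000.parquet"
```

Choosing a low-cardinality partition column such as ingest date keeps the number of partitions manageable; partitioning on high-cardinality columns (e.g. individual meter IDs) tends to hurt performance.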
Preferred / Nice-to-Have Skills
• Experience with Infrastructure as Code (IaC) tools such as Terraform.
Qualifications & Experience
• Bachelor's degree in Computer Science, Data Engineering, or a related field.
• 5+ years of experience in data engineering roles.
• U.S. citizenship or green card required.
• No security clearance required.
• Standard work schedule (40 hours/week); no overtime required.