

DKMRBH Inc
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer specializing in AWS and utility data (AMI, outage, grid) in Seattle, WA. Duration is 9+ months at a day rate of $600. Key skills include Databricks, SQL, Python/Scala, and experience with high-volume, time-series datasets.
🌍 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
April 24, 2026
⏳ - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Washington, United States
-
🧠 - Skills detailed
#Batch #Data Pipeline #Data Science #AWS (Amazon Web Services) #Spark (Apache Spark) #Python #Data Engineering #SQL Queries #Data Lake #Scala #SQL (Structured Query Language) #IoT (Internet of Things) #Databricks #Normalization #Cloud #ETL (Extract, Transform, Load) #Datasets #Data Cleansing
Role description
AWS Data Engineer - Utility Data (AMI, Grid, Outage)
Role Details
• Location: Seattle, WA (local candidates only)
• Work Model: Onsite/Hybrid (as required, quarterly)
• Duration: 9+ months (extension possible)
• Interview: 2 video rounds
• Client Domain: Public sector utility (power/energy)
If you've worked with utility data (AMI, outage, grid, CIS) and built pipelines on AWS + Databricks, this is your lane.
This is not a generic data role: it sits inside a power utility environment dealing with real operational data at scale.
Who this role is for
• Engineers who've handled smart meter / AMI data, outage systems, or grid/asset data
• People who've built Databricks pipelines on AWS for high-volume, time-series datasets
• Candidates comfortable working in utility or energy environments (regulated, ops-heavy)
Role Snapshot
You'll be building and optimizing data pipelines for utility operations (meter data, outage events, asset data, and customer systems) inside a cloud-native AWS environment using Databricks.
Environment
• Public sector utility (power/energy)
• High-volume, time-series operational data
• Mix of IoT (meters), grid systems, and enterprise platforms
• Regulated, reliability-focused environment
What this role actually owns day-to-day
• Building Databricks pipelines ingesting AMI, outage, and asset data
• Working with streaming + batch data for near real-time analytics
• Structuring data for grid operations, reporting, and downstream analytics
• Optimizing pipelines that process large-scale energy datasets
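To make the streaming/batch analytics work above concrete: a common pattern for AMI time-series data is aggregating raw reads into fixed (tumbling) time windows. The sketch below uses plain Python for brevity (a real Databricks pipeline would express the same logic with Spark Structured Streaming), and the field names `meter_id`, `ts`, and `kwh` are hypothetical.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def tumbling_window_totals(readings, window_minutes=15):
    """Sum meter readings into fixed windows keyed by (meter_id, window_start).

    `readings` is an iterable of dicts with hypothetical fields:
    meter_id, ts (ISO 8601 string), kwh.
    """
    totals = defaultdict(float)
    for r in readings:
        ts = datetime.fromisoformat(r["ts"])
        # Floor the timestamp to the start of its tumbling window.
        window_start = ts - timedelta(
            minutes=ts.minute % window_minutes,
            seconds=ts.second,
            microseconds=ts.microsecond,
        )
        totals[(r["meter_id"], window_start)] += r["kwh"]
    return dict(totals)

readings = [
    {"meter_id": "M1", "ts": "2026-04-24T10:03:00", "kwh": 1.2},
    {"meter_id": "M1", "ts": "2026-04-24T10:14:00", "kwh": 0.8},
    {"meter_id": "M1", "ts": "2026-04-24T10:16:00", "kwh": 0.5},
]
totals = tumbling_window_totals(readings)
```

The same windowing applied incrementally over a stream (rather than a finished batch) is what "near real-time analytics" typically means in this context.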
Key Responsibilities
• Build and maintain ETL/ELT pipelines in Databricks for utility datasets (meter reads, outage events, asset telemetry)
• Ingest data from IoT devices, utility systems, APIs, and enterprise platforms
• Implement data cleansing, normalization, and enrichment for operational datasets
• Develop streaming and batch pipelines supporting real-time and historical analysis
• Optimize Spark jobs, SQL queries, and data models for performance at scale
• Design data structures across data lake / lakehouse / warehouse layers
• Work with analysts and data scientists to deliver usable, analytics-ready datasets
• Document pipelines and support handoff to internal utility teams
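As a hedged sketch of the cleansing/normalization responsibility above: AMI networks routinely deliver duplicate transmissions, mixed units, and implausible values, so a first-pass cleanse usually deduplicates, normalizes units, and quarantines bad reads. Plain Python below for illustration (in Databricks this would be a PySpark job); all field names are hypothetical.

```python
def cleanse_meter_reads(raw_reads):
    """Deduplicate and normalize raw AMI meter reads.

    Each read is a dict with hypothetical fields: meter_id, ts, value, unit.
    Keeps the first read per (meter_id, ts), converts Wh to kWh so downstream
    consumers see one unit, and drops negative (implausible) values.
    """
    seen = set()
    clean = []
    for r in raw_reads:
        key = (r["meter_id"], r["ts"])
        if key in seen:
            continue  # duplicate transmission from the meter network
        seen.add(key)
        value, unit = r["value"], r["unit"].lower()
        if unit == "wh":
            value = value / 1000.0  # normalize Wh -> kWh
        if value < 0:
            continue  # implausible; in practice route to a quarantine table
        clean.append({"meter_id": r["meter_id"], "ts": r["ts"], "kwh": value})
    return clean

raw = [
    {"meter_id": "M1", "ts": "2026-04-24T10:00:00", "value": 1500.0, "unit": "Wh"},
    {"meter_id": "M1", "ts": "2026-04-24T10:00:00", "value": 1500.0, "unit": "Wh"},  # duplicate
    {"meter_id": "M2", "ts": "2026-04-24T10:00:00", "value": -3.0, "unit": "kWh"},  # implausible
]
clean = cleanse_meter_reads(raw)
```

Enrichment (joining reads to asset or customer records) would follow the same shape as a second pass over the cleaned output.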
Must-Have Requirements (Non-Negotiable)
• Hands-on experience with utility domain data: AMI / smart meter, outage, grid/asset, or CIS
• Strong experience with Databricks on AWS
• Solid coding in SQL + Python or Scala
• Experience building both batch and streaming pipelines
• Understanding of data lake / lakehouse architectures
• Familiarity with IEC CIM or utility data models






