DKMRBH Inc

Data Engineer

โญ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer specializing in AWS and utility data (AMI, outage, grid) in Seattle, WA. Duration is 9+ months at a day rate of $600. Key skills include Databricks, SQL, Python/Scala, and experience with high-volume datasets.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
April 24, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
๐Ÿ“ - Location detailed
Washington, United States
-
🧠 - Skills detailed
#Batch #Data Pipeline #Data Science #AWS (Amazon Web Services) #Spark (Apache Spark) #Python #Data Engineering #SQL Queries #Data Lake #Scala #SQL (Structured Query Language) #IoT (Internet of Things) #Databricks #Normalization #Cloud #ETL (Extract, Transform, Load) #Datasets #Data Cleansing
Role description
AWS Data Engineer – Utility Data (AMI, Grid, Outage)

Role Details
• Location: Seattle, WA (local candidates only)
• Work Model: Onsite/Hybrid (as required quarterly)
• Duration: 9+ months (extension possible)
• Interview: 2 video rounds
• Client Domain: Public sector utility (power/energy)

If you've worked with utility data (AMI, outage, grid, CIS) and built pipelines on AWS + Databricks, this is your lane. This is not a generic data role: it sits inside a power utility environment dealing with real operational data at scale.

Who this role is for
• Engineers who've handled smart meter / AMI data, outage systems, or grid/asset data
• People who've built Databricks pipelines on AWS for high-volume, time-series datasets
• Candidates comfortable working in utility or energy environments (regulated, ops-heavy)

Role Snapshot
You'll be building and optimizing data pipelines for utility operations (meter data, outage events, asset data, and customer systems) inside a cloud-native AWS environment using Databricks.

Environment
• Public sector utility (power/energy)
• High-volume, time-series operational data
• Mix of IoT (meters), grid systems, and enterprise platforms
• Regulated, reliability-focused environment

What this role actually owns day-to-day
• Building Databricks pipelines ingesting AMI, outage, and asset data
• Working with streaming and batch data for near real-time analytics
• Structuring data for grid operations, reporting, and downstream analytics
• Optimizing pipelines that process large-scale energy datasets

Key Responsibilities
• Build and maintain ETL/ELT pipelines in Databricks for utility datasets (meter reads, outage events, asset telemetry)
• Ingest data from IoT devices, utility systems, APIs, and enterprise platforms
• Implement data cleansing, normalization, and enrichment for operational datasets
• Develop streaming and batch pipelines supporting real-time and historical analysis (see the sketch after the requirements list)
• Optimize Spark jobs, SQL queries, and data models for performance at scale
• Design data structures across data lake / lakehouse / warehouse layers
• Work with analysts and data scientists to deliver usable, analytics-ready datasets
• Document pipelines and support handoff to internal utility teams

Must-Have Requirements (Non-Negotiable)
• Hands-on experience with utility domain data: AMI / smart meter, outage, grid/asset, or CIS
• Strong experience with Databricks on AWS
• Solid coding in SQL + Python or Scala
• Experience building both batch and streaming pipelines
• Understanding of data lake / lakehouse architectures
• Familiarity with IEC CIM or utility data models
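To give candidates a concrete feel for the work, here is a minimal PySpark sketch of the kind of pipeline the responsibilities describe: streaming AMI meter reads into a Delta "bronze" table with basic cleansing, then a batch rollup for analytics. Every name in it (broker, topic, schema fields, paths, tables, thresholds) is an illustrative assumption, not the client's actual systems.

```python
# Hedged sketch: stream AMI (smart meter) reads into a Delta bronze table on
# Databricks. All names below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.appName("ami-ingest").getOrCreate()

# Assumed shape of a meter read; real AMI feeds vary by head-end system.
ami_schema = StructType([
    StructField("meter_id", StringType()),
    StructField("read_ts", TimestampType()),
    StructField("kwh", DoubleType()),
    StructField("quality_flag", StringType()),
])

# Read raw JSON meter reads from a Kafka topic (hypothetical broker/topic).
raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")
       .option("subscribe", "ami-reads")
       .load())

# Cleanse and normalize: parse the JSON payload, drop malformed rows, and
# apply a crude plausibility check on consumption values.
reads = (raw.select(F.from_json(F.col("value").cast("string"),
                                ami_schema).alias("r"))
         .select("r.*")
         .where(F.col("meter_id").isNotNull() & F.col("read_ts").isNotNull())
         .where(F.col("kwh").between(0, 10000)))

# Land in Delta, partitioned by read date for efficient time-series scans.
(reads.withColumn("read_date", F.to_date("read_ts"))
 .writeStream.format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/ami_bronze")
 .partitionBy("read_date")
 .outputMode("append")
 .toTable("utility.ami_bronze"))
```

A nightly batch step might then roll the bronze reads up into an analytics-ready daily table, e.g. `spark.sql("CREATE OR REPLACE TABLE utility.ami_daily AS SELECT meter_id, read_date, SUM(kwh) AS kwh_total FROM utility.ami_bronze GROUP BY meter_id, read_date")`.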