

Mastech Digital
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a contract basis in Seattle, WA, requiring hands-on experience with Databricks, DBT, Snowflake, and enterprise SaaS systems. Key skills include SQL, Python, and ETL/ELT pipeline development.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
May 1, 2026
Duration
Unknown
-
Location
On-site
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Seattle, WA
-
Skills detailed
#ETL (Extract, Transform, Load) #Spark (Apache Spark) #Databricks #dbt (data build tool) #Documentation #Compliance #PySpark #Data Science #Data Quality #SaaS (Software as a Service) #Snowflake #ML (Machine Learning) #SQL (Structured Query Language) #Workday #Python #Datasets #Data Engineering #AI (Artificial Intelligence)
Role description
Job Title: Data Engineer
Work Location: Seattle, WA
Job Type: Contract
Responsibilities
1. Ingest and process data from Salesforce (Investments), Dynamics 365 & Coupa (Finance), Workday (HR), Concur (Operations), and external program/partner data (e.g., Health, Education).
2. Build and maintain pipelines in Databricks, DBT, and Snowflake to deliver curated datasets for AI/ML.
3. Design simple, reusable data models to support training and inference.
4. Implement data quality tests, documentation, and lineage in DBT.
5. Ensure performance optimization, cost efficiency, and compliance with governance standards.
6. Collaborate with the Knowledge Management (AI/Data Science) team to provide feature-ready datasets.
Requirements
1. Hands-on experience with Databricks (PySpark/SQL, Unity Catalog), DBT (models, tests, docs), and Snowflake (SQL, performance tuning).
2. Strong background in ETL/ELT pipeline development.
3. Experience with enterprise SaaS systems (Salesforce, D365, Coupa, Workday, Concur).
4. Solid SQL and Python skills; knowledge of data quality frameworks.






