

Motion Recruitment
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer focused on ML Ops and research integration, offering an 8-month remote contract at a competitive pay rate. Requires 5+ years in data engineering, strong Python and SQL skills, and experience with Databricks, DBT, and Power BI.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: April 3, 2026
Duration: More than 6 months
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #ETL (Extract, Transform, Load) #Automation #ML Ops (Machine Learning Operations) #Data Science #Databricks #SQL (Structured Query Language) #Data Transformations #Deployment #dbt (data build tool) #BI (Business Intelligence) #Scala #Monitoring #Data Engineering #Documentation #Microsoft Power BI #ML (Machine Learning) #Security #Datasets #Python #Semantic Models #Version Control
Role description
Our client, a data-driven organization, is currently looking for a Data Engineer (Data Science & Research Engineer, ML Ops & Research Integration) to join their team. This is a 100% remote contract position.
As the Data Science & Research Engineer, you will be responsible for developing and delivering data and analytics products that support predictive model integration and research workflows, leveraging tools like Databricks, DBT, and Power BI. This role focuses on building, validating, and preparing solutions for controlled go-live within a modern ML Ops environment.
Contract: 8 months (with possibility of extension)
Key Responsibilities:
Develop and enhance data/analytics products using Databricks and DBT to operationalize model inputs and outputs (see the illustrative sketch after this list).
Build Power BI semantic models, datasets, and dashboards aligned to approved deliverables, including performance and security best practices.
Deliver predictive model and integration enhancements using Python and SQL, supporting initial validation and testing.
Build and enhance ML Ops workflows including deployment automation, reproducible pipelines, and monitoring frameworks.
Develop APIs and connectors to integrate research platforms and datasets as reusable assets.
Partner with Data Science and platform teams to ensure solutions meet governance, security, and release readiness requirements.
Produce technical documentation including pipeline specs, interface definitions, and deployment notes to support controlled releases.
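For illustration only (this sketch is not from the client, and all table, column, and schema names are hypothetical): the Databricks/DBT responsibility above could translate into PySpark code along these lines, where a raw table is aggregated into a stable model-input table.

```python
# Illustrative sketch only: names and feature logic are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def build_model_inputs(source_table: str, target_table: str) -> None:
    """Read raw events and persist per-customer features so a downstream
    predictive model reads from a stable, reproducible input table."""
    raw = spark.read.table(source_table)

    features = (
        raw.groupBy("customer_id")
           .agg(
               F.count("*").alias("event_count"),
               F.max("event_ts").alias("last_event_ts"),
           )
    )

    # Full overwrite keeps the run reproducible: re-running the job
    # against the same inputs yields the same output table.
    features.write.mode("overwrite").saveAsTable(target_table)

build_model_inputs("research.raw_events", "research.model_inputs")
```

In a real engagement the equivalent logic would typically live in DBT models or version-controlled Databricks jobs, with monitoring added around the write step.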
Required Qualifications:
5+ years of experience delivering BI, analytics engineering, or data engineering solutions in production environments.
Strong Python and SQL skills with experience building scalable and testable data transformations (see the illustrative sketch after this list).
Hands-on experience with Databricks and Lakehouse concepts (Delta).
Experience with DBT including models, testing, and documentation.
Hands-on experience with Power BI including semantic models, datasets, and dashboards.
Experience with version control, code review, and CI/CD-aligned development practices.
Experience working with APIs and data services.
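Also purely illustrative, with hypothetical function and column names: a "testable data transformation" in the sense above typically keeps the logic in a pure function so it can be unit tested against a local SparkSession, for example:

```python
# Illustrative sketch only: a pure transformation plus a local unit test.
from pyspark.sql import DataFrame, SparkSession, functions as F
from pyspark.sql.window import Window

def deduplicate_latest(df: DataFrame, key: str, ts_col: str) -> DataFrame:
    """Keep only the newest row per key; a typical cleanup step before
    loading data into a DBT model or Power BI dataset."""
    w = Window.partitionBy(key).orderBy(F.col(ts_col).desc())
    return (
        df.withColumn("_rn", F.row_number().over(w))
          .filter(F.col("_rn") == 1)
          .drop("_rn")
    )

def test_deduplicate_latest():
    # A local SparkSession is enough; no cluster is needed for the test.
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    df = spark.createDataFrame([("a", 1), ("a", 2), ("b", 5)], ["id", "ts"])
    result = deduplicate_latest(df, "id", "ts")
    assert {(r.id, r.ts) for r in result.collect()} == {("a", 2), ("b", 5)}
```

Because the test runs with a plain pytest invocation and no cluster, the transformation stays cheap to validate in CI.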
Preferred Qualifications:
Experience with ML lifecycle, analytics governance, or model monitoring concepts.
Experience integrating with ML platforms such as DataRobot or similar environments.
Experience designing APIs and integration patterns in enterprise environments.
Familiarity with regulated or governed environments requiring documentation and change control.
Deliverables / Success Measures:
Delivered Databricks and DBT data products aligned to scope.
Power BI dashboards and semantic models with proper documentation and security.
Reusable APIs and connectors for research integrations.
Validated solutions ready for go-live.
Improved efficiency in model and research integration workflows.






