

FUSTIS LLC
Data Modeling Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Modeling Engineer in Reston, Virginia, hybrid (3 days onsite, 2 days offsite), with a contract length of unspecified duration and a pay rate of $75/hr W2. Key skills include AWS, Python, MLflow, and data pipeline expertise.
Country: United States
Currency: $ USD
Day rate: 600
Date: February 27, 2026
Duration: Unknown
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: Reston, VA
Skills detailed: #Databases #Data Science #Python #AWS (Amazon Web Services) #Model Validation #Airflow #Monitoring #Compliance #Data Lake #MLflow #NoSQL #SageMaker #Datasets #Deployment #Version Control #Scala #ML Ops (Machine Learning Operations) #ML (Machine Learning) #Data Modeling #Data Engineering #SQL (Structured Query Language) #GIT #Spark (Apache Spark) #Data Pipeline
Role description
Data/Modelling Engineer
Reston, Virginia, United States: Hybrid, 3 days onsite, 2 days offsite (onsite interview)
Pay rate: $75/hr W2
Overview:
We are seeking a highly skilled ML/Data Engineer to lead model development, experiment tracking, and end-to-end machine learning operations across Domino and Amazon SageMaker. This role will drive model lifecycle quality, governance alignment, and engineering excellence.
Responsibilities
• Own the monitoring, tracking, and maintenance of ML models across the Domino and SageMaker platforms.
• Implement MLflow tracking for parameters, metrics, artifact management, and end-to-end lineage.
• Build and maintain scalable data pipelines for training, validation, and inference processes.
• Develop custom evaluation metrics, explainability components, and fairness/bias testing frameworks.
• Package models for deployment and support model lifecycle transitions across environments.
• Collaborate with data scientists, engineering teams, and governance stakeholders to ensure compliance and operational readiness.
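As a rough illustration of the fairness/bias testing mentioned in the responsibilities above, here is a minimal demographic-parity check in plain Python. The function name, group labels, and threshold-free output are illustrative assumptions, not anything specified by this posting:

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    positives = defaultdict(int)
    totals = defaultdict(int)
    for pred, grp in zip(predictions, groups):
        totals[grp] += 1
        positives[grp] += int(pred)
    # Positive-prediction rate per group, then the spread between extremes.
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Hypothetical example: group "a" is flagged positive 2/3 of the time, group "b" 1/3.
gap = demographic_parity_gap([1, 1, 0, 1, 0, 0], ["a", "a", "a", "b", "b", "b"])
```

A real framework would compute several such metrics (equalized odds, calibration, etc.) and compare each against an agreed tolerance, but the core bookkeeping looks like this.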
Required Skills & Experience
• Strong experience with AWS and ML engineering
• Proficiency in Python and MLflow
• Hands-on expertise with the Domino and SageMaker SDKs
• Experience with feature engineering and scalable data pipelines
• Knowledge of model validation, explainability, and bias/fairness tooling
• Familiarity with Git-based workflows, version control, and MLOps practices
This role focuses on manipulating data in a software engineering capacity. Some of that data lives in relational systems, but it is increasingly moving toward NoSQL systems and data lakes.
Normalize databases and ensure the structure of the data meets the requirements of the applications accessing it. Construct datasets that are easy to analyze and that support company requirements.
Combine raw information from different sources into consistent, machine-readable formats.
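The source-consolidation task described above can be sketched as a small normalization step. The field names, source systems, and mapping dictionaries below are hypothetical examples, not schemas from this role:

```python
def normalize_record(raw, field_map):
    """Map one source's fields onto a canonical, machine-readable schema.

    field_map: {canonical_field: source_field}; missing source fields become None.
    """
    return {canonical: raw.get(src) for canonical, src in field_map.items()}

# Two hypothetical sources expose the same facts under different field names.
crm_row = {"cust_id": 17, "full_name": "Ada"}
billing_row = {"customerId": 17, "name": "Ada"}

crm_map = {"customer_id": "cust_id", "customer_name": "full_name"}
billing_map = {"customer_id": "customerId", "customer_name": "name"}

# After normalization, both rows share one schema and can be combined directly.
merged = [normalize_record(crm_row, crm_map), normalize_record(billing_row, billing_map)]
```

In practice this mapping would live in a pipeline stage (e.g. Spark or Airflow, both named in the posting) with validation on types and required fields, but the per-record idea is the same.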
Skills:
This IT role requires a broad set of technical skills, including deep knowledge of SQL, data modeling, and tools like Spark/Hive/Airflow.
Professional certification(s) desired. 15+ years of relevant experience.






