Tekskills Inc.

Quantitative Machine Learning Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Quantitative Machine Learning Engineer in New York City, NY, for 12+ months at an unspecified pay rate. Key skills include expert-level PyTorch, Python, C++, R, and Databricks, plus experience with PPNR regulatory modeling.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 25, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Statistics #PyTorch #Distributed Computing #Migration #Documentation #MLflow #R #Forecasting #ML (Machine Learning) #C++ #Databricks #Snowflake #Snowpark #Regression #ETL (Extract, Transform, Load) #Data Integration #Spark (Apache Spark) #Python #Compliance #Hadoop #Big Data
Role description
Job Title: Quantitative Machine Learning Engineer – PPNR Model Migration (PyTorch & Databricks)
Location: New York City, NY (Hybrid)
Duration: 12+ Months

Role Objective
We are looking for a Quantitative ML Engineer to lead the technical migration of complex PPNR (Pre-Provision Net Revenue) forecasting models from a Hadoop/C++/R environment to a modern Databricks and PyTorch ecosystem. You will be responsible for translating legacy mathematical logic into optimized PyTorch tensor operations while ensuring the strict numerical parity required for US regulatory compliance (CCAR/DFAST).

Key Responsibilities
• Model Translation: Reverse-engineer legacy C++ and R codebases to extract core mathematical logic, econometric formulas, and simulation parameters.
• PyTorch Implementation: Re-implement these models in PyTorch, using features such as torch.nn for modularity and custom autograd functions where necessary.
• Optimization: Refactor code to leverage Databricks’ distributed computing and PyTorch’s GPU/parallel processing capabilities to reduce model execution time.
• Data Integration: Build high-performance pipelines from Snowflake into Databricks using Spark and PyTorch DataLoaders.
• Parity & Validation: Conduct rigorous back-testing and sensitivity analysis to ensure the new PyTorch models yield results statistically identical to the legacy Hadoop outputs.
• Regulatory Documentation: Collaborate with Model Risk Management (MRM) to document the migration process, architectural changes, and validation results in compliance with SR 11-7 standards.

Required Technical Skills
• Frameworks: Expert-level PyTorch (specifically for non-computer-vision tasks such as time series, regression, or Monte Carlo simulations).
• Languages: High proficiency in Python and a strong ability to read and interpret C++ and R (especially statistical packages such as lme4 or forecast).
• Platforms: Hands-on experience with Databricks (MLflow, Spark) and Snowflake (Snowpark is a plus).
• Quantitative Finance: Deep understanding of statistical modeling, econometric forecasting, or financial risk management.
• Big Data: Experience migrating workloads out of Hadoop/Hive environments.

Preferred Qualifications
• Experience specifically with PPNR, CCAR, or DFAST regulatory modeling.
• Master's or PhD in a quantitative field (Statistics, Financial Engineering, Physics, or Math).
• Experience with TorchScript or ONNX for model productionisation.
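As a rough illustration of the parity-and-validation work described above, the sketch below compares a migrated PyTorch computation against legacy reference outputs. The model, coefficients, and tolerance are all hypothetical placeholders; real PPNR models are far more complex, and actual tolerances would be set with MRM.

```python
import torch

# Hypothetical legacy outputs (e.g., exported from the R/C++ system for one
# scenario). Values are illustrative only.
legacy_output = torch.tensor([102.5, 74.375, 66.25], dtype=torch.float64)

def ppnr_forecast(drivers: torch.Tensor, coef: torch.Tensor) -> torch.Tensor:
    # Stand-in for the migrated econometric logic: a simple linear projection.
    return drivers @ coef

# Illustrative macro drivers and fitted coefficients (placeholders).
drivers = torch.tensor([[1.0, 2.0],
                        [0.5, 1.5],
                        [2.0, 1.0]], dtype=torch.float64)
coef = torch.tensor([10.0, 46.25], dtype=torch.float64)

new_output = ppnr_forecast(drivers, coef)

# Parity gate: fail loudly if the migrated model drifts beyond tolerance.
# float64 matters here; float32 can break parity with legacy C++/R doubles.
max_abs_diff = (new_output - legacy_output).abs().max().item()
assert max_abs_diff < 1e-6, f"Parity breach: max abs diff = {max_abs_diff}"
```

In practice a check like this would run over the full scenario grid and feed the back-testing and sensitivity-analysis evidence documented for SR 11-7.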