

Net2Source Inc.
Quantitative Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Quantitative ML Engineer/Data Scientist (Contract) in New York City, NY. Requires expertise in PyTorch, Python, C++, R, and experience with PPNR regulatory modeling. Knowledge of Databricks, Snowflake, and big data environments is essential.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
March 11, 2026
Duration
Unknown
-
Location
Hybrid
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
New York, NY
-
Skills detailed
#MLflow #Forecasting #Regression #Snowpark #Documentation #Python #C++ #Data Integration #ETL (Extract, Transform, Load) #R #Big Data #Statistics #Snowflake #Hadoop #Distributed Computing #Migration #Spark (Apache Spark) #Data Science #PyTorch #ML (Machine Learning) #Databricks #Compliance
Role description
Job Title: Quantitative ML Engineer/Data Scientist (PyTorch & PPNR Migration)
Location: New York City, NY (Onsite-Hybrid)
Term: Contract
Role Objective
We are looking for a Quantitative ML Engineer to lead the technical migration of complex PPNR (Pre-Provision Net Revenue) forecasting models from a Hadoop/C++/R environment to a modern Databricks and PyTorch ecosystem. You will be responsible for translating legacy mathematical logic into optimized PyTorch tensor operations while ensuring the strict numerical parity required for US regulatory compliance (CCAR/DFAST).
Key Responsibilities
• Model Translation: Reverse-engineer legacy C++ and R codebases to extract core mathematical logic, econometric formulas, and simulation parameters.
• PyTorch Implementation: Re-implement these models in PyTorch, utilizing advanced features like torch.nn for modularity and custom autograd functions where necessary.
• Optimization: Refactor code to leverage Databricks' distributed computing and PyTorch's GPU/parallel processing capabilities to reduce model execution time.
• Data Integration: Build high-performance pipelines from Snowflake into Databricks using Spark and PyTorch DataLoaders.
• Parity & Validation: Conduct rigorous back-testing and sensitivity analysis to ensure the new PyTorch models yield results statistically identical to the legacy Hadoop outputs.
• Regulatory Documentation: Collaborate with Model Risk Management (MRM) to document the migration process, architectural changes, and validation results in compliance with SR 11-7 standards.
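To make the translation-and-parity workflow above concrete, here is a minimal illustrative sketch (not the actual PPNR model; the module name, coefficient values, and tolerance are hypothetical). It wraps legacy regression coefficients, assumed to have been estimated in R, in a torch.nn module as non-trainable buffers, then asserts numerical parity against a hand-computed reference, mirroring the back-testing step:

```python
import torch
import torch.nn as nn

# Hypothetical linear revenue model: revenue = X @ b + intercept.
# The coefficients are re-used from the legacy R fit, not re-estimated,
# so the migrated model should reproduce the legacy outputs exactly.
class RevenueForecast(nn.Module):
    def __init__(self, coefs, intercept):
        super().__init__()
        # Buffers (not Parameters): saved with the model, never trained.
        self.register_buffer("coefs", torch.tensor(coefs, dtype=torch.float64))
        self.register_buffer("intercept", torch.tensor(intercept, dtype=torch.float64))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x @ self.coefs + self.intercept

# Parity check against a hand-computed legacy reference (illustrative values).
legacy_coefs = [0.5, -1.25]
model = RevenueForecast(legacy_coefs, 2.0)
x = torch.tensor([[1.0, 2.0], [3.0, 4.0]], dtype=torch.float64)
pred = model(x)
expected = torch.tensor([0.0, -1.5], dtype=torch.float64)
assert torch.allclose(pred, expected, atol=1e-12)
```

Using float64 throughout (rather than PyTorch's default float32) is the kind of detail a real migration would pin down early, since regulatory parity tolerances are typically far tighter than single precision allows.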
Required Technical Skills
• Frameworks: Expert-level PyTorch (specifically for non-computer-vision tasks like time-series, regression, or Monte Carlo simulations).
• Languages: High proficiency in Python and a strong ability to read and interpret C++ and R (specifically statistical packages like lme4 or forecast).
• Platforms: Hands-on experience with Databricks (MLflow, Spark) and Snowflake (Snowpark is a plus).
• Quantitative Finance: Deep understanding of statistical modeling, econometric forecasting, or financial risk management.
• Big Data: Experience migrating workloads out of Hadoop/Hive environments.
Preferred Qualifications
• Experience specifically with PPNR, CCAR, or DFAST regulatory modeling.
• Master's or PhD in a quantitative field (Statistics, Financial Engineering, Physics, or Math).
• Experience with TorchScript or ONNX for model productionisation.
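As a sketch of the last point, a migrated model can be compiled and serialized with TorchScript so it can later be loaded in a Python-free C++ runtime. The module below is a stand-in, not a real PPNR component:

```python
import torch
import torch.nn as nn

class TinyForecast(nn.Module):
    """Illustrative stand-in for a migrated forecasting component."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return 2.0 * x + 1.0

# torch.jit.script compiles the module to TorchScript; the saved
# artifact can be loaded via torch.jit.load (or libtorch in C++).
scripted = torch.jit.script(TinyForecast())
scripted.save("tiny_forecast.pt")

loaded = torch.jit.load("tiny_forecast.pt")
out = loaded(torch.tensor([1.0, 2.0]))
assert torch.equal(out, torch.tensor([3.0, 5.0]))
```

ONNX export (torch.onnx.export) is the alternative route when the production runtime is not libtorch-based.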






