

Ascendum Solutions
Data Scientist
Featured Role | Apply direct with Data Freelance Hub
This role is a Mid-Level Contract Data Scientist position, lasting for an unspecified duration, with a pay rate of "X" per hour. Candidates must have 2-5 years of experience in Mortgage, Lending, or Fintech, and be proficient in Azure Databricks, Python, and SQL.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
January 27, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
Irvine, CA
-
Skills detailed
#Python #Data Cleaning #ML (Machine Learning) #SQL (Structured Query Language) #Databricks #Libraries #Data Science #Monitoring #Regression #Azure Databricks #Deployment #Strategy #Data Wrangling #Documentation #Azure
Role description
W2 ONLY position; we cannot accept C2C or 1099 candidates for this role.
Data Scientist
Role Summary
We are looking for a Mid-Level Contract Data Scientist to join our team and support ongoing analytics initiatives. This is a tactical, execution-focused role. We need a hands-on contributor who is comfortable working with our specific tool stack and domain to deliver practical modeling solutions. You will be responsible for the "grunt work" of building, validating, and deploying models rather than high-level research or strategy.
The Ideal Candidate
• You have enough experience to work independently but are happy executing on defined tickets and objectives.
• You understand that the goal is a working model in production, not a perfect academic experiment.
• Target Experience Level: 2–5 Years.
Top 3 Requirements (Must-Haves)
1. Domain Familiarity: Experience in Mortgage, Lending, or Fintech is highly preferred. We need someone who understands the basic concepts of our data (loans, borrowers, credit risk) so they can start immediately.
2. Tool Stack: You must be proficient in Azure Databricks, Python, and SQL. You should be comfortable writing queries and running notebooks on Day 1 (see the notebook sketch after this list).
3. Applied Focus: Experience writing code that actually runs in a business environment (not just local notebooks).
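For illustration only, here is a minimal sketch of the kind of Day-1 Databricks notebook work the tool-stack requirement refers to. The table and column names (lending.loans, days_past_due, default_flag, etc.) are hypothetical placeholders, not this team's actual schema.

```python
# Minimal sketch of a Databricks-style notebook cell (illustrative only).
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

# Databricks notebooks provide `spark` automatically; getOrCreate() keeps the
# snippet self-contained if run elsewhere.
spark = SparkSession.builder.getOrCreate()

loans = spark.sql("""
    SELECT loan_id,
           credit_score,
           loan_amount,
           annual_income,
           interest_rate,
           CASE WHEN days_past_due > 90 THEN 1 ELSE 0 END AS default_flag
    FROM lending.loans
    WHERE origination_date >= '2023-01-01'
""")

# Pull a bounded sample into pandas for quick exploration and modeling.
loans_pdf = loans.limit(100_000).toPandas()
print(loans_pdf.describe())
```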
Key Responsibilities
• Model Building: Develop standard predictive models (Regression, XGBoost, etc.) to support business operations; a representative workflow sketch follows this list.
• Data Preparation: Handle data cleaning, feature engineering, and data wrangling within Azure Databricks.
• Deployment Support: Write clean, modular code that allows engineers to move your models into production.
• Validation: Run performance monitoring and validation tests on existing and new models.
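As a hedged illustration of the responsibilities above, the sketch below shows a standard prep-train-validate workflow using XGBoost and scikit-learn, continuing the hypothetical loan schema from the earlier sketch. Feature names, the target column, and the hyperparameters are assumptions for illustration, not project specifics.

```python
# Illustrative sketch of the data prep, model building, and validation work
# described above. Column names and hyperparameters are hypothetical.
import pandas as pd
import xgboost as xgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score


def build_features(df: pd.DataFrame) -> pd.DataFrame:
    """Basic cleaning and feature engineering on a loan-level frame."""
    out = df.copy()
    out["loan_to_income"] = out["loan_amount"] / out["annual_income"].clip(lower=1)
    out["credit_score"] = out["credit_score"].fillna(out["credit_score"].median())
    return out


def train_and_validate(df: pd.DataFrame, target: str = "default_flag") -> xgb.XGBClassifier:
    """Train an XGBoost classifier and report holdout AUC."""
    features = ["loan_to_income", "credit_score", "interest_rate"]
    X_train, X_val, y_train, y_val = train_test_split(
        df[features], df[target], test_size=0.2, random_state=42, stratify=df[target]
    )
    model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"Holdout AUC: {auc:.3f}")
    return model
```

Keeping prep and training in small, importable functions like these is one common way to make it straightforward for engineers to lift a model into a production pipeline, in line with the deployment-support responsibility.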
Technical Qualifications
• Experience: 2–5 years of professional Data Science experience.
• Core Skills: Strong Python and SQL skills are non-negotiable.
• Tech Stack: Experience with Azure Databricks and standard ML libraries (Scikit-Learn, XGBoost).
• Education: Bachelor's degree in a quantitative field or equivalent practical experience.
Deliverables
• Working model pipelines and notebooks.
• Clean, commented code commits.
• Documentation of model results and feature logic.






