

Insight Global
Data Scientist
Featured Role | Apply direct with Data Freelance Hub
This role is a Data Scientist position for a 6-month contract-to-hire in Portland, OR, requiring 3+ years of financial industry experience. Key skills include Databricks, Apache Spark, Python, SQL, and ML Ops expertise. Hybrid work arrangement: 1 day on-site, 4 days remote.
Country
United States
Currency
$ USD
-
Day rate
496
-
Date
February 11, 2026
Duration
More than 6 months
-
Location
Hybrid
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Portland, Oregon Metropolitan Area
-
Skills detailed
#Python #Big Data #Azure #Databricks #Statistics #Computer Science #DevOps #ML Ops (Machine Learning Operations) #Spark (Apache Spark) #Data Engineering #Apache Spark #Documentation #Customer Segmentation #Scala #Monitoring #Version Control #Datasets #GCP (Google Cloud Platform) #ML (Machine Learning) #SQL (Structured Query Language) #AWS (Amazon Web Services) #Data Pipeline #Data Science #Compliance #TensorFlow #Azure DevOps #Data Exploration #Cloud #MLflow #PyTorch
Role description
Position: Data Scientist
Location: Montgomery Park - Portland, OR (hybrid: 1 day a week on-site, 4 days remote)
Contract: 6-Month Contract-to-Hire
JOB DESCRIPTION
• Develop, deploy, and maintain scalable machine learning models and analytical workflows using Databricks, Apache Spark, and cloud-based data platforms.
• Partner with financial domain stakeholders to understand business challenges related to risk, compliance, fraud, lending, investments, and customer analytics.
• Perform hands-on data exploration, feature engineering, statistical analysis, and modeling using Python, SQL, and Spark.
• Collaborate with data engineering teams to build reliable data pipelines and ensure high-quality, production-ready datasets.
• Communicate complex technical findings to non-technical audiences; present actionable insights to business leaders.
• Apply deep financial services expertise to ensure models meet regulatory expectations (e.g., model documentation, audit, transparency, fairness).
• Optimize workflows for performance and cost efficiency within Databricks and cloud environments.
• Design experiments and evaluate model performance using best practices in ML Ops, version control, and reproducibility.
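The day-to-day loop the responsibilities above describe is roughly: engineer features from raw records, apply a model or rule, then evaluate against labels. A toy, stdlib-only sketch of that loop in Python (the transaction data, field names, and the z-score threshold are all hypothetical, and a rule stands in for a real ML model):

```python
from statistics import mean, stdev

# Hypothetical labeled transaction records.
transactions = [
    {"amount": 25.0, "fraud": False},
    {"amount": 30.0, "fraud": False},
    {"amount": 28.0, "fraud": False},
    {"amount": 500.0, "fraud": True},
    {"amount": 27.0, "fraud": False},
]

# Feature engineering: z-score of the transaction amount.
amounts = [t["amount"] for t in transactions]
mu, sigma = mean(amounts), stdev(amounts)
for t in transactions:
    t["z"] = (t["amount"] - mu) / sigma

# Naive rule-based "model": flag anything more than 1.5 standard
# deviations above the mean as suspected fraud (threshold is illustrative).
for t in transactions:
    t["flagged"] = t["z"] > 1.5

# Evaluation: precision and recall of the flag against the labels.
tp = sum(t["flagged"] and t["fraud"] for t in transactions)
fp = sum(t["flagged"] and not t["fraud"] for t in transactions)
fn = sum(not t["flagged"] and t["fraud"] for t in transactions)
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
print(f"precision={precision:.2f} recall={recall:.2f}")  # precision=1.00 recall=1.00
```

In the actual role, the same shape would run at scale on Spark DataFrames in Databricks, with the experiment and metrics tracked for reproducibility rather than printed.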
REQUIRED SKILLS AND EXPERIENCE
• Bachelor's or Master's degree in Data Science, Computer Science, Statistics, Math, Engineering, or related field.
• 3+ years of experience as a Data Scientist, preferably within banks, credit unions, fintech, or other financial institutions.
• Hands-on experience with Databricks, Apache Spark, and cloud ecosystems (Azure, AWS, or GCP).
• Strong proficiency in Python, SQL, and common ML frameworks (scikit-learn, PyTorch, TensorFlow, etc.).
• Deep understanding of financial data structures, regulatory requirements, and industry workflows.
• Experience building, deploying, and monitoring machine learning models in production environments.
• Ability to translate business problems into data-driven solutions with measurable impact.
NICE TO HAVE SKILLS AND EXPERIENCE
• Experience with ML Ops tools (MLflow, Azure DevOps, Databricks Repos, CI/CD pipelines).
• Familiarity with risk scoring, fraud detection, AML/KYC analytics, credit modeling, or customer segmentation.
• Knowledge of distributed systems, big data engineering, and scalable architecture design.
• Strong communication and stakeholder management skills in financial environments.





