Net2Source Inc.

Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Scientist in Overland Park, KS, requiring 12+ years of experience. Key skills include SQL, Databricks, Azure, and predictive modeling. Experience deploying machine learning models and strong data preprocessing expertise are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
March 13, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Overland Park, KS
-
🧠 - Skills detailed
#Data Cleaning #Documentation #Storage #Scala #SQL (Structured Query Language) #Azure cloud #"ETL (Extract, Transform, Load)" #Regression #Databricks #Classification #Data Engineering #Spark (Apache Spark) #Deployment #Data Lake #Data Quality #Pandas #Python #Delta Lake #Predictive Modeling #MLflow #Data Science #Azure #Datasets #Cloud #Monitoring #PySpark #Model Evaluation #ADF (Azure Data Factory) #Version Control #ML (Machine Learning) #Unit Testing #Batch
Role description
Role name: Sr Data Scientist
Work site: Overland Park, KS (Onsite)
Note: Minimum 12+ years of experience

Job Description: Senior Data Scientist
We are seeking an experienced Senior Data Scientist with deep expertise in traditional predictive modeling, including advanced data preprocessing, feature engineering, model development, inference pipelines, and model evaluation. The ideal candidate will have strong technical skills in SQL, Databricks, Azure, and Databricks MLOps, and a proven track record of delivering production-grade machine learning models that directly support business outcomes.

Key Responsibilities

Predictive Modeling
• Develop, validate, and deploy traditional predictive ML models (e.g., regression, classification, time-series, survival models, uplift models).
• Build robust feature engineering pipelines that improve model accuracy, stability, and scalability.
• Conduct model evaluation and diagnostics using advanced statistical and ML evaluation techniques (ROC/AUC, F1, calibration, lift charts, PSI/KS, cross-validation).
• Implement and optimize inference pipelines for batch and real-time scoring environments.

Data Preprocessing & Feature Engineering
• Perform end-to-end data cleaning, wrangling, sampling, and transformation to prepare high-quality training datasets.
• Build scalable data preparation workflows using Databricks and PySpark.
• Work with structured, semi-structured, and large-scale datasets in SQL, Delta Lake, and cloud storage environments.
• Partner with Data Engineering to define data quality checks, model inputs, and feature store integrations.

MLOps & Deployment
• Use Databricks MLflow, feature store, and model registry to track model experiments, versions, and deployments.
• Build automated CI/CD workflows for ML using Databricks MLOps frameworks.
• Collaborate with engineering teams to deploy, monitor, and maintain models in production.
• Conduct post-deployment evaluations including drift monitoring, score stability, and operational performance reviews.

Cross-functional Collaboration
• Partner with business stakeholders to define analytical problems, modeling objectives, and success criteria.
• Work closely with Data Engineers, Product Owners, and ML Engineers to ensure reliable and scalable delivery of ML solutions.
• Communicate insights, modeling decisions, and performance metrics to technical and non-technical audiences.

Required Skills & Qualifications
• 8+ years of hands-on experience as a Data Scientist or Machine Learning practitioner.
• Strong expertise in traditional predictive modeling using Python & Spark (pandas, scikit-learn, statsmodels).
• Advanced skills in data preprocessing, EDA, feature engineering, and model evaluation.
• Strong proficiency in SQL for large-scale analytics and data transformation.
• Hands-on experience with Databricks (Spark, Delta Lake, notebooks, feature engineering pipelines).
• Experience deploying and managing models with Databricks MLflow and MLOps workflows.
• Proficiency with the Azure cloud ecosystem (Azure Data Lake, Azure ML, ADF, Databricks on Azure).
• Solid understanding of software engineering best practices for ML: version control, unit testing, reproducibility, documentation.
• Experience working with large, complex, and high-volume datasets.
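For candidates unfamiliar with the PSI drift check named in the responsibilities above, here is a minimal pure-Python sketch of the idea: bin a baseline score distribution, bin the current scores into the same bins, and sum the weighted log-ratio of bin proportions. The binning scheme, the 1e-6 floor for empty bins, and the threshold rule of thumb are illustrative assumptions, not this team's actual pipeline.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (expected) and a
    current (actual) score distribution. Bins are equal-width over the
    baseline's range; empty bins are floored at 1e-6 to avoid log(0)."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def proportions(scores):
        counts = [0] * bins
        for s in scores:
            # clamp into [0, bins-1] so out-of-range scores land in edge bins
            i = min(max(int((s - lo) / width), 0), bins - 1)
            counts[i] += 1
        return [max(c / len(scores), 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    # each term is >= 0, since (a-e) and ln(a/e) always share a sign
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 drift.
baseline = [i / 100 for i in range(100)]
shifted = [min(x + 0.3, 1.0) for x in baseline]
print(psi(baseline, baseline))  # identical distributions -> 0.0
print(psi(baseline, shifted) > 0.25)  # clear shift -> True
```

In a Databricks setting this kind of check would typically run over scored batches and feed a monitoring dashboard rather than a print statement; the same formula applies per feature for input-drift monitoring.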