Data Scientist Pricing Prediction (ML + API) Cloud

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Scientist specializing in Pricing Prediction (ML + API, Cloud), with a contract duration of 3-5 months, remote work, and an undisclosed pay rate. Candidates must have 5-6 years of ML experience, proficiency in Databricks, and a relevant degree.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
August 30, 2025
🕒 - Project duration
3 to 6 months
-
🏝️ - Location type
Remote
-
📄 - Contract type
Fixed Term
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Python #Forecasting #Statistics #Delta Lake #Azure #Computer Science #Data Science #Cloud #Documentation #Regression #API (Application Programming Interface) #FastAPI #Business Analysis #AWS (Amazon Web Services) #Databricks #Deployment #Flask #Datasets #Supervised Learning #Scala #Triggers #MLflow #Data Engineering #Neural Networks #GCP (Google Cloud Platform) #PySpark #ML (Machine Learning) #Spark (Apache Spark) #Pandas #A/B Testing
Role description
Data Scientist, Pricing Prediction (ML + API), Cloud | Remote | 12+ years

We are seeking an experienced Data Scientist to lead the development of a pricing prediction model that will power key business decisions across our digital channels. This role will focus on designing, training, and deploying machine learning models using Databricks as the primary platform and exposing the model as an API for web integration. The ideal candidate will also be responsible for articulating model behavior, accuracy metrics, and interpretability to both technical and non-technical stakeholders.

Contract (3-5 months)

Key Responsibilities:
• Build, train, and optimize supervised learning models (e.g., regression, ensemble models, or neural networks) for price prediction using historical and real-time datasets.
• Leverage Databricks (PySpark, MLflow, Delta Lake) for scalable model development and tracking.
• Expose the model as a RESTful API that integrates seamlessly into customer-facing websites or business systems.
• Design dashboards or documentation to explain model logic, feature importance, and prediction confidence intervals.
• Use explainability frameworks (e.g., SHAP, LIME) to make models transparent and defensible.
• Track model performance (e.g., RMSE, MAE, precision-recall) and implement retraining triggers or drift detection.
• Work closely with Data Engineers, Web Developers, Product Managers, and Business Analysts to align on business goals and deployment paths.
• Communicate model assumptions, results, and limitations to non-technical stakeholders.

Must-have qualifications:
• Bachelor's or Master's in Computer Science, Data Science, Statistics, or a related field.
• 5-6 years of hands-on experience building and deploying machine learning models in production.
• Proven experience with Databricks, Spark, and cloud-based data platforms (Azure, AWS, or GCP).
• Strong proficiency in Python, MLflow, Pandas, Scikit-learn, and API frameworks (e.g., FastAPI, Flask).
• Deep understanding of regression modeling, time-series forecasting, or pricing optimization algorithms.
• Experience with model interpretability and validation techniques.
• Experience with data versioning, CI/CD pipelines, and automated ML workflows.
• Familiarity with A/B testing for model rollout and business impact evaluation.
• Databricks Unity Catalog, Feature Store, or Delta Live Tables is a plus.
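The responsibilities above mention retraining triggers and drift detection alongside RMSE/MAE tracking. As a rough illustration of the kind of logic involved (not part of the posting, and with the window size and threshold purely hypothetical), a minimal rolling-RMSE retraining trigger might look like:

```python
import math
from collections import deque


class DriftMonitor:
    """Track rolling RMSE of price predictions and flag when retraining
    may be needed. Window size and threshold are illustrative choices."""

    def __init__(self, window: int = 100, rmse_threshold: float = 5.0):
        self.window = window
        self.rmse_threshold = rmse_threshold
        # Keep only the most recent `window` squared errors.
        self.sq_errors = deque(maxlen=window)

    def record(self, predicted: float, actual: float) -> None:
        # Store the squared error for the latest (prediction, outcome) pair.
        self.sq_errors.append((predicted - actual) ** 2)

    @property
    def rmse(self) -> float:
        # Rolling RMSE over the retained observations.
        if not self.sq_errors:
            return 0.0
        return math.sqrt(sum(self.sq_errors) / len(self.sq_errors))

    def should_retrain(self) -> bool:
        # Trigger only once the window is full, to avoid noisy early alarms.
        return len(self.sq_errors) == self.window and self.rmse > self.rmse_threshold


monitor = DriftMonitor(window=3, rmse_threshold=2.0)
for pred, actual in [(10.0, 10.5), (12.0, 11.0), (9.0, 14.0)]:
    monitor.record(pred, actual)
print(monitor.should_retrain())  # rolling RMSE ~2.96 exceeds 2.0, so True
```

In practice a check like this would run against live prediction logs, and a positive trigger would kick off a retraining job (e.g., a scheduled Databricks workflow) rather than retrain inline.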