

Stellar Consulting Solutions, LLC
Data Scientist
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Scientist position for a 6+ month remote contract at $60/hr, requiring expertise in Python, SQL, and PySpark, with a focus on retail and inventory management. Experience with Databricks, CI/CD, and machine learning is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
November 7, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Spark (Apache Spark) #API (Application Programming Interface) #Version Control #Azure #SQL (Structured Query Language) #Data Science #ML (Machine Learning) #NumPy #AWS (Amazon Web Services) #Programming #TensorFlow #Agile #SciPy #PyTorch #Deployment #PySpark #Python #Scala #Documentation #Databricks #Cloud #GitHub #Libraries #GCP (Google Cloud Platform) #pydantic
Role description
Job Title: Data Scientist
Location: Remote
Duration: 6+ Months Contract
Pay Rate: $60/hr (No 3rd Parties)
We are seeking a highly skilled and motivated Data Scientist/Engineer to work on an ongoing inventory optimization initiative. This role is ideal for someone with a strong foundation in data science/engineering and a passion for building scalable, production-grade solutions that drive business impact. You will work on high-impact projects in domains such as retail, inventory management, and operations research, collaborating with data scientists, engineers, and business stakeholders.
Key Responsibilities
Develop, test, and deploy data science solutions using Python, SQL, and PySpark on enterprise platforms such as Databricks.
Collaborate with data scientists to translate models into production-ready code.
Implement CI/CD pipelines and manage code repositories using GitHub Enterprise.
Design and optimize mathematical programming and machine learning models for real-world applications.
Work independently to break down complex problems into actionable development tasks.
Ensure code quality, scalability, and maintainability in a production environment.
Contribute to sprint planning, documentation, and cross-functional collaboration.
Required Qualifications
Hands-on experience with enterprise data science solutions, preferably in retail, inventory management, or operations research.
Proficiency in Python, SQL, and PySpark.
Experience with Databricks or similar enterprise cloud environments.
Strong understanding of CI/CD principles and version control through GitHub Enterprise.
Experience defining and validating service contracts using Pydantic for data validation and API schema enforcement.
Experience with production-level coding and deployment practices.
Familiarity with basic machine learning techniques and mathematical optimization methods.
Proficient in data science libraries and ML pipelines such as NumPy, SciPy, scikit-learn, MLlib, PyTorch, and TensorFlow.
Self-starter with an ownership mindset and the ability to work with minimal supervision.
Preferred Qualifications
Experience in Agile development environments.
Exposure to cloud platforms (e.g., Azure, AWS, GCP).
Familiarity with APIs, including how RESTful services are used in data science workflows.
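To illustrate the "service contracts using Pydantic" requirement above, here is a minimal sketch of what such a contract might look like. The model name and fields (`InventoryRecord`, `sku`, `on_hand`, etc.) are hypothetical examples, not part of the role's actual codebase:

```python
from pydantic import BaseModel, Field, ValidationError

class InventoryRecord(BaseModel):
    """Hypothetical service contract for an inventory API payload."""
    sku: str
    store_id: int
    on_hand: int = Field(ge=0)      # stock on hand cannot be negative
    unit_cost: float = Field(gt=0)  # cost must be strictly positive

# A valid payload parses into a typed object.
rec = InventoryRecord(sku="A-100", store_id=42, on_hand=10, unit_cost=2.5)

# An invalid payload raises ValidationError with field-level detail.
invalid_caught = False
try:
    InventoryRecord(sku="A-100", store_id=42, on_hand=-1, unit_cost=2.5)
except ValidationError:
    invalid_caught = True
```

Contracts like this enforce an API schema at the boundary, so malformed data fails fast with a clear error rather than propagating into downstream pipelines.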






