

Data Scientist
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Scientist with a contract length of "unknown" and a pay rate of "unknown." It requires hands-on experience in retail, proficiency in Python, SQL, and PySpark, and familiarity with CI/CD principles and GitHub Enterprise.
Country
United States
Currency
$ USD
-
Day rate
376
-
Date discovered
May 24, 2025
Project duration
Unknown
-
Location type
Remote
-
Contract type
Unknown
-
Security clearance
Unknown
-
Location detailed
Illinois, United States
-
Skills detailed
#PySpark #Databricks #TensorFlow #Deployment #Python #Scala #Spark (Apache Spark) #GCP (Google Cloud Platform) #Azure #ML (Machine Learning) #NumPy #AWS (Amazon Web Services) #Cloud #Libraries #Version Control #Data Science #GitHub #pydantic #Agile #API (Application Programming Interface) #Documentation #PyTorch #SQL (Structured Query Language) #Programming #SciPy
Role description
Our client, an American retailer that owns and operates four well-known brands, is looking for a Data Scientist to join their team. This role is fully remote.
In this role, you will work on high-impact projects in domains such as retail, inventory management, and operations research, collaborating with data scientists, engineers, and business stakeholders.
Required Skills & Experience
β’ Hands-on experience with enterprise data science solutions, preferably in retail, inventory management, or operations research.
β’ Proficiency in Python, SQL, and PySpark.
β’ Experience with Databricks or similar enterprise cloud environments.
β’ Strong understanding of CI/CD principles and version control through GitHub Enterprise.
β’ Experience defining and validating service contracts using Pydantic for data validation and API schema enforcement.
β’ Experience with production-level coding and deployment practices.
• Proficient in data science libraries and ML pipelines such as NumPy, SciPy, scikit-learn, MLlib, PyTorch, and TensorFlow.
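The posting does not specify the client's actual schemas, but the Pydantic requirement above refers to defining service contracts in code so that payloads are validated before they reach a model. A minimal sketch, with a hypothetical inventory-forecast request as the example contract:

```python
from pydantic import BaseModel, Field, ValidationError

class InventoryRequest(BaseModel):
    """Hypothetical service contract for an inventory-forecast API."""
    sku: str = Field(min_length=1)
    store_id: int = Field(gt=0)
    horizon_days: int = Field(default=14, ge=1, le=90)

# A valid payload parses into a typed object, with defaults applied.
req = InventoryRequest(sku="ABC-123", store_id=42)
print(req.horizon_days)  # → 14

# A malformed payload fails fast with per-field errors instead of
# reaching the model with bad data.
try:
    InventoryRequest(sku="", store_id=-1)
except ValidationError as exc:
    print(f"rejected with {len(exc.errors())} validation errors")
```

The same model class can double as the API schema (e.g., in a FastAPI endpoint), which is what "schema enforcement" typically means in this context.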
Desired Skills & Experience
β’ Experience in Agile development environments.
β’ Exposure to cloud platforms (e.g., Azure, AWS, GCP).
• Familiarity with APIs, including how RESTful services are used in data science workflows.
What You Will Be Doing
β’ Develop, test, and deploy data science solutions using Python, SQL, and PySpark on enterprise platforms such as Databricks.
β’ Collaborate with data scientists to translate models into production-ready code.
β’ Implement CI/CD pipelines and manage code repositories using GitHub Enterprise.
β’ Design and optimize mathematical programming and machine learning models for real-world applications.
β’ Work independently to break down complex problems into actionable development tasks.
β’ Ensure code quality, scalability, and maintainability in a production environment.
β’ Contribute to sprint planning, documentation, and cross-functional collaboration.
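One of the responsibilities above is translating models into production-ready code. A common pattern for this, sketched here with scikit-learn (one of the libraries the posting lists) and entirely synthetic data, is bundling preprocessing and the estimator into a single Pipeline so that training and serving share one code path:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Toy data standing in for, e.g., inventory-demand features.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.1, size=100)

# A Pipeline keeps scaling and regression as one deployable unit:
# whatever transformations were fit at training time are replayed
# identically at prediction time.
model = Pipeline([
    ("scale", StandardScaler()),
    ("reg", LinearRegression()),
])
model.fit(X, y)
score = model.score(X, y)  # R^2 on the training data
```

The same object can then be serialized and served behind an API, which is the handoff the "production-ready code" bullet describes.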