InfoStride

Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Scientist with a contract length of "unknown," offering a pay rate of "unknown." It requires proficiency in Python, R, and SQL, plus experience building user-facing applications. Candidates need a relevant Bachelor's degree with 2 years of experience, a Master's degree in a related field, or 4+ years of relevant experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
April 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Indiana, United States
-
🧠 - Skills detailed
#Neural Networks #Docker #Geospatial Analysis #Kubernetes #Mathematics #Data Pipeline #"ETL (Extract, Transform, Load)" #Data Engineering #Neo4J #Flask #Scala #Version Control #Data Quality #R #Regression #Data Manipulation #GitHub #Computer Science #Database Design #Programming #Statistics #BI (Business Intelligence) #ML (Machine Learning) #Datasets #Visualization #Data Science #Streamlit #Clustering #SQL (Structured Query Language) #Python
Role description
Job ID: 798995
Job Title: Data Scientist
Location: Indiana
Work Arrangement: Remote
Interview: Web Cam

Job Description: The Data Scientist plays a key role in delivering in-depth analyses by leveraging advanced data science techniques, methodologies, and interpretation. This role transforms complex data into accurate, meaningful insights that empower the Indiana Department of Health (IDOH) and its partners to make informed decisions that support the health, safety, and well-being of Indiana residents.

Key Responsibilities:
• Provide mentorship and technical guidance to junior data scientists and cross-functional staff
• Partner with business stakeholders to understand analytical challenges and translate them into scalable data solutions
• Design, develop, and support internal web applications and interactive tools to operationalize data science products
• Analyze and assess data quality, structure, and integrity across multiple data sources
• Collaborate with data engineers and BI teams to optimize data models, ETL pipelines, and reporting systems
• Develop rapid prototypes and minimum viable products (MVPs) to meet business needs
• Identify opportunities to improve processes and efficiency using industry best practices
• Mine and analyze large datasets to uncover insights and improve operational outcomes
• Apply a range of analytical techniques, from basic aggregation to advanced statistical modeling
• Build and maintain code repositories using version control tools (e.g., GitHub)
• Educate end users on interpreting data insights and analytical outputs
• Test, evaluate, and maintain data solutions, including support for system upgrades
• Document technical specifications and ensure adherence to standards and best practices
• Communicate findings effectively to both technical and non-technical audiences

Minimum Qualifications: Candidates must meet one of the following:
• Bachelor's degree in Analytics, Statistics, Computer Science, Informatics, Mathematics, or a related field, plus 2 years of experience, OR
• Master's degree in a related field, OR
• 4+ years of relevant experience in data science or analytics

Required Skills & Experience:
• Proficiency in programming languages such as Python, R, and SQL
• 2+ years of experience building user-facing applications using frameworks such as Shiny, Dash, Flask, or Streamlit
• Strong background in data manipulation (cleansing, transformation, standardization)
• Solid understanding of statistical methods (regression, distributions, hypothesis testing, etc.)
• Experience with machine learning techniques (e.g., clustering, decision trees, neural networks)
• Knowledge of relational and dimensional database design
• Experience integrating with backend data pipelines and deploying analytical applications
• Strong analytical, problem-solving, and strategic thinking skills
• Excellent written, verbal, and presentation skills
• Ability to work independently and collaboratively in a fast-paced environment

Preferred Qualifications:
• Experience leading workshops or training sessions for end users
• Experience in data visualization and communicating insights to diverse audiences
• Familiarity with tools and technologies such as:
  o Geospatial analysis and geocoding
  o Network analysis (e.g., Neo4j)
  o Containerization and orchestration (Docker, Kubernetes)