Job Spark

Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a remote Data Scientist contract running more than 6 months, with flexible hours. Key requirements include 3–5+ years in data science, strong Python skills, and experience with end-to-end ML modeling and statistics.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 6, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Snowflake #Storytelling #BigQuery #Python #Pandas #ML (Machine Learning) #Documentation #Statistics #SQL (Structured Query Language) #AI (Artificial Intelligence) #Datasets #NumPy #Big Data #Spark (Apache Spark) #Data Science #NLP (Natural Language Processing)
Role description
Role: Data Scientist
Location: Remote

What You’ll Do:
• Analyze large datasets to uncover trends and guide modeling direction
• Build predictive models and ML pipelines across tabular, time-series, NLP, or multimodal data
• Design validation strategies, experiments, and rigorous analytical workflows
• Develop automated feature pipelines and reproducible research environments
• Perform EDA, hypothesis testing, and model investigations
• Translate findings into actionable recommendations for cross-functional teams
• Partner with ML engineers to productionize models and scale data workflows
• Deliver insights through clear reports and documentation

What Makes You a Great Fit:
• Kaggle Competitions Grandmaster or equivalent elite competition performance
• 3–5+ years in data science or applied analytics
• Strong Python expertise (Pandas, NumPy, Polars, scikit-learn, etc.)
• Proven experience with end-to-end ML modeling
• Solid grounding in statistics, experiment design, and causal analysis
• Familiarity with SQL, dashboards, distributed datasets, and experiment tracking
• Excellent communication and storytelling with data

Bonus Skills:
• Contributions across Kaggle tracks
• Experience in AI labs, fintech, or ML-focused teams
• Knowledge of LLMs, multimodal models, and embeddings
• Experience with big data tools (Spark, Ray, Snowflake, BigQuery)
• Familiarity with Bayesian or probabilistic modeling

Why This Role Stands Out:
• Work on frontier AI research problems
• Apply competition-level modeling skills to high-impact real-world challenges
• Collaborate with world-class research and ML engineering teams
• Flexible 30–40 hrs/week (or full-time)
• Fully remote, async-friendly environment

Apply Now!