Swoon

Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Scientist position for a 6-month contract, offering a pay rate of "X" per hour. Key skills include Azure, Databricks, machine learning, and strong SQL. A Master's degree and 5-7 years of relevant experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
January 31, 2026
🕒 - Duration
6 months
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Datasets #ML (Machine Learning) #Data Ingestion #Forecasting #Computer Science #Data Quality #BI (Business Intelligence) #Data Engineering #Data Analysis #Tableau #Scala #Big Data #Statistics #Model Evaluation #Databricks #Data Modeling #Python #Programming #Azure Databricks #Delta Lake #Deployment #Monitoring #R #Azure #Data Processing #Hadoop #Mathematics #Data Science #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Anomaly Detection #Visualization #Regression
Role description
1. Data Science & Modeling Fundamentals
• Solid understanding of:
  • Statistics and probability
  • Feature engineering techniques
  • Model evaluation metrics (e.g., AUC, precision/recall, RMSE)
• Strong analytical thinking and problem-solving skills

2. Azure & Databricks Experience
• Hands-on experience using Azure for data science workloads
• Strong familiarity with Azure Databricks for:
  • Data processing
  • Model development
  • Production pipelines

3. Machine Learning & Advanced Modeling
• Ability to design, train, and validate machine learning models, including:
  • Regression models (linear, regularized)
  • Tree-based models (Random Forest, XGBoost, LightGBM)
  • Time-series models (ARIMA, Prophet, or ML-based forecasting approaches)
• Experience with:
  • Hyperparameter tuning
  • Cross-validation techniques
  • Model explainability (e.g., SHAP, feature importance)

4. Data Engineering & Pipeline Development
• Ability to build and maintain scalable ETL/ELT pipelines in Databricks
• Experience with:
  • Incremental data processing using Delta Lake
  • Writing strong, performant SQL for analytical queries
  • Data validation, reconciliation, and quality checks
• Proven ability to work with large-scale datasets (millions to billions of records)
• Experience implementing data quality monitoring and anomaly detection

5. Behavioral & Ownership Expectations
• Self-directed and comfortable working in ambiguous problem spaces
• Strong ownership mindset across the full lifecycle: data ingestion, modeling, deployment, and monitoring
• Experience supporting and maintaining models in production environments
• Willingness to improve, refactor, and modernize legacy pipelines and models

Summary:
The main function of the data scientist is to produce innovative solutions driven by exploratory data analysis of complex, high-dimensional datasets.

Job Responsibilities:
• Apply knowledge of statistics, machine learning, programming, data modeling, simulation, and advanced mathematics to recognize patterns, identify opportunities, pose business questions, and make valuable discoveries that lead to prototype development and product improvement.
• Use a flexible, analytical approach to design, develop, and evaluate predictive models and advanced algorithms that extract optimal value from the data.
• Generate and test hypotheses; analyze and interpret the results of product experiments.
• Work with product engineers to translate prototypes into new products, services, and features, and provide guidelines for large-scale implementation.
• Provide Business Intelligence (BI) and data visualization support, including, but not limited to, support for the online customer service dashboards and other ad-hoc requests requiring data analysis and visual support.

Skills:
• Experience with programming languages such as Python and/or R, big data tools such as Hadoop, or data visualization tools such as Tableau.
• Ability to communicate effectively in writing, including conveying complex information and promoting in-depth engagement.
• Experience working with large datasets.

Education/Experience:
• Master of Science degree in computer science or a relevant field.
• 5-7 years of relevant experience required.