Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Scientist position based in Bellevue, WA, on a contract basis. Requires 7+ years in data science, strong Python/R skills, and expertise in ML frameworks. Familiarity with cloud platforms and big data tools is preferred.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
560
-
πŸ—“οΈ - Date discovered
September 24, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Bellevue, WA
-
🧠 - Skills detailed
#Programming #Visualization #AI (Artificial Intelligence) #NLP (Natural Language Processing) #R #DevOps #Cloud #TensorFlow #Deep Learning #Big Data #Python #ML (Machine Learning) #Compliance #PyTorch #SQL (Structured Query Language) #Data Engineering #Predictive Modeling #Azure #Data Quality #Data Science #Data Governance #Spark (Apache Spark) #Classification #Data Modeling #Hadoop #Datasets #Databases #Scala #NoSQL #AWS (Amazon Web Services) #GCP (Google Cloud Platform)
Role description
Role: Data Scientist
Location: Bellevue, WA (Onsite)
Position Type: Contract

JOB DESCRIPTION

Requirements:
• 7+ years in data science, analytics, or related fields.
• Proven track record of delivering data-driven business solutions and deploying ML models in production.
• Experience leading projects or mentoring junior team members is preferred.
• Strong proficiency in Python, R, or similar programming languages.
• Deep knowledge of machine learning frameworks (Scikit-learn, TensorFlow, PyTorch, XGBoost).
• Expertise in statistical analysis, data visualization, and predictive modeling.
• Experience with SQL, NoSQL databases, and data engineering pipelines.
• Familiarity with cloud platforms (AWS, Azure, GCP) and big data tools (Spark, Hadoop) is a plus.
• Strong analytical thinking, problem-solving skills, and business acumen.
• Excellent communication skills to present technical insights to stakeholders.

Responsibilities:
• Analyze complex datasets to identify trends, patterns, and business opportunities.
• Build, validate, and deploy machine learning models for predictive, classification, or recommendation purposes.
• Perform feature engineering and data preprocessing to improve model performance.
• Stay up-to-date with emerging techniques in machine learning, deep learning, NLP, and AI.
• Apply advanced statistical and AI methods to solve business challenges.
• Experiment with new algorithms, frameworks, and tools for improved accuracy and scalability.
• Collaborate with Data Engineers and DevOps teams to deploy models into production.
• Monitor model performance, retrain models as needed, and ensure data quality.
• Document model assumptions, limitations, and decision-making processes.
• Work closely with product managers, business stakeholders, and engineering teams to align data science solutions with business needs.
• Mentor junior data scientists and provide technical guidance on modeling, coding, and best practices.
• Present insights and findings effectively to technical and non-technical stakeholders.
• Ensure adherence to data governance, privacy, and compliance policies.
• Address ethical considerations in AI and data modeling.

Best Regards,
Bismillah Arzoo (AB)