Eames Consulting

Lead Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead Data Scientist on a 6-month contract-to-hire in Seattle, WA, offering $80-90/hr. It requires expertise in machine learning, AI development, and full-stack engineering, along with strong Python skills. Preferred: Ph.D. or advanced degree.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
720
🗓️ - Date
February 12, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Seattle, WA
🧠 - Skills detailed
#Python #Transformers #SQLAlchemy #FastAPI #R #SciPy #SQL (Structured Query Language) #DevOps #Databases #Linux #JavaScript #ETL (Extract, Transform, Load) #Libraries #Clustering #NumPy #Cloud #NLP (Natural Language Processing) #ML (Machine Learning) #Pandas #Automation #Storytelling #Data Modeling #Data Engineering #Data Storytelling #Computer Science #Deep Learning #PyTorch #Docker #Storage #Streamlit #GIT #Leadership #GitHub #TensorFlow #Data Ingestion #Deployment #Web Scraping #AI (Artificial Intelligence) #Scala #Visualization #Monitoring #Data Science
Role description
Lead Data Scientist | On-site in Seattle, WA | 6-month contract-to-hire | $80-90/hr

We are seeking a highly skilled Lead Data Scientist with deep expertise in machine learning, AI application development, data engineering workflows, and full-stack implementation. The ideal candidate combines strong analytical capabilities with hands-on engineering proficiency, enabling end-to-end delivery of data-powered products. This role involves designing scalable ML systems, enabling automation across the enterprise, and translating complex data into actionable business insights.

Key Responsibilities:

Machine Learning & AI Development
• Design, train, test, and deploy advanced machine learning models, including deep learning, transformers, clustering, and statistical models.
• Build AI agents, document understanding systems, and entity extraction pipelines (e.g., Auto-RAG, custom tokenizers, clustering algorithms).
• Develop intelligent automation tools that replace or augment manual workflows, such as document processors, chat-based SQL interfaces, and computer vision defect-detection models.

Full-Stack AI & Application Engineering
• Build full-stack AI applications using modern frameworks (FastAPI, Streamlit, Next.js, containerized back-ends).
• Architect AI/ML applications following enterprise methodologies such as TOGAF for alignment with business goals.
• Implement CI/CD, MLOps practices, containerized deployments (Docker, Linux servers), APIs, and scalable data services.

Data Engineering & Pipeline Design
• Create robust data ingestion, transformation, and storage systems leveraging SQL, pyODBC, SQLAlchemy, and cloud services.
• Automate data collection with custom web scrapers, bots, and web-driven automation tools (e.g., Selenium).
• Optimize data preparation, validation, monitoring, and ongoing pipeline performance.

Business Impact & Analytics
• Conduct data storytelling and visualization to clearly communicate ROI and operational insights to stakeholders.
• Support enterprise decision-making by building predictive and prescriptive analytics models.
• Develop optimization tools (e.g., carrier cost optimizers, resource planning models) to reduce operational overhead and improve performance.

Technical Leadership & Collaboration
• Mentor data scientists, engineers, and cross-functional teams in machine learning best practices, Python development, and automation techniques.
• Collaborate with product, engineering, and business teams to define requirements and deliver measurable outcomes.
• Lead architectural discussions on AI applications, data strategies, and ML system design.

Required Skills & Experience
• Expert-level Python development with scientific libraries (NumPy, SciPy, Pandas, PyTorch, TensorFlow).
• Strong background in machine learning, deep learning, natural language processing, and optimization methods.
• Experience building full-stack AI applications (FastAPI, Streamlit, Next.js, JavaScript).
• Robust understanding of SQL databases, data modeling, and cloud-based deployments.
• Proficiency in containerization, DevOps, MLOps, CI/CD, Git/GitHub, and Linux environments.
• Hands-on experience with automated data collection (web scraping, bots, imaging analysis).
• Strong multi-disciplinary engineering background spanning control systems, imaging, simulations, and multi-physics modeling.
• Proven ability to translate complex technical workflows into business-aligned solutions.

Preferred Qualifications
• Ph.D. or advanced degree in engineering, data science, computer science, or a related field.
• Experience in research environments, R&D product development, and systems modeling.
• History of community engagement, technical mentoring, or keynote speaking.
• Patent or publication experience is a plus.