Brooksource

Senior Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Scientist specializing in experimentation, offering a long-term W2 contract in Greenwood Village, CO, with a pay rate of $70–80/hour. Key skills include Python and PySpark, along with experience in statistical methodologies and data pipeline development.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
640
🗓️ - Date
November 7, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Greenwood Village, CO
🧠 - Skills detailed
#AI (Artificial Intelligence) #Spark (Apache Spark) #Databricks #GitLab #Statistics #Leadership #Data Processing #Data Engineering #Python #Regression #Automation #Programming #Scala #A/B Testing #Deployment #Data Pipeline #PySpark #Data Science
Role description
Senior Data Scientist – Experimentation
Location: Greenwood Village, CO (Hybrid – 4 days onsite, 1 day remote)
Bill Rate: $70–80/hour (W2 long-term contract)

Overview
Our client is seeking a Senior Data Scientist with strong engineering capabilities to join the experimentation team, a specialized group responsible for building and scaling statistical experimentation modules within the company's proprietary experimentation platform. This role is ideal for an engineering-minded data scientist comfortable blending experimentation, automation, and data pipeline development at scale.

Key Responsibilities
• Develop and maintain modules within the experimentation platform to support automated statistical analysis (A/B testing, t-tests, regression, and causal inference); see the sketch at the end of this description.
• Translate advanced statistical methodologies into scalable, production-ready code that supports automated experiment workflows.
• Build and optimize large-scale data pipelines using Python and PySpark to enable real-time experimentation and reporting.
• Manage GitLab integration and Databricks deployment workflows; ensure robust, version-controlled experimentation systems.
• Collaborate with product managers, analysts, and data engineers to deliver automated experimentation features and support operational scalability.
• Design and enhance systems that support hundreds of concurrent experiments across 48+ digital platforms.
• Contribute to platform operations and occasional deployment activities, ensuring system reliability and accuracy of experimental results.

Technical Environment
• Python (core programming language; strong coding skills required – a live coding test is included).
• Spark / PySpark (preferred for data processing and experimentation pipeline work).
• GitLab and Databricks (for deployment, CI/CD, and integration workflows).
• Familiarity with Scala is helpful but not required (it is used by the broader data engineering team).
• Experience with internal or third-party experimentation tools (e.g., APT, Eppo, Optimizely, or custom frameworks) is a plus.

Day-to-Day Responsibilities
• Blend engineering and applied data science work (approximately 75% engineering, 25% experimentation/statistics).
• Collaborate closely with advanced statisticians (many with master's or PhD degrees) to translate statistical theory into production-grade tools.
• Work cross-functionally with PMs, data engineers, and analysts to automate experiment setup, data capture, and analysis reporting.
• Own backend development and data pipeline work supporting the experimentation platform.
• Participate in occasional stakeholder meetings (~10–15% of time) to align on experiment goals and outcomes.
• Assist in platform operations, including potential Friday or weekend deployment work as needed.

Ideal Candidate Profile
The ideal candidate has 5–6+ years of experience in data engineering or data science roles, including at least 1–2 years applying statistical concepts in production. They bring a strong foundation in Python and PySpark, familiarity with GitLab deployment workflows, and a solid understanding of experimental design and causal inference. This person should be self-sufficient, technically strong, and capable of managing both the engineering and statistical aspects of the experimentation process. Experience working on experimentation or analytics automation platforms is highly valued.

Soft Skills & Collaboration
• Comfortable working in a hybrid environment with high autonomy and minimal supervision.
• Able to communicate technical concepts clearly to both data engineers and non-technical stakeholders.
• Strong analytical thinker with attention to detail and an eye for platform scalability.
• Collaborative mindset; enjoys cross-functional problem-solving within data-driven product teams.

Team Environment
The team is composed primarily of data scientists with advanced statistical backgrounds, collaborating closely with data engineering, analytics, and product management. The team leads experimentation initiatives supporting the company's broader AI and knowledge management programs.

Schedule & Work Environment
Hybrid schedule with 4 days onsite and 1 day remote (Denver, CO). Standard working hours are 8 a.m. – 5 p.m. EST, with flexibility for early arrivals to accommodate commuting preferences.

Interview Process
• 1st Round – Live Python coding interview (1 hour with technical stakeholders).
• 2nd Round – Discussion with leadership and PMs.
• 3rd Round – Technical panel with cross-functional team members (Spark, GitLab, and experimentation discussion).

Selling Points
• High-visibility opportunity impacting experimentation across all of the company's digital platforms.
• Unique blend of engineering and applied statistics, rare in traditional data science roles.
• Autonomy to build and deploy systems that drive experimentation and decision-making at scale.
• Collaborative, high-caliber team with advanced statistical expertise.
• Long-term, stable contract role within a growing AI and knowledge management program.
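For context on the work described above, here is a minimal sketch of the kind of automated analysis module this role builds: per-variant metrics are aggregated with PySpark, then a Welch t-test is run on the summaries. The schema, names, and sample data are hypothetical illustrations, not taken from the client's platform.

```python
# Hypothetical sketch of an automated A/B analysis step: aggregate per-variant
# summary statistics in Spark, then run a Welch t-test on those summaries.
# All names, the schema, and the data below are illustrative only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from scipy import stats

spark = SparkSession.builder.appName("ab-test-sketch").getOrCreate()

# Assumed input: one row per user with the assigned variant and a metric value.
events = spark.createDataFrame(
    [("u1", "control", 4.0), ("u2", "control", 5.5),
     ("u3", "treatment", 6.1), ("u4", "treatment", 7.2)],
    ["user_id", "variant", "metric"],
)

# Aggregate in Spark so only compact summaries leave the cluster.
rows = (
    events.groupBy("variant")
    .agg(F.count("metric").alias("n"),
         F.mean("metric").alias("mean"),
         F.variance("metric").alias("var"))
    .collect()
)
s = {row["variant"]: row for row in rows}

# Welch's t-test (equal_var=False) computed from the summary statistics.
t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=s["treatment"]["mean"], std1=s["treatment"]["var"] ** 0.5,
    nobs1=s["treatment"]["n"],
    mean2=s["control"]["mean"], std2=s["control"]["var"] ** 0.5,
    nobs2=s["control"]["n"],
    equal_var=False,
)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A production module would read assignments and metrics from managed tables and write results back for reporting; the sketch only shows the shape of the work: heavy aggregation in Spark, statistical testing on compact summaries.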