

Data Scientist Contractor
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Scientist Contractor with a contract length of "unknown," offering a pay rate of "$X per hour." Key skills include advanced Databricks expertise, data engineering experience, and proficiency in cloud platforms. A Bachelor's or Master's in a related field is required.
Country: United States
Currency: $ USD
Day rate: $392
Date discovered: August 28, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Pleasanton, CA
Skills detailed: #Microsoft Power BI #ADLS (Azure Data Lake Storage) #Terraform #Scala #Apache Spark #Delta Lake #Compliance #Version Control #Data Governance #Data Quality #Visualization #Airflow #Azure #Tableau #Data Lake #ML (Machine Learning) #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Spark (Apache Spark) #CLI (Command-Line Interface) #S3 (Amazon Simple Storage Service) #Databricks #Data Accuracy #Big Data #Cloud #Data Engineering #GCP (Google Cloud Platform) #Data Science #AWS (Amazon Web Services) #Computer Science #Git #BI (Business Intelligence) #BigQuery #Data Pipeline #Data Processing #MLflow #Security #Python #Datasets
Role description
Description
• As a Data Scientist Contractor, you will analyze complex data sets to extract meaningful insights and support data-driven decision-making.
• Collect, process, and analyze large datasets to identify trends, patterns, and insights.
• Develop and implement machine learning models and algorithms to solve business problems.
• Create data visualizations and dashboards to communicate findings to stakeholders.
• Collaborate with project teams to understand data requirements and deliver relevant analytical solutions.
• Ensure data accuracy and integrity by performing data validation and quality checks (a minimal check is sketched after this list).
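To ground the validation bullet above, here is a minimal sketch of a batch data-quality gate in PySpark, the stack this posting names. The table name (orders), its columns, and the specific checks are hypothetical placeholders; a real pipeline would tailor all three to the dataset.

```python
# Minimal data-quality gate sketch (PySpark). The table "orders" and
# columns "order_id"/"amount" are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.table("orders")  # assumed: a table registered in the metastore

total = df.count()
checks = {
    "null_order_id": df.filter(F.col("order_id").isNull()).count(),
    "negative_amount": df.filter(F.col("amount") < 0).count(),
    "duplicate_order_id": total - df.dropDuplicates(["order_id"]).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # Fail fast so a bad batch never reaches downstream consumers.
    raise ValueError(f"Data-quality checks failed: {failed}")
print(f"All checks passed on {total} rows.")
```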
Primary Skill Required for the Role:
• Databricks Architect
Level Required for Primary Skill:
• Advanced (6-9 years of experience)
Job Overview:
We are seeking a skilled Data Engineer with hands-on Databricks experience to design, build, and optimize large-scale data pipelines and analytics solutions. You will work with cross-functional teams to enable scalable data processing using the Databricks Lakehouse Platform on Azure.
Key Responsibilities:
• Design and implement ETL/ELT pipelines using Databricks, Delta Lake, and Apache Spark (a pipeline sketch follows this list)
• Collaborate with data scientists, analysts, and stakeholders to deliver clean, reliable, and well-modeled data
• Build and manage data workflows with Databricks Jobs, Notebooks, and Workflows
• Optimize Spark jobs for performance, reliability, and cost-efficiency
• Maintain and monitor data pipelines, ensuring availability and data quality
• Implement CI/CD practices for Databricks notebooks and infrastructure-as-code (e.g., Terraform, Databricks CLI)
• Document data pipelines, datasets, and operational processes
• Ensure compliance with data governance, privacy, and security policies
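As a reference point for the ETL/ELT responsibility above, here is a minimal sketch of a batch pipeline writing to Delta Lake with PySpark. The source path, column names, and target table are hypothetical; a production Databricks job would add schema enforcement, incremental loading, and monitoring on top of this shape.

```python
# Minimal ETL sketch: raw JSON -> cleaned Delta table (PySpark).
# Path "/mnt/raw/events" and table "analytics.events_clean" are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("/mnt/raw/events")  # Extract: assumed landing zone

clean = (
    raw.filter(F.col("event_id").isNotNull())         # drop malformed rows
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("ingest_date", F.current_date())   # partition column
       .dropDuplicates(["event_id"])                  # idempotent re-runs
)

(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("ingest_date")
      .saveAsTable("analytics.events_clean"))         # Load: governed Delta table
```

On Databricks this would typically run as a scheduled Job or Workflow task, with Delta's transaction log providing the reliability and auditability the posting asks for.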
Qualifications:
• Bachelor's or Master's in Computer Science, Data Engineering, or a related field
• 5+ years of experience in data engineering or a similar role
• Strong hands-on experience with Databricks and Apache Spark (Python, Scala, or SQL)
• Proficiency with Delta Lake, Unity Catalog, and data lake architectures
• Experience with cloud platforms (Azure, AWS, or GCP), especially data services (e.g., S3, ADLS, BigQuery)
• Familiarity with CI/CD pipelines, version control (Git), and job orchestration tools (Airflow, Databricks Workflows)
• Strong understanding of data warehousing concepts, performance tuning, and big data processing (one common tuning pattern is sketched after this list)
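One concrete instance of the performance tuning mentioned above is broadcasting a small dimension table to avoid a shuffle-heavy join. This is a generic Spark pattern rather than anything prescribed by the posting, and the table names are hypothetical.

```python
# Broadcast-join sketch: avoids shuffling the large fact table (PySpark).
# Table names ("sales_fact", "store_dim", "sales_enriched") are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

fact = spark.read.table("sales_fact")  # large table, stays partitioned in place
dim = spark.read.table("store_dim")    # small table, shipped to every executor

# The broadcast hint replaces a sort-merge join (two full shuffles) with a
# map-side hash join, which is usually faster and cheaper for small dims.
joined = fact.join(broadcast(dim), on="store_id", how="left")
joined.write.format("delta").mode("overwrite").saveAsTable("sales_enriched")
```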
Preferred Skills:
• Experience with MLflow, Feature Store, or other machine learning tools in Databricks (a minimal MLflow example follows this list)
• Knowledge of data governance tools like Unity Catalog or Purview
• Experience integrating BI tools (Power BI, Tableau) with Databricks
• Databricks certification(s) (Data Engineer Associate/Professional, Machine Learning, etc.)
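For the MLflow item above, a minimal tracking example: logging parameters, a metric, and a model from a training run. The toy dataset and model choice are illustrative only; on Databricks the tracking server is managed for you, while elsewhere you would point MLflow at your own.

```python
# Minimal MLflow tracking sketch. Dataset and model are illustrative only.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)  # hyperparameters, for run comparison
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```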