

SJ Mobilita
Lead Data Engineer - Full Stack
Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer - Full Stack, offering a contract length of "X months" at a pay rate of "$X/hour". Key skills include Python, Databricks, and automation. Requires 8+ years in Data Engineering and experience in data-driven environments.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
April 26, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Santa Clara, CA
-
Skills detailed
#ML (Machine Learning) #Version Control #Data Engineering #Python #Data Management #Streamlit #Data Pipeline #Automation #Metadata #ETL (Extract, Transform, Load) #BI (Business Intelligence) #Monitoring #Deployment #Visualization #Terraform #Scala #Data Lifecycle #CLI (Command-Line Interface) #AI (Artificial Intelligence) #Data Governance #AWS (Amazon Web Services) #Data Science #Cloud #Integration Testing #React #Scripting #Databricks
Role description
Role Summary
We are seeking a highly skilled Senior Data Engineer – Full Stack to build and maintain internal tools, automation frameworks, and workflows that enhance the efficiency, reliability, and scalability of our data and machine learning platforms. This role will work closely with Data Engineers, Data Scientists, and ML Engineers to streamline operations across the data lifecycle.
Key Responsibilities
• Design and develop CLI tools, scripts, and internal utilities to automate repetitive tasks across the data platform, including:
• Pipeline execution and orchestration
• Data governance workflows
• Metadata synchronization
• Environment setup and configuration
• Test harness development
• Automate workflows on Databricks, including:
• Job deployment and scheduling
• Environment provisioning
• MLOps processes using APIs, Terraform, or Databricks SDK
• Build and implement robust testing frameworks:
• Integration testing for pipelines
• End-to-end validation of ETL/ELT workflows
• Testing and validation for ML inference workflows
• Improve overall productivity, scalability, and reliability of the data and ML engineering ecosystem
• Develop lightweight internal tools and dashboards using frameworks such as React, Streamlit, or similar technologies to:
• Visualize data pipelines and workflows
• Demonstrate model inference capabilities
• Provide configuration and operational controls
• Enable internal productivity monitoring and dashboards
• Collaborate with cross-functional teams to identify automation opportunities and implement best practices
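As a rough illustration of the CLI responsibility above, a minimal internal tool can be sketched with Python's standard `argparse`; the `dataplat` program name, subcommands, and pipeline names here are hypothetical examples, not this team's actual tooling.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical "dataplat" CLI: one subcommand per repetitive task.
    parser = argparse.ArgumentParser(prog="dataplat")
    sub = parser.add_subparsers(dest="command", required=True)

    run = sub.add_parser("run-pipeline", help="Execute a named pipeline")
    run.add_argument("name")
    run.add_argument("--env", choices=["dev", "staging", "prod"], default="dev")

    sync = sub.add_parser("sync-metadata", help="Synchronize metadata catalogs")
    sync.add_argument("--dry-run", action="store_true")
    return parser

# Example invocation (argument values are placeholders):
args = build_parser().parse_args(["run-pipeline", "sales_daily", "--env", "staging"])
```

Subcommand-per-task keeps each automation discoverable via `--help` and easy to wire into CI.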
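The testing-framework bullets can likewise be sketched as a plain pytest-style end-to-end check of one transform step; the `transform` function and its rows are invented purely for illustration.

```python
def transform(rows):
    # Toy ETL transform: drop rows with missing amounts, cast the rest to float.
    return [
        {"id": r["id"], "amount": float(r["amount"])}
        for r in rows
        if r.get("amount") is not None
    ]

def test_transform_drops_invalid_and_casts():
    # End-to-end style assertion: raw input in, validated output out.
    raw = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": None}]
    assert transform(raw) == [{"id": 1, "amount": 10.5}]

test_transform_drops_invalid_and_casts()
```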
Required Skills & Qualifications
• Strong experience in Python and scripting for automation and backend development
• Hands-on experience with Databricks platform and ecosystem
• Experience with APIs, Terraform, and/or Databricks SDK for automation
• Solid understanding of ETL/ELT pipelines and data platform architecture
• Experience building testing frameworks for data pipelines and ML workflows
• Familiarity with CLI tool development and system automation
• Knowledge of MLOps principles and practices
• Experience with modern development practices, including:
• Spec-driven development
• Use of coding agents or automation-assisted development tools
• Version control and CI/CD pipelines
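For the Databricks automation requirement, job deployment typically goes through the Jobs API (or the equivalent Databricks SDK calls). A minimal sketch that only builds a request payload for `POST /api/2.1/jobs/create`; the job name, notebook path, and cron expression are placeholders, and actually sending the request would additionally need a workspace host and token.

```python
import json
from typing import Optional

def job_create_payload(name: str, notebook_path: str, cron: Optional[str] = None) -> dict:
    # Request body for the Databricks Jobs API 2.1 create endpoint
    # (POST /api/2.1/jobs/create).
    payload = {
        "name": name,
        "tasks": [
            {
                "task_key": "main",
                "notebook_task": {"notebook_path": notebook_path},
            }
        ],
    }
    if cron is not None:
        # Optional schedule block: Quartz cron syntax plus a timezone.
        payload["schedule"] = {
            "quartz_cron_expression": cron,
            "timezone_id": "UTC",
        }
    return payload

body = json.dumps(job_create_payload("nightly_etl", "/Repos/team/etl/main", "0 0 2 * * ?"))
```

Keeping payload construction in a pure function like this makes it easy to unit-test and to reuse from either raw HTTP calls or Terraform-generated configs.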
Nice to Have
• Experience building dashboards or internal tools using React, Streamlit, or similar frameworks
• Familiarity with Databricks AI/BI or other data visualization tools
• Exposure to data governance and metadata management frameworks
• Experience working with cloud platforms (AWS preferred)
Preferred Experience
• 8+ years of experience in Data Engineering, Platform Engineering, or related roles
• Experience working in data-driven or ML-focused environments
What You'll Bring
• Strong problem-solving mindset with a focus on automation and efficiency
• Ability to work in a fast-paced, collaborative environment
• Passion for building scalable internal tools and improving developer productivity






