

Daman
Databricks Engineer (Texas Residents Only)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer on a contract basis, remote (Texas residents only), requiring monthly office visits in Austin. Key skills include Databricks, Apache Spark, cloud platforms (Azure, AWS, GCP), and CI/CD. Requires 8+ years of data engineering experience.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: October 10, 2025
Duration: Unknown
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed: #Data Science #Azure #Data Warehouse #Spark (Apache Spark) #Terraform #Scala #Python #Automation #Cloud #Azure DevOps #ETL (Extract, Transform, Load) #DevOps #Databricks #GCP (Google Cloud Platform) #GitHub #Documentation #Data Quality #Data Lake #Data Lifecycle #AWS (Amazon Web Services) #Data Engineering #Data Pipeline #Data Processing #Delta Lake #Security #Apache Spark #Infrastructure as Code (IaC) #Computer Science #SQL (Structured Query Language) #Big Data #Data Modeling
Role description
Job Title: Databricks Engineer
Location: Remote (Texas Residents only)
Job Type: Contract
Note: The candidate must visit the Austin office once a month.
Job Summary:
We are seeking a skilled Databricks Engineer to join our client's data engineering team. The ideal candidate will have strong expertise in Databricks, Apache Spark, and cloud data platforms. You will be responsible for building and optimizing scalable data pipelines, implementing best practices for data processing, and enabling advanced analytics through efficient data infrastructure on the cloud.
Key Responsibilities:
• Design, develop, and maintain scalable ETL pipelines using Apache Spark on Databricks (a minimal sketch follows this list).
• Implement and manage data lakes, Delta Lake, and data warehouse environments on cloud platforms (Azure, AWS, or GCP).
• Optimize Databricks cluster configurations and performance to ensure cost efficiency and scalability.
• Collaborate with data scientists, analysts, and business stakeholders to translate data requirements into reliable technical solutions.
• Develop and maintain CI/CD pipelines for data workflows using Azure DevOps, GitHub Actions, or similar tools.
• Integrate structured and unstructured data from multiple on-prem and cloud sources.
• Apply data quality, testing, and security best practices throughout the data lifecycle.
• Contribute to architecture discussions and documentation of technical workflows and processes.
• Implement automation and infrastructure-as-code (IaC) solutions using tools like Terraform or ARM templates.
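
To make the first two responsibilities concrete, here is a minimal sketch of one Spark-on-Databricks ETL step that lands raw JSON into a Delta table. The paths, column names, and partitioning scheme are illustrative assumptions, not details from this role.

```python
# Minimal sketch of a Spark ETL step on Databricks: raw JSON in cloud
# storage -> cleaned, typed rows appended to a Delta table. All paths,
# column names, and the partitioning below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl-sketch").getOrCreate()

# Extract: read raw JSON files landed by an upstream process (assumed path).
raw = spark.read.json("/mnt/landing/raw_events/")

# Transform: deduplicate, enforce types, and apply a basic quality filter.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

# Load: append to a Delta table, partitioned by date for downstream queries.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("event_date")
      .save("/mnt/curated/events"))
```

In practice a step like this would be parameterized, scheduled as a Databricks Job, and promoted through the CI/CD and IaC tooling listed above.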
Required Skills and Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 8+ years of experience in data engineering or big data development.
• Strong hands-on experience with Databricks, Apache Spark, and Delta Lake.
• Experience with at least one major cloud platform (Azure, AWS, or GCP).
• Proficiency in Python and SQL (Scala is a plus).
• Solid understanding of data modeling, data warehousing, and distributed data processing.
• Experience with CI/CD, DevOps, and infrastructure automation tools (Terraform, ARM templates, etc.).
• Strong analytical, problem-solving, and communication skills.