

TekVivid, Inc
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Azure Databricks Engineer with 9+ years of data engineering experience, focusing on Azure services and Spark. It requires onsite work in Dallas, TX, San Jose, CA, or Pleasanton, CA, with a competitive pay rate.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: December 3, 2025
Duration: Unknown
Location: On-site
Contract: Unknown
Security: Unknown
Location detailed: Texas, United States
Skills detailed: #Data Engineering #Vault #Delta Lake #"ETL (Extract, Transform, Load)" #Azure DevOps #Azure #Azure Data Factory #Databricks #Apache Spark #Computer Science #Data Pipeline #Data Architecture #Synapse #Code Reviews #Spark (Apache Spark) #Complex Queries #ADLS (Azure Data Lake Storage) #Scala #Data Modeling #SQL (Structured Query Language) #ADF (Azure Data Factory) #GIT #PySpark #Azure cloud #Azure Databricks #Data Lake #Azure SQL #"ACID (Atomicity, Consistency, Isolation, Durability)" #Datasets #DevOps #Cloud
Role description
Job Description: Senior Azure Databricks Engineer
Location Options (Onsite Required): Dallas, TX; San Jose, CA; or Pleasanton, CA
Experience: 9+ Years
About the Role
We are seeking a highly experienced Senior Azure Databricks Engineer to support digital transformation initiatives for our client. The ideal candidate has strong expertise in Azure cloud services, Databricks, Spark, large-scale data engineering, and end-to-end pipeline development. This is an onsite role requiring candidates to work from any of the client locations listed above.
Responsibilities
• Design, build, and optimize end-to-end data pipelines using Azure Databricks and Apache Spark.
• Develop scalable ETL/ELT processes for structured and unstructured data.
• Integrate Databricks with various Azure services (ADLS, ADF, Azure SQL, Synapse, Key Vault, etc.).
• Implement Delta Lake features: time travel, ACID transactions, schema evolution, and optimization (see the sketch after this list).
• Perform advanced data transformation, cleansing, aggregation, and feature engineering.
• Optimize Spark jobs for performance, reliability, and cost efficiency.
• Analyze and troubleshoot complex data issues in production environments.
• Collaborate with data architects, analysts, and business teams to define requirements and deliver solutions.
• Follow CI/CD best practices for data engineering using Git, Azure DevOps, or similar tools.
• Document solutions, create technical specifications, and support code reviews.
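The Delta Lake bullet above references ACID writes, schema evolution, and time travel; the short PySpark sketch below illustrates those operations. It is a minimal, illustrative example only: the ADLS paths and column names are hypothetical, and it assumes a Databricks cluster (or any Spark session with the delta-spark package) pointed at an Azure Data Lake Storage account.

    # Minimal, illustrative PySpark sketch (hypothetical paths/columns; assumes Databricks or delta-spark).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders-delta-demo").getOrCreate()

    raw_path = "abfss://raw@examplelake.dfs.core.windows.net/orders"        # hypothetical ADLS source
    delta_path = "abfss://curated@examplelake.dfs.core.windows.net/orders"  # hypothetical Delta target

    # Basic cleansing/transformation before the curated write.
    orders = (
        spark.read.parquet(raw_path)
        .withColumn("order_date", F.to_date("order_ts"))
        .dropDuplicates(["order_id"])
    )

    # ACID write to a Delta table, partitioned by date for pruning.
    (orders.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save(delta_path))

    # Schema evolution: append a batch that carries a new column.
    (orders.withColumn("channel", F.lit("web"))
        .write.format("delta")
        .mode("append")
        .option("mergeSchema", "true")
        .save(delta_path))

    # Time travel: read the table as it was at an earlier version.
    orders_v0 = spark.read.format("delta").option("versionAsOf", 0).load(delta_path)
    print(orders_v0.count())

In practice on Databricks the curated table would also be registered in the metastore and maintained with OPTIMIZE and VACUUM on a schedule, in line with the optimization bullet above.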
Required Skills
• 9+ years of professional Data Engineering experience.
• Strong hands-on expertise with Azure Databricks and Apache Spark (PySpark / Scala).
• Strong experience with Azure cloud services:
  • Azure Data Lake Storage (ADLS)
  • Azure Data Factory (ADF)
  • Azure SQL / Synapse
  • Azure Key Vault
• Strong SQL development skills (complex queries, window functions, tuning); see the sketch after this list.
• Experience with Delta Lake features and Databricks workspace management.
• Experience building and running pipelines on CI/CD frameworks (Azure DevOps preferred).
• Strong understanding of data modeling, partitioning, performance tuning, and job orchestration.
• Experience working with large datasets in distributed environments.
• Excellent communication and problem-solving skills.
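As a companion to the pipeline sketch above, the snippet below shows the kind of window-function SQL this posting calls for, run through Spark SQL against a Delta path. Again, it is illustrative only; the path and column names are hypothetical.

    # Illustrative Spark SQL window-function query (hypothetical Delta path and columns).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("orders-window-demo").getOrCreate()

    top_orders_per_day = spark.sql("""
        SELECT order_date,
               customer_id,
               order_total,
               ROW_NUMBER() OVER (PARTITION BY order_date
                                  ORDER BY order_total DESC) AS rank_in_day,
               SUM(order_total) OVER (PARTITION BY order_date) AS day_total
        FROM delta.`abfss://curated@examplelake.dfs.core.windows.net/orders`
        WHERE order_date >= date_sub(current_date(), 30)
    """).where("rank_in_day <= 3")

    top_orders_per_day.show()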
Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.






