Tixy Tech

Databricks Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Databricks Engineer; contract length and pay rate are unspecified. The position is remote and requires US citizenship. Key skills include Python, PySpark, Databricks Workflows, and Azure services, with 7+ years of data engineering experience required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 24, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Azure cloud #Distributed Computing #Azure #ADLS (Azure Data Lake Storage) #DevOps #Python #Programming #Scala #Cloud #Agile #Batch #ETL (Extract, Transform, Load) #Azure Databricks #Scrum #Data Processing #Data Quality #Data Engineering #SQL (Structured Query Language) #API (Application Programming Interface) #PySpark #Databricks #Spark (Apache Spark)
Role description
Job Title: Senior Databricks Engineer
Work Permit: US Citizen Only

Job Summary
We are seeking an experienced Databricks Engineer with strong expertise in building modern data applications and pipelines on the Azure Databricks platform. The ideal candidate will have deep hands-on experience with Python, PySpark, Databricks Workflows, and API integrations, along with a solid understanding of data engineering best practices and CI/CD pipelines.

Key Responsibilities
• Design and build Databricks Apps to support advanced data and analytics workflows
• Develop and maintain Python-based applications on the Databricks platform
• Integrate APIs, Databricks Workflows, and Jobs into end-to-end application flows
• Build and manage batch ETL pipelines using Databricks and Azure data services
• Ensure data quality, performance, and reliability across data platforms
• Collaborate with cross-functional teams to translate business requirements into scalable technical solutions
• Implement and support CI/CD pipelines and DevOps best practices
• Optimize data processing using PySpark and distributed computing techniques

Must Have Skills
• Strong experience with Databricks Workflows and Jobs
• Hands-on exposure to Databricks Genie
• Proficiency in Python and PySpark
• Experience with Azure cloud services (Data Factory, ADLS, etc.)
• Expertise in API integration
• Strong knowledge of ETL pipeline development
• Experience with CI/CD and DevOps practices

Nice to Have
• Experience with Databricks Apps development

Qualifications
• 7+ years of experience in data engineering or platform development
• Proven hands-on experience building Databricks Apps in production environments
• Strong programming skills in Python, PySpark, and SQL
• Experience working with Azure-based data platforms
• Familiarity with Agile/Scrum methodologies

Key Competencies
• Strong problem-solving and analytical skills
• Ability to work independently and in collaborative environments
• Excellent communication and stakeholder management skills
• Focus on performance optimization and scalable design