

BAPON Technologies
Databricks Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer on a contract basis, offering a pay rate of $XX/hour. The position requires expertise in Databricks, Apache Spark, and cloud platforms (preferably Azure), with a focus on building scalable data pipelines.
Country: United Kingdom
Currency: £ GBP
Day rate: Unknown
Date: April 1, 2026
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: Luton, England, United Kingdom
Skills detailed
#ADF (Azure Data Factory) #S3 (Amazon Simple Storage Service) #Data Processing #Big Data #AWS (Amazon Web Services) #Databases #Azure Data Factory #Computer Science #BI (Business Intelligence) #PySpark #Spark (Apache Spark) #Delta Lake #Data Engineering #Cloud #Storage #Compliance #Batch #GIT #Version Control #GCP (Google Cloud Platform) #Python #Data Governance #Apache Spark #Tableau #Data Modeling #Data Pipeline #ADLS (Azure Data Lake Storage) #ETL (Extract, Transform, Load) #Microsoft Power BI #Security #Data Science #Databricks #Data Storage #Airflow #Scala #Programming #SQL (Structured Query Language) #Monitoring #Datasets #Data Analysis #Azure #Logging
Role description
Databricks Engineer with strong experience in building scalable data pipelines and processing large datasets using Apache Spark. Skilled in designing efficient ETL workflows, optimising performance, and working within cloud-based data platforms. Experienced in delivering reliable, high-quality data solutions that support analytics and business decision-making.
Key Responsibilities
• Develop and maintain scalable data pipelines using Databricks and Apache Spark
• Design and implement ETL/ELT processes for structured and unstructured data
• Work with Delta Lake to ensure data reliability, versioning, and performance (a minimal PySpark sketch follows this list)
• Optimise Spark jobs for speed, cost efficiency, and resource utilisation
• Ingest data from multiple sources including APIs, databases, and streaming platforms
• Collaborate with data analysts, data scientists, and stakeholders to deliver data solutions
• Implement data validation, monitoring, and logging frameworks
• Manage and orchestrate workflows using tools like Databricks Workflows or Azure Data Factory
• Ensure data governance, security, and compliance best practices
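To make the Delta Lake and ETL items above concrete, here is a minimal batch-pipeline sketch in PySpark. It assumes a Databricks runtime (or any Spark session with the delta-spark package configured); the paths and column names are hypothetical examples, not part of this role's actual codebase.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw JSON landed by an upstream ingestion job (hypothetical path).
raw = spark.read.json("/mnt/landing/orders/")

# Transform: de-duplicate, type the timestamp, keep only valid rows.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("amount") > 0)
)

# Load: Delta provides ACID appends, schema enforcement, and versioning.
(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("order_date")
      .save("/mnt/curated/orders"))

# Delta time travel: re-read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/mnt/curated/orders")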
Technical Skills
• Databricks (core platform expertise)
• Apache Spark (PySpark / SQL / Scala)
• Programming: Python, SQL
• Cloud Platforms: Azure (preferred), AWS, or GCP
• Data Storage: Delta Lake, ADLS / S3 / GCS
• Orchestration: Azure Data Factory, Airflow, or Databricks Workflows (see the Airflow sketch after this list)
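For the orchestration line above, one plausible setup is an Airflow DAG submitting a Databricks run via the official provider. Everything named below (DAG id, connection id, cluster spec, notebook path) is a hypothetical example; it assumes apache-airflow 2.4+ with the apache-airflow-providers-databricks package installed.

from datetime import datetime
from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="orders_etl_daily",                 # hypothetical DAG name
    start_date=datetime(2026, 4, 1),
    schedule="@daily",                         # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    # Submit a one-off Databricks run that executes the ETL notebook.
    run_etl = DatabricksSubmitRunOperator(
        task_id="run_orders_etl",
        databricks_conn_id="databricks_default",   # connection set up in Airflow
        new_cluster={
            "spark_version": "13.3.x-scala2.12",
            "node_type_id": "Standard_DS3_v2",     # Azure VM type, per the Azure preference
            "num_workers": 2,
        },
        notebook_task={"notebook_path": "/Repos/etl/orders_etl"},
    )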
Additional Skills
• Experience with data modeling (star schema, dimensional modeling)
• Understanding of batch and streaming data processing (illustrated in the sketch after this list)
• Familiarity with CI/CD pipelines and version control (Git)
• Knowledge of performance tuning and query optimization
• Exposure to BI tools like Power BI or Tableau
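To illustrate the batch-versus-streaming point, the sketch below reworks the earlier batch job as a Structured Streaming query with a windowed aggregate. As before, the paths, columns, and window sizes are hypothetical, and a Delta-enabled Spark runtime is assumed.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

# Read new rows incrementally from a Delta table acting as a streaming source.
events = spark.readStream.format("delta").load("/mnt/bronze/order_events")

# Windowed aggregation; the watermark bounds how late data may arrive.
revenue = (
    events.withWatermark("event_ts", "10 minutes")
          .groupBy(F.window("event_ts", "5 minutes"), "region")
          .agg(F.sum("amount").alias("revenue"))
)

# Append finalised windows to a Delta sink; the checkpoint makes the query
# restartable with exactly-once guarantees.
query = (
    revenue.writeStream
           .format("delta")
           .outputMode("append")
           .option("checkpointLocation", "/mnt/checkpoints/orders_stream")
           .start("/mnt/silver/revenue_5m")
)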
Qualifications
• Bachelor's degree in Computer Science, Engineering, or a related field
• Relevant experience in data engineering or big data technologies
• Databricks or cloud certifications (preferred but not mandatory)
Key Strengths
• Strong problem-solving and analytical thinking
• Ability to work in fast-paced, client-focused environments
• Clear communication and collaboration skills
• Focus on building scalable and maintainable data solutions






