eTeam

Sr. Databricks Engineer (AWS)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Databricks Engineer (AWS) based in Glasgow, with a contract until 31/12/2026, offering £402/day. Key skills include Databricks, Apache Spark, AWS services, and Python. Experience in data migration, preferably in the Finance industry, is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
402
-
🗓️ - Date
November 4, 2025
🕒 - Duration
Until 31/12/2026
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Glasgow, Scotland, United Kingdom
-
🧠 - Skills detailed
#Spark (Apache Spark) #Databricks #Delta Lake #Data Migration #Data Pipeline #ETL (Extract, Transform, Load) #IAM (Identity and Access Management) #Data Governance #PySpark #AWS (Amazon Web Services) #Athena #MLflow #Version Control #Security #Cloud #Migration #Lambda (AWS Lambda) #Data Quality #S3 (Amazon Simple Storage Service) #VPC (Virtual Private Cloud) #Python #Unit Testing #Apache Spark #Scala #Data Engineering #GitLab
Role description
Role Title: Sr. Databricks Engineer (AWS)
Location: Glasgow
Duration: Until 31/12/2026
Days on site: 2-3
Rate: £402/day via Umbrella

Role Description:
We are currently migrating our data pipelines from AWS to Databricks and are seeking a Senior Databricks Engineer to lead and contribute to this transformation. This is a hands-on engineering role focused on designing, building, and optimizing scalable data solutions using the Databricks platform.

Key Responsibilities:
• Lead the migration of existing AWS-based data pipelines to Databricks.
• Design and implement scalable data engineering solutions using Apache Spark on Databricks.
• Collaborate with cross-functional teams to understand data requirements and translate them into efficient pipelines.
• Optimize performance and cost-efficiency of Databricks workloads.
• Develop and maintain CI/CD workflows for Databricks using GitLab or similar tools.
• Ensure data quality and reliability through robust unit testing and validation frameworks.
• Implement best practices for data governance, security, and access control within Databricks.
• Provide technical mentorship and guidance to junior engineers.

Must-Have Skills:
• Strong hands-on experience with Databricks and Apache Spark (preferably PySpark).
• Proven track record of building and optimizing data pipelines in cloud environments.
• Experience with AWS services such as S3, Glue, Lambda, Step Functions, Athena, IAM, and VPC.
• Proficiency in Python for data engineering tasks.
• Familiarity with GitLab for version control and CI/CD.
• Strong understanding of unit testing and data validation techniques.

Preferred Qualifications:
• Experience with Databricks Delta Lake, Unity Catalog, and MLflow.
• Knowledge of CloudFormation or other infrastructure-as-code tools.
• AWS or Databricks certifications.
• Experience in large-scale data migration projects.
• Background in the finance industry.