

Gigged.AI
Data Engineer with Databricks Experience - Inside IR35 - UK Based
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with Databricks experience: remote, UK-based, inside IR35, running roughly three months from the start of October until Christmas. Key skills include PySpark, Spark SQL, and ETL/ELT pipeline optimization. Experience in data modeling and problem-solving is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
Unknown
🗓️ - Date
October 2, 2025
🕒 - Duration
3 to 6 months
🏝️ - Location
Remote
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
United Kingdom
🧠 - Skills detailed
#AI (Artificial Intelligence) #Data Modeling #Databricks #Migration #Spark (Apache Spark) #Datasets #Spark SQL #Scala #Version Control #Documentation #Data Pipeline #PySpark #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Lake #Data Architecture #Data Engineering
Role description
Gigged.AI is a freelance talent marketplace specialising in the IT and technology sector. We currently have a live opportunity with one of our clients for a Data Engineer with Databricks experience - Inside IR35 - UK Based.
If you are interested, you must submit a proposal for this gig through the Gigged.AI website. You can do so here: https://app.gigged.ai/
Location: Remote, UK based
IR35: Inside IR35
Duration: Start of October until Christmas
We are seeking skilled Data Engineers with strong Databricks expertise to join our team on a contract basis. This opportunity is with a large, household-name energy client undergoing a multi-year migration and reimagining of their data estate, centered around a move from a legacy data lake into Databricks.
Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using PySpark and Spark SQL within Databricks (see the illustrative sketch after this list).
- Produce clean, reliable datasets and data models to support functional teams within the organization.
- Transform and integrate data from multiple sources, optimizing processes for scalability and performance.
- Collaborate with data architects, analysts, and stakeholders to translate business requirements into technical solutions.
- Follow best practices for data engineering, including testing, documentation, and version control.
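To give a flavour of the pipeline work in the first responsibility above, here is a minimal, illustrative PySpark/Spark SQL sketch. All table names, columns, and the energy-usage framing are hypothetical; the client's actual data estate is not described in this listing.

```python
# Minimal, hypothetical sketch of a Databricks pipeline step: clean raw data,
# aggregate it with Spark SQL, and publish a curated table. Table and column
# names are invented for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Read raw meter readings from a hypothetical source table.
raw = spark.table("raw.meter_readings")

# Clean and standardise: deduplicate, parse timestamps, drop bad rows.
clean = (
    raw.dropDuplicates(["meter_id", "reading_ts"])
       .withColumn("reading_ts", F.to_timestamp("reading_ts"))
       .filter(F.col("kwh") >= 0)
)

# Aggregate with Spark SQL to produce a dataset for downstream data models.
clean.createOrReplaceTempView("clean_readings")
daily = spark.sql("""
    SELECT meter_id,
           DATE(reading_ts) AS reading_date,
           SUM(kwh)         AS total_kwh
    FROM clean_readings
    GROUP BY meter_id, DATE(reading_ts)
""")

# Publish as a managed table (Delta by default on Databricks).
daily.write.mode("overwrite").saveAsTable("curated.daily_usage")
```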
Required Skills & Experience:
- Strong hands-on experience with Databricks.
- Proficiency in PySpark and SQL (especially Spark SQL).
- Experience building and optimizing ETL/ELT pipelines for large-scale datasets (a small optimisation sketch follows this list).
- Solid understanding of data modeling concepts.
- Strong problem-solving and communication skills.
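As a small illustration of the pipeline-optimisation experience listed above, the snippet below shows two standard Spark techniques: broadcasting a small dimension table to avoid a shuffle-heavy join, and partitioning output so downstream reads can prune files. Again, all table and column names are hypothetical.

```python
# Illustrative only: two common Spark optimisations for large-scale ETL.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-optimisation-demo").getOrCreate()

facts = spark.table("raw.consumption_events")   # large fact table (hypothetical)
sites = spark.table("reference.sites")          # small dimension table (hypothetical)

# Broadcast the small dimension so the join avoids a full shuffle.
enriched = facts.join(F.broadcast(sites), on="site_id", how="left")

# Partition the output by date so downstream reads can prune files.
(enriched
    .withColumn("event_date", F.to_date("event_ts"))
    .write.mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("curated.consumption_enriched"))
```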