

Inara
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with strong Python and Databricks skills, offering £500 - £550 per day for a 3-month remote contract. Key requirements include experience in production data environments, data modeling, and cloud platforms (AWS, Azure, GCP).
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
💰 - Day rate
550
🗓️ - Date
January 27, 2026
🕒 - Duration
3 to 6 months
🏝️ - Location
Remote
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
Edinburgh, Scotland, United Kingdom
🧠 - Skills detailed
#Data Pipeline #GCP (Google Cloud Platform) #Databricks #Data Processing #Cloud #Azure #Monitoring #Data Engineering #Data Governance #Scala #Observability #Spark (Apache Spark) #Data Science #Data Architecture #Data Quality #AWS (Amazon Web Services) #Python #ETL (Extract, Transform, Load) #dbt (data build tool)
Role description
Senior Data Engineer | Python | Databricks | Production Data Platforms
Rate: £500 - £550 per day (Inside IR35)
Contract Length: 3 months
Workplace: Remote, with very occasional visits to the client site
We’re working with a forward-thinking organisation that’s building modern, production-grade data platforms and is looking for a Senior Data Engineer to play a key role in shaping how data is engineered, deployed, and consumed across the business.
This is a hands-on role for someone who enjoys solving complex data problems and taking data models from concept to production.
What you’ll be doing
• Designing, building, and maintaining scalable data pipelines using Python
• Working extensively with Databricks to develop and optimise data processing workflows (see the sketch after this list)
• Productionising data models, ensuring they are robust, reliable, and performant
• Collaborating with analytics, data science, and platform teams to deliver high-quality data products
• Improving data quality, monitoring, and observability across the platform
• Influencing technical direction and best practices within the data engineering function
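To give a flavour of the day-to-day work, here is a minimal sketch of the kind of pipeline step described above. It assumes a Databricks/PySpark runtime with Delta Lake available; the table and column names (raw_events, clean_events, event_id, event_ts) are hypothetical placeholders, not anything from the client's actual platform.

```python
# Illustrative only: a minimal PySpark cleaning step of the kind this role involves.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("clean-events").getOrCreate()

# Read a raw ingestion table and apply basic cleaning rules.
raw = spark.read.table("raw_events")
clean = (
    raw.dropDuplicates(["event_id"])                     # de-duplicate on business key
       .filter(F.col("event_ts").isNotNull())            # drop rows missing a timestamp
       .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

# A lightweight data-quality gate: fail the job rather than publish bad data.
null_ids = clean.filter(F.col("event_id").isNull()).count()
if null_ids > 0:
    raise ValueError(f"{null_ids} rows have a null event_id; aborting write")

# Publish as a Delta table, partitioned for downstream consumers.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("event_date")
      .saveAsTable("clean_events"))
```

Failing the job on a quality check before the write, rather than after, is a common way to keep bad data out of published tables and is one pattern behind the "data quality, monitoring, and observability" point above.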
What we’re looking for
• Strong experience as a Senior Data Engineer in production environments
• Advanced Python skills for data engineering
• Hands-on experience with Databricks (Spark, notebooks, jobs, workflows)
• Proven experience building and productionising data models
• Solid understanding of data warehousing, ETL/ELT, and modern data architectures
• Experience working in cloud environments (AWS, Azure, or GCP)
Nice to have
• Experience with dbt or similar transformation tools
• Exposure to CI/CD for data pipelines (see the sketch after this list)
• Knowledge of data governance, lineage, or observability tooling
• Experience working in consultancy or multi-client environments
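As a sketch of what "CI/CD for data pipelines" can look like in practice, the pytest-style test below exercises a small transformation function against an in-memory sample. The function, column names, and expected behaviour are illustrative assumptions, not part of this role's actual codebase.

```python
# Illustrative only: a pytest-style unit test of the sort that might run in a
# pipeline's CI. The transform and its column names are hypothetical.
from datetime import date, datetime


def derive_event_date(rows):
    """Drop rows without a timestamp and add an event_date field."""
    out = []
    for row in rows:
        ts = row.get("event_ts")
        if ts is None:
            continue  # mirror the pipeline's null-timestamp filter
        out.append({**row, "event_date": ts.date()})
    return out


def test_derive_event_date_filters_nulls_and_adds_date():
    rows = [
        {"event_id": 1, "event_ts": datetime(2026, 1, 27, 9, 30)},
        {"event_id": 2, "event_ts": None},  # should be dropped
    ]
    result = derive_event_date(rows)
    assert len(result) == 1
    assert result[0]["event_date"] == date(2026, 1, 27)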