

bigspark
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Glasgow on a 6-month hybrid contract, paying "£X per hour". Requires 3+ years of data engineering experience, strong skills in Python/Scala/Java, and expertise in Big Data technologies and cloud platforms.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 7, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Glasgow, Scotland, United Kingdom
-
🧠 - Skills detailed
#Trino #GIT #Spark (Apache Spark) #Databases #NoSQL #Storage #ETL (Extract, Transform, Load) #DevOps #Linux #Data Lakehouse #Azure #SQL (Structured Query Language) #PostgreSQL #Lambda (AWS Lambda) #Jenkins #Data Vault #Datasets #AWS (Amazon Web Services) #Programming #Compliance #AWS Kinesis #Batch #Kubernetes #Azure Event Hubs #Synapse #Delta Lake #Data Lake #MongoDB #Apache Iceberg #AI (Artificial Intelligence) #Big Data #GitLab #GDPR (General Data Protection Regulation) #Vault #IAM (Identity and Access Management) #Data Engineering #Python #DynamoDB #Java #Scala #S3 (Amazon Simple Storage Service) #Data Quality #Apache Spark #Observability #Snowflake #Databricks #Cloud #GitHub #Airflow #Docker #Terraform #Security #MySQL #GCP (Google Cloud Platform) #Athena #BigQuery #Kafka (Apache Kafka)
Role description
Data Engineer – Glasgow Hybrid – 6-month contract – Inside IR35
About bigspark
We are creating a world of opportunity for businesses by responsibly harnessing data and AI to enable positive change. We adapt to our clients' needs, then bring our engineering, development, and consultancy expertise to bear. Our people and our solutions ensure they head into the future equipped to succeed.
Our clients include Tier 1 banking and insurance institutions, and we have been listed in the Sunday Times Top 100 Fastest Growing Private Companies.
The Role
We're looking for a Data Engineer to develop enterprise-scale data platforms and pipelines that power analytics, AI, and business decision-making. You'll work in a hybrid capacity, which will require 2 days per month onsite at our Glasgow office.
What You'll Do
• Develop highly available, scalable batch and streaming pipelines (ETL/ELT) using modern orchestration frameworks; a minimal orchestration sketch follows this list.
• Integrate and process large, diverse datasets across hybrid and multi-cloud environments.
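To give a flavour of the orchestration work above, here is a minimal sketch of a daily batch ETL pipeline, assuming Apache Airflow 2.4+ (one of the frameworks named in the requirements below) as the orchestrator. The DAG id, task bodies, and schedule are illustrative only, not part of the role specification.

```python
# A minimal sketch of a daily batch ETL pipeline in Apache Airflow.
# The DAG id, schedule, and extract/transform/load logic are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull one day's partition from a source system.
    print(f"extracting partition for {context['ds']}")


def transform(**context):
    # Placeholder: clean and conform the extracted records.
    print(f"transforming partition for {context['ds']}")


def load(**context):
    # Placeholder: write the conformed records to the lakehouse.
    print(f"loading partition for {context['ds']}")


with DAG(
    dag_id="daily_batch_etl",       # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Express the ETL stages as dependent tasks so retries and
    # backfills are handled by the scheduler, not the job code.
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```

In practice the placeholder callables would hand off to Spark jobs or managed services, with the scheduler owning retries, backfills, and SLAs.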
What You'll Bring
• 3+ years of commercial data engineering experience.
• Strong programming skills in Python, Scala, or Java, with clean coding and testing practices.
• Big Data & Analytics Platforms: Hands-on experience with Apache Spark (core, SQL, streaming), Databricks, Snowflake, Flink, Beam (see the streaming sketch after this list).
• Data Lakehouse & Storage Formats: Expert knowledge of Delta Lake, Apache Iceberg, Hudi, and file formats like Parquet, ORC, Avro.
• Streaming & Messaging: Experience with Kafka (including Schema Registry & Kafka Streams), Pulsar, AWS Kinesis, or Azure Event Hubs.
• Data Modelling & Virtualisation: Knowledge of dimensional, Data Vault, and semantic modelling; tools like Denodo or Starburst/Trino.
• Cloud Platforms: Strong AWS experience (Glue, EMR, Athena, S3, Lambda, Step Functions), plus awareness of Azure Synapse, GCP BigQuery.
• Databases: Proficient with SQL and NoSQL stores (PostgreSQL, MySQL, DynamoDB, MongoDB, Cassandra).
• Orchestration & Workflow: Experience with Autosys/CA7/Control-M, Airflow, Dagster, Prefect, or managed equivalents.
• Observability & Lineage: Familiarity with OpenLineage, Marquez, Great Expectations, Monte Carlo, or Soda for data quality.
• DevOps & CI/CD: Proficient in Git (GitHub/GitLab), Jenkins, Terraform, Docker, Kubernetes (EKS/AKS/GKE, OpenShift).
• Security & Governance: Experience with encryption, tokenisation (e.g., Protegrity), IAM policies, and GDPR compliance.
• Linux administration skills and strong infrastructure-as-code experience.
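As an illustration of the Spark, Kafka, and Delta Lake skills listed above, here is a minimal sketch of a Structured Streaming job that lands Kafka events in a Delta table. The broker address, topic name, and paths are assumptions made for the example, and it presumes the spark-sql-kafka and delta-spark packages are on the classpath.

```python
# A minimal sketch of a Spark Structured Streaming job that reads
# events from Kafka and appends them to a Delta Lake table.
# Broker, topic, and paths are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = (
    SparkSession.builder
    .appName("kafka_to_delta_sketch")
    .getOrCreate()
)

# Read the raw event stream; Kafka delivers key/value as binary.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # illustrative broker
    .option("subscribe", "events")                     # illustrative topic
    .load()
    .select(col("value").cast("string").alias("payload"))
)

# Append to a Delta table; the checkpoint makes the stream
# restartable with exactly-once sink semantics.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # illustrative
    .start("/tmp/delta/events")                               # illustrative
)

query.awaitTermination()
```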