BigSpark B.V.

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 6-month hybrid contract in Glasgow, offering competitive pay. Requires 3+ years of data engineering experience, strong skills in Python/Scala/Java, and expertise in Apache Spark, AWS, and data governance.
🌎 - Country
United Kingdom
💱 - Currency
Unknown
💰 - Day rate
Unknown
🗓️ - Date
November 8, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Inside IR35
🔒 - Security
Unknown
📍 - Location detailed
Glasgow
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #DynamoDB #GIT #Spark (Apache Spark) #SQL (Structured Query Language) #AWS Kinesis #Observability #DevOps #Azure #Apache Iceberg #Docker #Datasets #MongoDB #BigQuery #Trino #Data Lake #Data Quality #MySQL #Python #Data Lakehouse #Java #IAM (Identity and Access Management) #Jenkins #Scala #Synapse #GCP (Google Cloud Platform) #Athena #AI (Artificial Intelligence) #Batch #Databases #Terraform #GDPR (General Data Protection Regulation) #Data Vault #Delta Lake #Kafka (Apache Kafka) #Linux #Storage #Cloud #Vault #AWS (Amazon Web Services) #Airflow #GitHub #Compliance #PostgreSQL #Snowflake #GitLab #Lambda (AWS Lambda) #NoSQL #Apache Spark #Azure Event Hubs #Programming #Security #Big Data #Databricks #Data Engineering #ETL (Extract, Transform, Load) #Kubernetes
Role description
Data Engineer - Glasgow (Hybrid) - 6-month contract - Inside IR35

About bigspark
We are creating a world of opportunity for businesses by responsibly harnessing data and AI to enable positive change. We adapt to our clients' needs and bring our engineering, development and consultancy expertise to bear. Our people and our solutions ensure our clients head into the future equipped to succeed. Our clients include Tier 1 banking and insurance organisations, and we have been listed in the Sunday Times Top 100 Fastest Growing Private Companies.

The Role
We're looking for a Data Engineer to develop enterprise-scale data platforms and pipelines that power analytics, AI, and business decision-making. You'll work in a hybrid capacity, with 2 days per month onsite at our Glasgow offices.

What You'll Do
• Develop highly available, scalable batch and streaming pipelines (ETL/ELT) using modern orchestration frameworks (a brief illustrative sketch follows the requirements list below).
• Integrate and process large, diverse datasets across hybrid and multi-cloud environments.

What You'll Bring
• 3+ years of commercial data engineering experience.
• Strong programming skills in Python, Scala, or Java, with clean coding and testing practices.
• Big Data & Analytics Platforms: hands-on experience with Apache Spark (core, SQL, streaming), Databricks, Snowflake, Flink, Beam.
• Data Lakehouse & Storage Formats: expert knowledge of Delta Lake, Apache Iceberg, Hudi, and file formats such as Parquet, ORC, Avro.
• Streaming & Messaging: experience with Kafka (including Schema Registry & Kafka Streams), Pulsar, AWS Kinesis, or Azure Event Hubs.
• Data Modelling & Virtualisation: knowledge of dimensional, Data Vault, and semantic modelling; tools such as Denodo or Starburst/Trino.
• Cloud Platforms: strong AWS experience (Glue, EMR, Athena, S3, Lambda, Step Functions), plus awareness of Azure Synapse and GCP BigQuery.
• Databases: proficient with SQL and NoSQL stores (PostgreSQL, MySQL, DynamoDB, MongoDB, Cassandra).
• Orchestration & Workflow: experience with Autosys/CA7/Control-M, Airflow, Dagster, Prefect, or managed equivalents.
• Observability & Lineage: familiarity with OpenLineage, Marquez, Great Expectations, Monte Carlo, or Soda for data quality.
• DevOps & CI/CD: proficient in Git (GitHub/GitLab), Jenkins, Terraform, Docker, Kubernetes (EKS/AKS/GKE, OpenShift).
• Security & Governance: experience with encryption, tokenisation (e.g., Protegrity), IAM policies, and GDPR compliance.
• Linux administration skills and strong infrastructure-as-code experience.
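For illustration only, the sketch below shows the flavour of batch ETL work described above: a minimal PySpark job that reads raw data, applies basic cleansing and aggregation, and writes curated Parquet. It is not part of the role requirements; the bucket paths, table names, and columns are hypothetical, and a production pipeline would typically run under an orchestrator such as Airflow with a streaming counterpart alongside it.

```python
# Minimal illustrative sketch only; assumes a local PySpark installation.
# Paths, column names, and schema are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("example-batch-etl")
    .getOrCreate()
)

# Extract: read raw transaction files (hypothetical location)
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3a://example-bucket/raw/transactions/")
)

# Transform: deduplicate, drop null amounts, and build a daily aggregate
daily_totals = (
    raw.dropDuplicates(["transaction_id"])
    .filter(F.col("amount").isNotNull())
    .withColumn("transaction_date", F.to_date("created_at"))
    .groupBy("transaction_date", "account_id")
    .agg(F.sum("amount").alias("daily_total"))
)

# Load: write partitioned Parquet for downstream analytics
(
    daily_totals.write
    .mode("overwrite")
    .partitionBy("transaction_date")
    .parquet("s3a://example-bucket/curated/daily_totals/")
)

spark.stop()
```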