VE3

Senior Data Engineer (Contract) | AWS Glue & Kafka | Outside IR35

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Senior Data Engineer (Contract) position for 3 months, paying a "competitive rate" and located "remotely". It requires ~10 years of experience with AWS Glue, Kafka, PySpark, data modeling, and CI/CD; strong communication skills are essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
November 7, 2025
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
London, England, United Kingdom
-
🧠 - Skills detailed
#Data Modeling #Spark (Apache Spark) #ETL (Extract, Transform, Load) #AWS S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Infrastructure as Code (IaC) #Batch #PySpark #RDS (Amazon Relational Database Service) #IAM (Identity and Access Management) #Data Engineering #Python #S3 (Amazon Simple Storage Service) #Observability #Data Pipeline #AWS Glue #Cloud #Terraform #Security #Monitoring #Kafka (Apache Kafka)
Role description
Title: Senior Data Engineer (Contract) | AWS Glue & Kafka | Outside IR35
Length: 3 months
Start: ASAP
IR35: Outside

About The Role
We're seeking a hands-on Senior Data Engineer (~10 years' experience) to deliver production data pipelines on AWS. You'll design and build streaming (Kafka) and batch pipelines using Glue/EMR (PySpark), implement data contracts and quality gates, and set up CI/CD and observability. You've shipped real systems, coached teams, and you document as you go.

Requirements

What You'll Do
• Architect and deliver lake/lakehouse data flows on AWS (S3 + Glue + Glue ETL/EMR).
• Build Kafka consumers/producers; manage schema evolution, resilience, and DLQs.
• Implement PySpark transformations, CDC merges, partitioning, and optimization.
• Add quality/observability (tests, monitoring, alerting, lineage basics).
• Harden security (IAM least privilege, KMS, private networking).
• Create runbooks, diagrams, and handover materials.

What You'll Bring
• Deep AWS experience (Glue, RDS, S3, EMR, IAM/KMS, CloudWatch).
• Strong Kafka (MSK/Confluent, schema registry, consumer group tuning).
• Python/PySpark in production with tests and CI/CD.
• Data modeling (bronze/silver/gold, CDC, SCD2) and data contracts.
• IaC (Terraform/CDK) and cost/performance tuning experience.
• Clear communication and stakeholder engagement.

Benefits

Why Join Us?
• Work on cutting-edge technologies and impactful projects.
• Opportunities for career growth and development.
• Collaborative and inclusive work environment.
• Competitive salary and benefits package.