

HelloKindred
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract until November 30, 2026, offering a hybrid work setup. Key skills include Kafka, OpenTelemetry, Splunk integration, and Python. Experience in data pipelines, observability standards, and security practices is required.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
February 7, 2026
Duration
More than 6 months
-
Location
Hybrid
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Sheffield, England, United Kingdom
-
Skills detailed
#Data Engineering #Logging #Documentation #Python #Security #ETL (Extract, Transform, Load) #JSON (JavaScript Object Notation) #Compliance #Observability #Cloud #Data Pipeline #Data Quality #Prometheus #Kafka (Apache Kafka) #Monitoring #CLI (Command-Line Interface) #Kubernetes #Splunk
Role description
Who is HelloKindred?
HelloKindred are specialists in staffing marketing, creative and technology roles, offering a range of talent solutions that can be delivered on-site, remotely or hybrid.
Our vision is to make work accessible and people's lives better. We do this by disrupting traditional employment barriers, connecting ambitious talent to flexible opportunities with trusted brands.
Job Description
Anticipated Contract End Date/Length: November 30, 2026
Work setup: Hybrid (3 days per week in office)
Our client in the Information Technology and Services industry is looking for a Data Engineer to design, build, and optimize streaming data pipelines focused on OpenShift telemetry, observability, and proactive monitoring capabilities. This role plays a key part in enabling multi-tenant observability, ensuring data quality, lineage, and reliability across streaming platforms, while integrating telemetry into Splunk for advanced dashboards, alerting, and analytics. The position requires strong expertise in Kafka-based pipelines, OpenTelemetry instrumentation, schema governance, and collaboration with platform, SRE, and application teams to achieve Observability Level 4 maturity.
What you will do:
• Design, implement, and maintain data pipelines to ingest and process OpenShift telemetry, including metrics, logs, and traces, at scale.
• Stream OpenShift telemetry via Kafka and build resilient consumer services for transformation and enrichment.
• Engineer data models and routing for multi-tenant observability and ensure lineage, quality, and SLAs across the streaming layer.
• Integrate processed telemetry into Splunk for dashboards, alerting, analytics, and proactive insights (a consumer-to-HEC sketch follows this list).
• Implement schema management using Avro or Protobuf, including governance and versioning for telemetry events.
• Build automated validation, replay, and backfill mechanisms to ensure data reliability and recovery.
• Instrument services with OpenTelemetry and standardize tracing, metrics, and structured logging across platforms.
• Use LLM capabilities to enhance observability through query assistance, anomaly summarization, and runbook generation.
• Collaborate with platform, SRE, and application teams to integrate telemetry, alerts, and SLOs.
• Ensure security, compliance, and best practices across data pipelines and observability platforms.
• Document data flows, schemas, dashboards, and operational runbooks.
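To make the Kafka-to-Splunk responsibilities concrete, here is a minimal sketch of the kind of consumer service the role describes. It is an illustration only, not the client's implementation: the topic name openshift.telemetry.logs, the sourcetype, the tenant-mapping rule, and the SPLUNK_HEC_URL/SPLUNK_HEC_TOKEN environment variables are hypothetical placeholders, and it assumes the confluent-kafka and requests packages.

```python
# Minimal sketch: consume OpenShift telemetry from Kafka, enrich it,
# and forward it to Splunk via the HTTP Event Collector (HEC).
# Topic name, sourcetype, tenant-mapping logic, and the HEC host/token
# are hypothetical placeholders, not details from this posting.
import json
import os
import time

import requests
from confluent_kafka import Consumer

HEC_URL = os.environ["SPLUNK_HEC_URL"]      # e.g. https://splunk.example:8088/services/collector/event
HEC_TOKEN = os.environ["SPLUNK_HEC_TOKEN"]  # HEC token provisioned in Splunk

consumer = Consumer({
    "bootstrap.servers": os.environ.get("KAFKA_BROKERS", "localhost:9092"),
    "group.id": "telemetry-enricher",
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,  # commit manually, after Splunk accepts the event
})
consumer.subscribe(["openshift.telemetry.logs"])  # hypothetical topic

def enrich(event: dict) -> dict:
    # Attach tenant/routing metadata for multi-tenant observability.
    event["tenant"] = event.get("kubernetes", {}).get("namespace_name", "unknown")
    return event

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = enrich(json.loads(msg.value()))
        payload = {
            "time": time.time(),
            "sourcetype": "openshift:telemetry",  # hypothetical sourcetype
            "event": event,
        }
        resp = requests.post(
            HEC_URL,
            json=payload,
            headers={"Authorization": f"Splunk {HEC_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        consumer.commit(msg)  # at-least-once: commit only after HEC accepts
finally:
    consumer.close()
```

Committing offsets only after Splunk acknowledges the event is one simple way to get at-least-once delivery; a production pipeline would add batching, retries, dead-letter handling, and the schema validation and replay mechanisms listed above.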
Qualifications
• Hands-on experience building streaming data pipelines with Kafka, including producers, consumers, schema registry, and Kafka tooling such as Connect, KSQL, or Kafka Streams.
• Proficiency with OpenShift and Kubernetes telemetry, including OpenTelemetry, Prometheus, and CLI tooling (a minimal instrumentation sketch follows this list).
• Experience integrating telemetry into Splunk using HEC (HTTP Event Collector), UF (Universal Forwarder), sourcetypes, and CIM (Common Information Model), and building dashboards and alerting.
• Strong data engineering skills using Python or similar languages for ETL, ELT, enrichment, and validation.
• Knowledge of event schemas, including Avro, Protobuf, and JSON, with an understanding of backward and forward compatibility.
• Familiarity with observability standards and the ability to drive proactive monitoring and automated insights.
• Understanding of hybrid cloud and multi-cluster telemetry patterns.
• Knowledge of security and compliance practices for data pipelines, including RBAC, encryption, and secrets management.
• Strong problem-solving, collaboration, communication, and documentation skills.
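As a hedged illustration of the OpenTelemetry instrumentation this role calls for, the sketch below configures a tracer and wraps one pipeline stage in a span. The service name, span name, and attributes are hypothetical; it assumes the opentelemetry-sdk package, and a real deployment would export spans to a collector (e.g. via OTLP) rather than the console exporter used here for demonstration.

```python
# Minimal sketch: standardized tracing with the OpenTelemetry Python SDK.
# Service name, span names, and attributes are hypothetical examples.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "telemetry-enricher"})
)
# Console exporter for demonstration; swap in an OTLP exporter in practice.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def process_event(event: dict) -> dict:
    # Each pipeline stage gets its own span so latency and failures are
    # attributable per stage and per tenant.
    with tracer.start_as_current_span("enrich-event") as span:
        span.set_attribute("tenant.id", event.get("tenant", "unknown"))
        span.set_attribute("event.size_bytes", len(str(event)))
        return event

process_event({"tenant": "team-a", "message": "pod restarted"})
```

Standardizing span and attribute names in this way across services is what makes cross-team dashboards, alerts, and SLOs in Splunk consistent, which is the collaboration goal the bullets above describe.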
Additional Information
All your information will be kept confidential according to EEO guidelines.
Candidates must be legally authorized to live and work in the country where the position is based, without requiring employer sponsorship.
HelloKindred is committed to fair, transparent, and inclusive hiring practices. We assess candidates based on skills, experience, and role-related requirements.
We appreciate your interest in this opportunity. While we review every application carefully, only candidates selected for an interview will be contacted.
HelloKindred is an equal opportunity employer. We welcome applicants of all backgrounds and do not discriminate on the basis of race, colour, religion, sex, gender identity or expression, sexual orientation, age, national origin, disability, veteran status, or any other protected characteristic under applicable law.