

Infinity Quest
Data Engineering Tech Lead
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineering Tech Lead on a contract of "X months" at a pay rate of "$Y/hour". Key skills include Kafka, OpenShift telemetry, Python, and Splunk integration. Experience with observability standards and security compliance is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 29, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Greater Sheffield Area
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #Logging #Observability #Compliance #CLI (Command-Line Interface) #Splunk #Cloud #Data Engineering #Python #Security #Documentation #Data Pipeline #Visualization #Monitoring #ETL (Extract, Transform, Load) #REST (Representational State Transfer) #Prometheus #JSON (JavaScript Object Notation) #Kubernetes
Role description
Key Responsibilities:
• Design, implement, and maintain data pipelines to ingest and process OpenShift telemetry (metrics, logs, traces) at scale.
• Stream OpenShift telemetry via Kafka (producers, topics, schemas) and build resilient consumer services for transformation and enrichment.
• Engineer data models and routing for multi-tenant observability; ensure lineage, quality, and SLAs across the stream layer.
• Integrate processed telemetry into Splunk for visualization, dashboards, alerting, and analytics to achieve Observability Level 4 (proactive insights); a minimal pipeline sketch follows this list.
• Implement schema management (Avro/Protobuf), governance, and versioning for telemetry events.
• Build automated validation, replay, and backfill mechanisms for data reliability and recovery.
• Instrument services with OpenTelemetry; standardize tracing, metrics, and structured logging across platforms.
• Use LLMs to enhance observability capabilities (e.g., query assistance, anomaly summarization, runbook generation).
• Collaborate with platform, SRE, and application teams to integrate telemetry, alerts, and SLOs.
• Ensure security, compliance, and best practices for data pipelines and observability platforms.
• Document data flows, schemas, dashboards, and operational runbooks.
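For illustration only, here is a minimal sketch of the kind of streaming pipeline described above, assuming the confluent-kafka Python client and a Splunk HTTP Event Collector (HEC) endpoint. The topic name, HEC URL, index/sourcetype values, and the enrich helper are placeholders, not details taken from the role.

```python
# Minimal sketch (assumptions: confluent-kafka installed, Splunk HEC reachable at SPLUNK_HEC_URL,
# placeholder topic/token/index values). Consumes OpenShift telemetry events from Kafka,
# enriches them with tenant metadata, and forwards them to Splunk HEC.
import json
import os

import requests
from confluent_kafka import Consumer

SPLUNK_HEC_URL = os.environ.get("SPLUNK_HEC_URL", "https://splunk.example.com:8088/services/collector/event")
SPLUNK_HEC_TOKEN = os.environ["SPLUNK_HEC_TOKEN"]  # secret injected via env var / secret manager

consumer = Consumer({
    "bootstrap.servers": os.environ.get("KAFKA_BROKERS", "localhost:9092"),
    "group.id": "telemetry-splunk-forwarder",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["openshift.telemetry.logs"])  # placeholder topic name

def enrich(event: dict) -> dict:
    """Hypothetical enrichment step: attach tenant/routing metadata for multi-tenant observability."""
    event["tenant"] = event.get("kubernetes", {}).get("namespace_name", "unknown")
    return event

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        event = enrich(json.loads(msg.value()))
        # Splunk HEC expects a JSON envelope; sourcetype and index here are placeholders.
        payload = {"event": event, "sourcetype": "openshift:telemetry", "index": "observability"}
        resp = requests.post(
            SPLUNK_HEC_URL,
            headers={"Authorization": f"Splunk {SPLUNK_HEC_TOKEN}"},
            json=payload,
            timeout=5,
        )
        resp.raise_for_status()
finally:
    consumer.close()
```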
Required Skills:
• Hands-on experience building streaming data pipelines with Kafka (producers/consumers, schema registry, Kafka Connect/KSQL/KStream).
• Proficiency with OpenShift/Kubernetes telemetry (OpenTelemetry, Prometheus) and CLI tooling; see the instrumentation sketch after this list.
• Experience integrating telemetry into Splunk (HEC, UF, sourcetypes, CIM), building dashboards and alerting.
• Strong data engineering skills in Python (or similar) for ETL/ELT, enrichment, and validation.
• Knowledge of event schemas (Avro/Protobuf/JSON), contracts, and backward/forward compatibility.
• Familiarity with observability standards and practices; ability to drive toward Level 4 maturity (proactive monitoring, automated insights).
• Understanding of hybrid cloud and multi-cluster telemetry patterns.
• Security and compliance for data pipelines: secret management, RBAC, encryption in transit/at rest.
• Good problem-solving skills and ability to work in a collaborative team environment.
• Strong communication and documentation skills.
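As a hedged illustration of the OpenTelemetry instrumentation mentioned above (not part of the role description), the sketch below shows Python tracing exported over OTLP. The collector endpoint, service name, and span/attribute names are assumptions.

```python
# Minimal OpenTelemetry tracing sketch (assumptions: opentelemetry-sdk and the OTLP gRPC
# exporter are installed; an OTel Collector listens on OTEL_ENDPOINT). Service and span
# names are illustrative placeholders.
import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

OTEL_ENDPOINT = os.environ.get("OTEL_ENDPOINT", "http://otel-collector:4317")

# Tag every span with the service name so telemetry can be routed per tenant/service.
provider = TracerProvider(resource=Resource.create({"service.name": "telemetry-enricher"}))
provider.add_span_processor(BatchSpanProcessor(OTLPSpanExporter(endpoint=OTEL_ENDPOINT, insecure=True)))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def process_batch(batch_id: str, record_count: int) -> None:
    """Wrap a unit of pipeline work in a span with structured attributes."""
    with tracer.start_as_current_span("process_batch") as span:
        span.set_attribute("pipeline.batch_id", batch_id)
        span.set_attribute("pipeline.record_count", record_count)
        # ... transformation / enrichment work would happen here ...

process_batch("batch-001", 128)
```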






