

Intelliswift Software
Data Engineer - Apache Flink
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer - Apache Flink in London, offering a contract length of "unknown" and a pay rate of "unknown." Key skills include Java, Apache Flink, Kafka, Docker, Kubernetes, and Terraform, with a focus on streaming data pipelines and distributed systems.
Country
United Kingdom
Currency
£ GBP
-
Day rate
Unknown
-
Date
March 20, 2026
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
London Area, United Kingdom
-
Skills detailed
#Automation #Data Engineering #Terraform #Data Pipeline #API (Application Programming Interface) #Debugging #Kubernetes #Kafka (Apache Kafka) #GIT #Docker #Java #Metadata #Programming #AWS (Amazon Web Services) #Version Control #Deployment
Role description
Data Engineer
London
WORK REQUIREMENTS
• Excellent expertise in object-oriented programming with Java and APIs.
• Experience building streaming data pipelines using Apache Flink.
• Experience with distributed systems, especially Kafka and its ecosystem components such as gateways, publishers, and consumers.
• Good understanding of containerization using Docker, Kubernetes, and EKS.
• Good experience with deployment tools such as Terraform.
• Understanding of version control (Git) and CI/CD workflows.
• Understanding of file formats such as Parquet and Avro.
• Debugging and troubleshooting skills.
• Good communication and excellent collaboration skills to interact with internal customers when collecting requirements.
Deliverables:
• Automate cluster setup in Machinery for the PFA customer, including service deployment (Kafka, Schema Registry, Metadata Service, Data Platform Gateway, and Kafka Admin Operator) in new AWS accounts.
• Integrate PFA with Kafka Automation, Schema Automation, and Journal Automation.
• Integrate PFA with the Apple Ingest Control Plane.
• Implement required enhancements to existing streaming pipelines developed using Apache Flink and Java.
• Onboard topics and Flink jobs for journaling into Iceberg format.
• Monitor and address user requests and issues as they arise.
• Support Kafka infrastructure and Flink streaming pipelines to meet defined SLAs and SLOs.





