Synechron

Lead Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead Data Engineer; the contract length, pay rate, and location are unspecified. Key skills include Hadoop, Spark, Splunk, and Python, along with experience in data pipeline optimisation and CI/CD practices.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 10, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Deployment #Agile #Integration Testing #Splunk #SQL (Structured Query Language) #Leadership #Scripting #ETL (Extract, Transform, Load) #Scrum #Spark (Apache Spark) #NoSQL #Data Pipeline #Python #Data Engineering #Datasets #Hadoop #Kanban #BI (Business Intelligence)
Role description
Overview:
Synechron UK is looking for a highly experienced Lead Data Engineering Consultant to lead and develop cutting-edge data engineering platforms. The ideal candidate will bring a mix of technical expertise, leadership skills, and innovative thinking to support our data-driven initiatives.

Key Responsibilities:
• Lead the design, development, and optimisation of enterprise data platforms using tools such as Hadoop, Spark, and Splunk.
• Develop and maintain robust ETL/ELT data pipelines to handle raw, structured, semi-structured, and unstructured data (SQL and NoSQL).
• Integrate large, disparate datasets with modern tools and frameworks, ensuring smooth data flow across systems.
• Collaborate closely with BI and Analytics teams in fast-paced environments, supporting their data needs.
• Drive the creation of technical test plans, conduct unit and integration testing, and maintain high code quality within automated environments.
• Promote a Site Reliability Engineering (SRE) culture by addressing operational challenges and ensuring service resilience and sustainability.
• Support continuous integration, delivery, and deployment (CI/CD) pipelines, incorporating source control practices.

Qualifications:
• Extensive experience with Hadoop, Spark, and Splunk.
• Proficiency in Python scripting, with experience in handling various data formats.
• Strong background in building and optimising data pipelines and supporting BI/Analytics teams.
• Experience with CI/CD pipelines and source control.
• Knowledge of agile methodologies such as Scrum or Kanban.
• Ability to develop and implement automated test plans and ensure high-quality deliverables.