

Synechron
Big Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This is a 6-month contract position for a Big Data Engineer, with a pay rate of "£X/hour". Candidates must have extensive experience with Hadoop, Spark, and Splunk, strong Python scripting skills, and expertise in ETL/ELT pipelines.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 8, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#Programming #Spark (Apache Spark) #Hadoop #ETL (Extract, Transform, Load) #Python #Leadership #Data Framework #Splunk #SQL (Structured Query Language) #Data Engineering #BI (Business Intelligence) #Big Data #Data Pipeline #Scripting #Automation #Cybersecurity #Data Privacy #Scrum #Security #Agile #Version Control #Compliance #Scala #Datasets #Kanban #NoSQL
Role description
About the Role:
Synechron UK is seeking an experienced Lead Data Engineer to spearhead the development and evolution of our data engineering platforms. This leadership role requires a hands-on professional proficient in designing, building, and optimising enterprise-grade data solutions, with a focus on innovation, resilience, and regulatory compliance.
Key Responsibilities:
• Lead and develop robust data engineering platforms leveraging technologies such as Hadoop, Spark, and Splunk.
• Design, implement, and maintain scalable ETL/ELT data pipelines for diverse data types, including raw, structured, semi-structured, and unstructured data (SQL and NoSQL).
• Integrate large and disparate datasets using modern tools and frameworks to support analytical and operational needs.
• Collaborate effectively with BI and Analytics teams in dynamic environments, providing technical guidance and support.
• Develop, review, and maintain automated test plans, including unit and integration tests, to ensure high-quality, reliable code.
• Drive SRE principles within data engineering practices, ensuring service resilience, sustainability, and adherence to recovery time objectives.
• Support source control practices and implement CI/CD pipelines for continuous delivery of data solutions.
• Stay informed on current industry trends, regulatory requirements (cybersecurity, data privacy, data residency), and incorporate best practices into engineering processes.
• Represent Synechron in industry groups and vendor interactions to influence and adopt emerging technologies and standards.
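The pipeline responsibilities above come down to an extract → transform → load flow over mixed structured and semi-structured data. As a minimal illustrative sketch in plain Python (all names and the inline CSV source are hypothetical; at scale this work would run on Spark/Hadoop rather than in-memory):

```python
import csv
import io
import json

def extract(raw_csv: str) -> list[dict]:
    """Extract: parse semi-structured CSV text into records."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(records: list[dict]) -> list[dict]:
    """Transform: normalise types and drop incomplete rows."""
    out = []
    for r in records:
        if not r.get("amount"):
            continue  # skip rows missing the required field
        out.append({"id": r["id"], "amount": float(r["amount"])})
    return out

def load(records: list[dict]) -> str:
    """Load: serialise to JSON lines, a stand-in for a real data sink."""
    return "\n".join(json.dumps(r) for r in records)

raw = "id,amount\n1,10.5\n2,\n3,7.0\n"
print(load(transform(extract(raw))))
```

Keeping each stage a pure function, as here, is what makes pipelines like these straightforward to unit-test and compose.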
Qualifications and Skills:
• Extensive experience with big data frameworks such as Hadoop, Spark, and Splunk.
• Strong scripting capabilities in Python, with experience in object-oriented and functional programming paradigms.
• Proven expertise in handling varied data formats and integrating large datasets across multiple platforms.
• Deep understanding of building, optimising, and managing complex ETL/ELT pipelines.
• Familiarity with version control systems and CI/CD tools.
• Experience working closely with BI and analytics teams, supporting data-driven decision-making.
• Excellent problem-solving skills with a data-driven mindset.
• Knowledge of agile methodologies (Scrum, Kanban).
• Ability to contribute in collaborative, fast-paced environments, including pair programming and team standups.
• Strong commitment to quality, test-driven development, and automation.
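The test-driven development expectation above can be illustrated with a minimal self-contained unit test (the dedupe_records helper is hypothetical, shown only to demonstrate the unit-test shape expected for pipeline transforms):

```python
def dedupe_records(records):
    """Hypothetical transform: drop duplicate records by 'id', keeping the first."""
    seen, out = set(), []
    for r in records:
        if r["id"] not in seen:
            seen.add(r["id"])
            out.append(r)
    return out

def test_keeps_first_occurrence():
    # run with: pytest <this file>
    data = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}, {"id": 2, "v": "c"}]
    assert dedupe_records(data) == [{"id": 1, "v": "a"}, {"id": 2, "v": "c"}]
```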
