Infoplus Technologies UK Limited

Senior Data Engineer

โญ - Featured Role | Apply direct with Data Freelance Hub
This is a 6-month contract (Inside IR35) for a Senior Data Engineer based in Glasgow, UK (Hybrid, 3 days onsite). Key skills include Python, Databricks, ETL processes, Snowflake, and agile methodologies.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 29, 2025
🕒 - Duration
6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Glasgow, Scotland, United Kingdom
-
🧠 - Skills detailed
#Data Pipeline #Monitoring #Data Processing #Data Engineering #Code Reviews #Spark (Apache Spark) #Cloud #Automation #Snowflake #Data Orchestration #Data Integration #Database Administration #Apache Airflow #Microsoft Power BI #Python #Big Data #Scala #GIT #REST API #Linux #BI (Business Intelligence) #Programming #REST (Representational State Transfer) #Data Extraction #ETL (Extract, Transform, Load) #Agile #Databricks #Airflow #Visualization #Hadoop #Documentation #Libraries
Role description
· Job Title: Senior Data Engineer
· Location: Glasgow, UK (Hybrid, 3 days onsite)
· Duration: 6 months (Inside IR35)

Job Description:

Role Responsibilities

You will be responsible for:
• Collaborating with cross-functional teams to understand data requirements, and designing efficient, scalable, and reliable ETL processes using Python and Databricks.
• Developing and deploying ETL jobs that extract data from various sources and transform it to meet business needs.
• Taking ownership of the end-to-end engineering lifecycle, including data extraction, cleansing, transformation, and loading, ensuring accuracy and consistency.
• Creating and managing data pipelines, ensuring proper error handling, monitoring, and performance optimization.
• Working in an agile environment, participating in sprint planning, daily stand-ups, and retrospectives.
• Conducting code reviews, providing constructive feedback, and enforcing coding standards to maintain high quality.
• Developing and maintaining tooling and automation scripts to streamline repetitive tasks.
• Implementing unit, integration, and other testing methodologies to ensure the reliability of the ETL processes.
• Utilizing REST APIs and other integration techniques to connect various data sources.
• Maintaining documentation, including data flow diagrams, technical specifications, and processes.

You Have:
• Proficiency in Python programming, including experience in writing efficient and maintainable code.
• Hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines.
• Proficiency in working with Snowflake or similar cloud-based data warehousing solutions.
• A solid understanding of ETL principles, data modelling, data warehousing concepts, and data integration best practices.
• Familiarity with agile methodologies and the ability to work collaboratively in a fast-paced, dynamic environment.
• Experience with code versioning tools (e.g., Git).
• Meticulous attention to detail and a passion for problem solving.
• Knowledge of Linux operating systems.
• Familiarity with REST APIs and integration techniques.

You might also have:
• Familiarity with data visualization tools and libraries (e.g., Power BI).
• A background in database administration or performance tuning.
• Familiarity with data orchestration tools such as Apache Airflow.
• Previous exposure to big data technologies (e.g., Hadoop, Spark) for large-scale data processing.
• Experience with ServiceNow integration.

The sketches below illustrate the kind of day-to-day work these bullets describe.
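To make the first few responsibility bullets concrete, here is a minimal sketch of a Python/Databricks ETL job that extracts raw files, cleanses them with PySpark, and loads the result into Snowflake via the Spark-Snowflake connector. Every name in it (the storage path, the ORDERS_CLEAN table, the connection options) is a hypothetical placeholder, not something taken from the listing.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV landed in cloud storage (hypothetical path).
raw = (spark.read
       .option("header", "true")
       .csv("/mnt/landing/orders/*.csv"))

# Transform: deduplicate, drop bad keys, and standardise types
# before loading downstream.
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("order_id").isNotNull())
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("decimal(18,2)")))

# Load: write to Snowflake through the Spark-Snowflake connector
# available on Databricks. All options below are placeholders.
sf_options = {
    "sfUrl": "myaccount.snowflakecomputing.com",
    "sfUser": "etl_user",
    "sfPassword": "***",  # in practice, pull from a Databricks secret scope
    "sfDatabase": "ANALYTICS",
    "sfSchema": "SALES",
    "sfWarehouse": "ETL_WH",
}
(clean.write
 .format("snowflake")
 .options(**sf_options)
 .option("dbtable", "ORDERS_CLEAN")
 .mode("append")
 .save())
```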
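The testing bullet ("unit, integration, and other testing methodologies") usually comes down to keeping transformations as pure DataFrame-in/DataFrame-out functions so they can be exercised in isolation. A hedged pytest sketch, reusing the hypothetical cleansing logic above:

```python
import pytest
from pyspark.sql import SparkSession, functions as F

def clean_orders(df):
    """Deduplicate raw order rows, drop null keys, and cast amounts."""
    return (df.dropDuplicates(["order_id"])
              .filter(F.col("order_id").isNotNull())
              .withColumn("amount", F.col("amount").cast("decimal(18,2)")))

@pytest.fixture(scope="session")
def spark():
    # A small local session is enough for transformation tests.
    return (SparkSession.builder
            .master("local[1]")
            .appName("etl_tests")
            .getOrCreate())

def test_clean_orders_drops_duplicates_and_null_keys(spark):
    raw = spark.createDataFrame(
        [("1", "10.00"), ("1", "10.00"), (None, "5.00")],
        ["order_id", "amount"])
    out = clean_orders(raw)
    assert out.count() == 1              # duplicate collapsed, null row gone
    assert out.first()["order_id"] == "1"
```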
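For the REST API integration bullet, a sketch of defensive data extraction from a paginated JSON API. The endpoint, pagination scheme, and bearer-token auth are assumptions for illustration; a real source system will differ.

```python
import requests

def fetch_all(base_url, token, page_size=100):
    """Yield records from a paginated JSON API, failing fast on HTTP errors."""
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {token}"
    page = 1
    while True:
        resp = session.get(f"{base_url}/records",
                           params={"page": page, "per_page": page_size},
                           timeout=30)
        resp.raise_for_status()  # surface 4xx/5xx instead of loading bad data
        batch = resp.json().get("data", [])
        if not batch:
            break
        yield from batch
        page += 1
```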
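And for the Apache Airflow item under "You might also have", a minimal Airflow 2.x DAG showing how extract/transform/load steps like the ones above could be scheduled daily. The dag_id and task bodies are placeholders; on Databricks-centric teams these callables would often trigger Databricks jobs instead.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    pass  # placeholder: pull raw data from source systems

def transform():
    pass  # placeholder: run cleansing/standardisation logic

def load():
    pass  # placeholder: write curated data to the warehouse

with DAG(
    dag_id="orders_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",   # "schedule_interval" on Airflow versions before 2.4
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract, then transform, then load.
    t_extract >> t_transform >> t_load
```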