

FDM Group
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer on a 6-12 month contract in Glasgow, requiring expertise in Python, Databricks, and Snowflake. Key skills include ETL processes, data pipeline management, and agile methodologies. Experience in the finance sector is preferred.
🌎 - Country
United Kingdom
💱 - Currency
Unknown
💰 - Day rate
Unknown
🗓️ - Date
December 10, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Fixed Term
🔒 - Security
Unknown
📍 - Location detailed
Glasgow
🧠 - Skills detailed
#Automation #Documentation #Agile #Python #BI (Business Intelligence) #Programming #Spark (Apache Spark) #Data Extraction #Data Orchestration #Data Integration #Code Reviews #Data Engineering #Cloud #Airflow #Apache Airflow #Databricks #GIT #Scala #Snowflake #REST API #Big Data #Libraries #Linux #REST (Representational State Transfer) #Visualization #Data Processing #Microsoft Power BI #Monitoring #Data Pipeline #Database Administration #Hadoop #ETL (Extract, Transform, Load)
Role description
About The Role
FDM is a global business and technology consultancy seeking a Senior Data Engineer to work for our client within the finance sector. This is initially a 6-12 month contract with the potential to extend and will be a hybrid role based in Glasgow.
Our client is seeking a Senior Data Engineer to play a key role in designing, building, and optimizing scalable data solutions. In this role, you will collaborate with cross-functional teams to understand data requirements and design efficient ETL processes using Python and Databricks, taking full ownership of the engineering lifecycle, from data extraction and cleansing to transformation and loading. You will develop and deploy ETL jobs across diverse data sources, create and manage robust pipelines with proper monitoring and error handling, and contribute to agile ceremonies such as sprint planning and daily stand-ups.
Responsibilities:
Collaborating with cross-functional teams to understand data requirements and design efficient, scalable, and reliable ETL processes using Python and Databricks.
Developing and deploying ETL jobs that extract data from various sources and transform it to meet business needs.
Taking ownership of the end-to-end engineering lifecycle, including data extraction, cleansing, transformation, and loading, ensuring accuracy and consistency.
Creating and managing data pipelines, ensuring proper error handling, monitoring, and performance optimization (see the sketch after this list).
Working in an agile environment, participating in sprint planning, daily stand-ups, and retrospectives.
Conducting code reviews, providing constructive feedback, and enforcing coding standards to maintain high quality.
Developing and maintaining tooling and automation scripts to streamline repetitive tasks.
Implementing unit, integration, and other testing methodologies to ensure the reliability of ETL processes.
Utilizing REST APIs and other integration techniques to connect various data sources.
Maintaining documentation, including data flow diagrams, technical specifications, and processes.
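By way of illustration, here is a minimal sketch of the kind of ETL job described above, assuming a hypothetical REST source; the endpoint, field names, and sink are placeholders rather than details from this posting, and the third-party requests library is assumed to be available:

import logging

import requests  # third-party HTTP client; assumed installed

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl")

API_URL = "https://api.example.com/v1/trades"  # hypothetical source endpoint


def extract(url: str) -> list[dict]:
    """Pull raw records from a REST source; fail loudly on HTTP errors."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.json()


def transform(records: list[dict]) -> list[dict]:
    """Cleanse and reshape records, dropping rows without a business key."""
    return [
        {"trade_id": r["id"], "amount": float(r["amount"])}
        for r in records
        if r.get("id") is not None
    ]


def load(rows: list[dict]) -> None:
    """Stub sink: a real job would write to Databricks or Snowflake."""
    log.info("loaded %d rows", len(rows))


def run() -> None:
    """Run the job with basic error handling and a monitoring hook."""
    try:
        load(transform(extract(API_URL)))
    except Exception:
        log.exception("ETL run failed")  # surfaces the failure to monitoring
        raise


if __name__ == "__main__":
    run()

A production job on Databricks would replace the stub load step with writes to managed tables or Snowflake, and route the logging output into the team's monitoring stack.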
About You
Proficiency in Python programming, including experience in writing efficient and maintainable code.
Hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines.
Proficiency in working with Snowflake or similar cloud-based data warehousing solutions (a brief loading sketch follows this list).
Solid understanding of ETL principles, data modelling, data warehousing concepts, and data integration best practices.
Familiarity with agile methodologies and the ability to work collaboratively in a fast-paced, dynamic environment.
Experience with code versioning tools (e.g., Git).
Meticulous attention to detail and a passion for problem solving.
Knowledge of Linux operating systems.
Familiarity with REST APIs and integration techniques.
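For the Snowflake experience above, a minimal loading sketch using the official snowflake-connector-python package; the credentials, warehouse, database, schema, and TRADES table are hypothetical placeholders:

import os

import snowflake.connector  # official Snowflake connector; assumed installed


def load_to_snowflake(rows: list[tuple[str, float]]) -> None:
    """Insert (trade_id, amount) tuples into a hypothetical TRADES table."""
    conn = snowflake.connector.connect(
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        warehouse="ANALYTICS_WH",  # hypothetical warehouse
        database="FINANCE",        # hypothetical database
        schema="STAGING",          # hypothetical schema
    )
    try:
        # The connector's default paramstyle is pyformat, hence the %s markers.
        conn.cursor().executemany(
            "INSERT INTO TRADES (TRADE_ID, AMOUNT) VALUES (%s, %s)",
            rows,
        )
    finally:
        conn.close()

The connector autocommits by default; a real pipeline would typically stage files and bulk-load with COPY INTO rather than inserting row by row.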
Desirable:
Familiarity with data visualization tools and libraries (e.g., Power BI).
Background in database administration or performance tuning.
Familiarity with data orchestration tools, such as Apache Airflow (a minimal DAG sketch follows this list).
Previous exposure to big data technologies (e.g., Hadoop, Spark) for large-scale data processing.
Experience with ServiceNow integration.
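And for orchestration, a minimal Apache Airflow DAG sketch using the TaskFlow API from Airflow 2.x; the dag_id, schedule, and task bodies are hypothetical stubs mirroring the earlier extract/transform/load shape:

from datetime import datetime

from airflow.decorators import dag, task  # TaskFlow API, Airflow 2.x


@dag(
    dag_id="finance_etl",  # hypothetical DAG name
    schedule="@daily",     # Airflow 2.4+ spelling of schedule_interval
    start_date=datetime(2025, 1, 1),
    catchup=False,
)
def finance_etl():
    @task
    def extract() -> list[dict]:
        return [{"id": "T1", "amount": "100.0"}]  # stub source records

    @task
    def transform(records: list[dict]) -> list[dict]:
        return [
            {"trade_id": r["id"], "amount": float(r["amount"])}
            for r in records
        ]

    @task
    def load(rows: list[dict]) -> None:
        print(f"loaded {len(rows)} rows")  # stub sink

    # Chaining the decorated tasks defines the extract >> transform >> load
    # dependency graph.
    load(transform(extract()))


finance_etl()  # instantiating the decorated function registers the DAG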
About Us
We are a business and technology consultancy and one of the UK's leading employers, recruiting the brightest talent to become the innovators of tomorrow. We have centres across Europe, North America and Asia-Pacific, and a global workforce of over 3,500 Consultants. FDM has grown rapidly over the years, firmly establishing itself as an award-winning employer, and is listed on the FTSE4Good Index.
Diversity and Inclusion
FDM Group is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, sexual orientation, national origin, age, disability, veteran status or any other status protected by federal, provincial or local laws.
Why join us
Career coaching, mentoring and access to upskilling throughout your entire FDM career
Assignments with global companies and opportunities to work abroad
Opportunity to re-skill and up-skill into new areas, develop non-linear career paths and build a skillset within your field
Annual leave and workplace pension