Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "X months" and a pay rate of "$X per hour." Key skills include Python, Databricks, Snowflake, ETL processes, and agile methodologies. Experience in data warehousing and cloud services is required.
🌎 - Country
United Kingdom
πŸ’± - Currency
Β£ GBP
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 29, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
Unknown
πŸ“„ - Contract type
Unknown
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Glasgow, Scotland, United Kingdom
🧠 - Skills detailed
#GIT #Code Reviews #ETL (Extract, Transform, Load) #Visualization #Microsoft Power BI #REST API #Database Administration #Snowflake #BI (Business Intelligence) #Programming #Documentation #Agile #Libraries #Automation #Data Extraction #Airflow #Hadoop #Monitoring #REST (Representational State Transfer) #Data Integration #Linux #Data Processing #Databricks #Spark (Apache Spark) #Python #Scala #Apache Airflow #Data Pipeline #Cloud #Data Orchestration #Big Data #Data Engineering
Role description
Role Responsibilities

You will be responsible for:
• Collaborating with cross-functional teams to understand data requirements, and designing efficient, scalable, and reliable ETL processes using Python and Databricks (a minimal sketch of this pattern follows these lists).
• Developing and deploying ETL jobs that extract data from various sources and transform it to meet business needs.
• Taking ownership of the end-to-end engineering lifecycle, including data extraction, cleansing, transformation, and loading, ensuring accuracy and consistency.
• Creating and managing data pipelines, ensuring proper error handling, monitoring, and performance optimization.
• Working in an agile environment, participating in sprint planning, daily stand-ups, and retrospectives.
• Conducting code reviews, providing constructive feedback, and enforcing coding standards to maintain high quality.
• Developing and maintaining tooling and automation scripts to streamline repetitive tasks.
• Implementing unit, integration, and other testing methodologies to ensure the reliability of ETL processes (see the test sketch below).
• Utilizing REST APIs and other integration techniques to connect various data sources.
• Maintaining documentation, including data flow diagrams, technical specifications, and processes.

You Have:
• Proficiency in Python programming, including experience writing efficient and maintainable code.
• Hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines.
• Proficiency in working with Snowflake or similar cloud-based data warehousing solutions.
• A solid understanding of ETL principles, data modelling, data warehousing concepts, and data integration best practices.
• Familiarity with agile methodologies and the ability to work collaboratively in a fast-paced, dynamic environment.
• Experience with code versioning tools (e.g., Git).
• Meticulous attention to detail and a passion for problem solving.
• Knowledge of Linux operating systems.
• Familiarity with REST APIs and integration techniques.

You might also have:
• Familiarity with data visualization tools and libraries (e.g., Power BI).
• A background in database administration or performance tuning.
• Familiarity with data orchestration tools such as Apache Airflow (see the DAG sketch below).
• Previous exposure to big data technologies (e.g., Hadoop, Spark) for large-scale data processing.
• Experience with ServiceNow integration.
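To make the core responsibility concrete, here is a minimal sketch of the extract-transform-load pattern described above, written for a Databricks notebook with the Spark-Snowflake connector available. The endpoint URL, credentials, table names, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal ETL sketch: extract from a REST API, transform with PySpark,
# load into Snowflake. All names and credentials below are illustrative.
import requests
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: pull records from a hypothetical REST endpoint.
resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
df = spark.createDataFrame(resp.json()["results"])

# Transform: deduplicate, type-cast, and drop invalid rows.
clean = (
    df.dropDuplicates(["order_id"])
      .withColumn("order_ts", F.to_timestamp("order_ts"))
      .filter(F.col("amount") > 0)
)

# Load: append to a Snowflake table via the Spark-Snowflake connector
# (the "snowflake" format alias is available on Databricks runtimes).
sf_options = {
    "sfURL": "myaccount.snowflakecomputing.com",  # placeholder account
    "sfUser": "etl_user",                         # placeholder user
    "sfPassword": "<from a secret scope>",        # never hard-code in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "ETL_WH",
}
(clean.write.format("snowflake")
      .options(**sf_options)
      .option("dbtable", "ORDERS_CLEAN")
      .mode("append")
      .save())
```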
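The testing bullet can look like this in practice: a minimal pytest sketch, assuming the cleansing rules are factored into a plain function (`clean_orders` is a hypothetical name) so they can be exercised against a local SparkSession.

```python
# Minimal unit-test sketch for a transform step; function and column
# names are illustrative assumptions, not part of the role spec.
import pytest
from pyspark.sql import SparkSession, functions as F

def clean_orders(df):
    # Same hypothetical cleansing rules as the pipeline sketch above.
    return df.dropDuplicates(["order_id"]).filter(F.col("amount") > 0)

@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session so the test runs without a cluster.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_clean_orders_removes_duplicates_and_bad_rows(spark):
    raw = spark.createDataFrame(
        [(1, 10.0), (1, 10.0), (2, -5.0)],
        ["order_id", "amount"],
    )
    result = clean_orders(raw)
    assert result.count() == 1
    assert result.first()["order_id"] == 1
```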
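For the orchestration point under "You might also have", here is a minimal Apache Airflow sketch; the DAG id, schedule, and callable are illustrative assumptions. In a real deployment the task would more likely trigger the Databricks job through the Databricks provider's operators rather than run inline.

```python
# Minimal Airflow DAG sketch scheduling the ETL job daily; names are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_etl():
    # Stand-in for triggering the Databricks ETL job; kept inline so the
    # sketch stays self-contained.
    print("extract -> transform -> load")

with DAG(
    dag_id="orders_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older releases use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="run_etl", python_callable=run_etl)
```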