

eTeam
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Glasgow, offering a 12-month contract at GBP 375 per day with hybrid working (2 days onsite, 3 days remote). Key skills include Python, Databricks, ETL processes, and Snowflake.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
£375
-
🗓️ - Date
December 16, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Glasgow, Scotland, United Kingdom
-
🧠 - Skills detailed
#Hadoop #Data Engineering #Microsoft Power BI #Programming #Linux #Monitoring #Data Processing #Cloud #Data Pipeline #Scala #Snowflake #Code Reviews #Databricks #GIT #REST (Representational State Transfer) #Data Integration #REST API #Agile #Data Extraction #Documentation #Python #Spark (Apache Spark) #Big Data #Automation #Apache Airflow #Data Orchestration #Libraries #Airflow #Visualization #Database Administration #BI (Business Intelligence) #ETL (Extract, Transform, Load)
Role description
Role: Data Engineer
Location: Glasgow
Working Mode: Hybrid (2 days onsite, 3 days remote per week)
Contract Type: Inside IR35
Duration: 12 months
Rate: GBP 375 per day (Inside IR35)
Role Responsibilities
You will be responsible for:
• Collaborating with cross-functional teams to understand data requirements and designing efficient, scalable, and reliable ETL processes using Python and Databricks (a minimal sketch follows this list).
• Developing and deploying ETL jobs that extract data from various sources and transform it to meet business needs.
• Taking ownership of the end-to-end engineering life cycle, including data extraction, cleansing, transformation, and loading, ensuring accuracy and consistency.
• Creating and managing data pipelines, ensuring proper error handling, monitoring, and performance optimization.
• Working in an agile environment, participating in sprint planning, daily stand-ups, and retrospectives.
• Conducting code reviews, providing constructive feedback, and enforcing coding standards to maintain high quality.
• Developing and maintaining tooling and automation scripts to streamline repetitive tasks.
• Implementing unit, integration, and other testing methodologies to ensure the reliability of the ETL processes.
• Utilizing REST APIs and other integration techniques to connect various data sources.
• Maintaining documentation, including data flow diagrams, technical specifications, and processes.
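For illustration only, and not part of the role description: a minimal PySpark sketch of the kind of Python/Databricks ETL job outlined above, with basic error handling and logging. All paths, table names, and columns are hypothetical, and a Databricks/Delta environment is assumed.

```python
# Minimal PySpark ETL sketch (hypothetical paths, tables, and columns).
import logging

from pyspark.sql import SparkSession, functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("etl_job")

# On Databricks a SparkSession already exists as `spark`;
# getOrCreate() simply reuses it when present.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()


def run_etl(source_path: str, target_table: str) -> None:
    """Extract raw JSON, apply light cleansing, and load into a Delta table."""
    try:
        # Extract: read raw landing-zone files.
        raw = spark.read.format("json").load(source_path)

        # Transform: drop incomplete rows and normalise types.
        cleaned = (
            raw.filter(F.col("order_id").isNotNull())
               .withColumn("order_date", F.to_date("order_date"))
               .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
        )

        # Load: append into the curated layer (assumes Delta is available).
        cleaned.write.format("delta").mode("append").saveAsTable(target_table)
        log.info("Loaded %d rows into %s", cleaned.count(), target_table)
    except Exception:
        log.exception("ETL run failed for %s", source_path)
        raise


if __name__ == "__main__":
    run_etl("/mnt/landing/orders/", "analytics.orders_curated")
```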
You have:
• Proficiency in Python programming, including experience in writing efficient and maintainable code.
• Hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines.
• Proficiency in working with Snowflake or similar cloud-based data warehousing solutions (see the sketch after this list).
• Solid understanding of ETL principles, data modelling, data warehousing concepts, and data integration best practices.
• Familiarity with agile methodologies and the ability to work collaboratively in a fast-paced, dynamic environment.
• Experience with code versioning tools (e.g. Git).
• Meticulous attention to detail and a passion for problem solving.
• Knowledge of Linux operating systems.
• Familiarity with REST APIs and integration techniques.
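As an illustrative sketch only: pulling records from a hypothetical REST endpoint with requests and loading them into a Snowflake staging table via the snowflake-connector-python library. The endpoint, credentials, and table names are placeholders, not details from this role.

```python
# Sketch: REST extraction into a Snowflake staging table (placeholder names).
import os

import requests
import snowflake.connector


def fetch_records(url: str) -> list[dict]:
    """GET a JSON array of records from the source API."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()


def load_to_snowflake(records: list[dict]) -> None:
    """Insert records into a staging table using parameterised SQL."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ETL_WH",
        database="ANALYTICS",
        schema="STAGING",
    )
    try:
        conn.cursor().executemany(
            "INSERT INTO customer_stage (id, name, email) VALUES (%s, %s, %s)",
            [(r["id"], r["name"], r["email"]) for r in records],
        )
    finally:
        conn.close()


if __name__ == "__main__":
    load_to_snowflake(fetch_records("https://api.example.com/customers"))
```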
You might also have:
• Familiarity with data visualization tools and libraries (e.g. Power BI).
• Background in database administration or performance tuning.
• Familiarity with data orchestration tools such as Apache Airflow (see the sketch below).
• Previous exposure to big data technologies (e.g. Hadoop, Spark) for large-scale data processing.
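A minimal Apache Airflow sketch, assuming Airflow 2.4+ and the TaskFlow API, showing how a daily extract-transform-load run might be orchestrated. The task bodies, schedule, and DAG name are illustrative placeholders.

```python
# Minimal Airflow DAG sketch for a daily ETL run (illustrative only).
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False, tags=["etl"])
def daily_orders_etl():
    @task
    def extract() -> list[dict]:
        # In practice this would call a source API or read landing files.
        return [{"order_id": 1, "amount": 10.0}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Drop incomplete rows; real logic would live in a shared library.
        return [r for r in rows if r.get("order_id") is not None]

    @task
    def load(rows: list[dict]) -> None:
        # Placeholder for a warehouse load (e.g. Snowflake or Delta).
        print(f"Loading {len(rows)} rows")

    load(transform(extract()))


daily_orders_etl()
```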






