

Ubique Systems
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include PySpark, SQL, Snowflake, and Python. Experience in ETL processes and agile methodologies is required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 29, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Glasgow, Scotland, United Kingdom
-
🧠 - Skills detailed
#Data Pipeline #Monitoring #Agile #Databricks #Python #Snowflake #SQL (Structured Query Language) #PySpark #Scala #Data Engineering #Documentation #Code Reviews #REST (Representational State Transfer) #Spark (Apache Spark) #Automation #Data Extraction #ETL (Extract, Transform, Load)
Role description
Skills:
PySpark
SQL
Snowflake
Python
Role Responsibilities
You will be responsible for:
• Collaborating with cross-functional teams to understand data requirements and design efficient, scalable, and reliable ETL processes using Python and Databricks (a minimal PySpark-to-Snowflake sketch follows this list).
• Developing and deploying ETL jobs that extract data from various sources and transform it to meet business needs.
• Taking ownership of the end-to-end engineering lifecycle, including data extraction, cleansing, transformation, and loading, ensuring accuracy and consistency.
• Creating and managing data pipelines, ensuring proper error handling, monitoring, and performance optimization.
• Working in an agile environment, participating in sprint planning, daily stand-ups, and retrospectives.
• Conducting code reviews, providing constructive feedback, and enforcing coding standards to maintain high quality.
• Developing and maintaining tooling and automation scripts to streamline repetitive tasks.
• Implementing unit, integration, and other testing methodologies to ensure the reliability of ETL processes (see the test sketch after this list).
• Utilizing REST APIs and other integration techniques to connect various data sources (see the API extraction sketch after this list).
• Maintaining documentation, including data flow diagrams, technical specifications, and processes.
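For illustration, here is a minimal sketch of the kind of PySpark ETL job described above: read raw files, cleanse them, and load the result into Snowflake via the Spark-Snowflake connector. All paths, table names, and connection details are hypothetical placeholders, not details from this posting.

```python
# Hedged sketch only: paths, table names, and credentials are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw files from a hypothetical landing zone.
raw = spark.read.option("header", True).csv("/landing/orders/")

# Transform: cleanse and standardize to meet business needs.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
)

# Load: write to Snowflake with the Spark-Snowflake connector.
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",  # placeholder account
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "SALES",
    "sfWarehouse": "ETL_WH",
}
(clean.write
      .format("snowflake")  # short name registered by the connector
      .options(**sf_options)
      .option("dbtable", "ORDERS_CLEAN")
      .mode("append")
      .save())
```

On Databricks the Snowflake connector is typically available out of the box; on a plain Spark cluster it has to be added as a package.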
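In the same spirit, a sketch of pulling records from a paginated REST API into a Spark DataFrame; the endpoint, pagination scheme, and field names are illustrative assumptions only.

```python
# Hedged sketch only: the endpoint and pagination scheme are assumptions.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api_extract").getOrCreate()

def fetch_all(url, page_size=100):
    """Page through a JSON API until an empty page is returned."""
    records, page = [], 1
    while True:
        resp = requests.get(
            url, params={"page": page, "per_page": page_size}, timeout=30
        )
        resp.raise_for_status()  # fail fast on HTTP errors
        batch = resp.json()
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

rows = fetch_all("https://api.example.com/v1/customers")  # placeholder URL
df = spark.createDataFrame(rows)  # schema inferred from the JSON records
df.show(5)
```

A production job would usually land the raw JSON to storage first and add retry and backoff logic, but the shape of the integration is the same.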
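Finally, a minimal pytest-style unit test for a transformation step, assuming the cleansing logic is factored into a plain function; the function and test data here are hypothetical.

```python
# Hedged sketch only: cleanse_orders is a hypothetical function under test.
import pytest
from pyspark.sql import SparkSession, functions as F

def cleanse_orders(df):
    """Drop duplicate orders and rows with a null business key."""
    return df.dropDuplicates(["order_id"]).filter(F.col("order_id").isNotNull())

@pytest.fixture(scope="module")
def spark():
    # Small local session so the test suite runs without a cluster.
    return SparkSession.builder.master("local[1]").appName("etl-tests").getOrCreate()

def test_cleanse_orders_removes_duplicates_and_nulls(spark):
    df = spark.createDataFrame(
        [("1", 10.0), ("1", 10.0), (None, 5.0)],
        ["order_id", "amount"],
    )
    result = cleanse_orders(df)
    assert result.count() == 1
    assert result.first()["order_id"] == "1"
```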