

Talent Complete
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a B2B contract in Glasgow for 6 months, offering a competitive pay rate. Key skills include Python, Databricks, Snowflake, and experience with large datasets. A Bachelor's degree and 4+ years in data pipelines are required.
Country: United Kingdom
Currency: £ GBP
Day rate: Unknown
Date: November 7, 2025
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Glasgow, Scotland, United Kingdom
Skills detailed: #Data Modeling #GIT #Spark (Apache Spark) #Version Control #Database Performance #"ETL (Extract #Transform #Load)" #Linux #Code Reviews #NumPy #Datasets #Microsoft Power BI #Agile #PySpark #Pandas #Data Integration #Big Data #BI (Business Intelligence) #Data Engineering #Python #REST (Representational State Transfer) #Automation #Hadoop #REST API #Scala #Data Quality #Data Pipeline #Snowflake #Databricks #Cloud #Data Framework #Airflow #Computer Science
Role description
Join our hybrid Data Engineering team in Glasgow (B2B contract) and help us build modern, high-performance data pipelines with Python, Databricks, and Snowflake. You'll collaborate across teams, drive technical excellence, and continuously enhance our data processes within an Agile setup.
Key Responsibilities:
• Design, develop, and deploy ETL and data pipeline solutions using Python, Databricks, and Snowflake.
• Collaborate across teams to define data requirements and ensure alignment with business goals.
• Manage data quality, integrity, and performance optimization across large datasets.
• Implement testing and automation frameworks, ensuring reliability and scalability.
• Utilize REST APIs for data integrations and Airflow for orchestration (see the pipeline sketch after this list).
• Participate in Agile ceremonies, code reviews, and continuous improvement initiatives.
• Document data flows, technical designs, and transformation logic.
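For illustration only, here is a minimal sketch of the kind of pipeline task described above: a daily Airflow DAG that pulls records from a REST endpoint, applies a basic data-quality step with pandas, and persists the result. It assumes Airflow 2.4+, requests, pandas, and pyarrow are installed; the endpoint URL, file paths, column names, and schedule are hypothetical placeholders rather than details of this role, and in practice the load step would land in Snowflake or Databricks tables.

```python
# Minimal illustrative DAG; endpoint, paths, and columns are placeholders.
from datetime import datetime

import pandas as pd
import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

RAW_PATH = "/tmp/orders_raw.json"            # hypothetical staging file
CURATED_PATH = "/tmp/orders_curated.parquet"  # hypothetical curated output


def extract() -> None:
    # Pull records from a (hypothetical) REST endpoint and stage them as JSON.
    response = requests.get("https://api.example.com/v1/orders", timeout=30)
    response.raise_for_status()
    pd.DataFrame(response.json()).to_json(RAW_PATH, orient="records")


def transform_and_load() -> None:
    # Basic data-quality step: drop duplicates and null keys, then persist.
    df = pd.read_json(RAW_PATH, orient="records")
    df = df.drop_duplicates(subset="order_id").dropna(subset=["order_id"])
    # In a production pipeline this would write to Snowflake or Databricks.
    df.to_parquet(CURATED_PATH, index=False)


with DAG(
    dag_id="orders_pipeline_example",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(
        task_id="transform_and_load", python_callable=transform_and_load
    )
    extract_task >> load_task
```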
Required Skills & Experience:
• Bachelor's degree in Computer Science, Engineering, or related field (or equivalent).
• 4+ years developing data pipelines and data warehousing solutions using Python (Pandas, PySpark, NumPy); see the PySpark sketch after this list.
• 3+ years working with Databricks and Snowflake (or similar cloud data platforms).
• Experience handling large, complex datasets and advanced ETL/data modeling.
• Strong understanding of data integration, version control (Git), and Agile practices.
• Familiarity with Linux, REST APIs, Power BI, and Airflow.
• Experience with database performance tuning and big data frameworks (Hadoop, Spark).
• Excellent communication, analytical, and problem-solving skills; able to collaborate effectively in cross-functional teams.
• Self-driven, organized, and adaptable to changing priorities.
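As a hedged illustration of the PySpark and large-dataset experience listed above, the sketch below aggregates a Parquet dataset into daily metrics; the paths and column names (order_id, status, amount_gbp) are hypothetical and not taken from this posting.

```python
# Minimal PySpark aggregation sketch; dataset paths and columns are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_order_metrics").getOrCreate()

# Read a (hypothetical) curated orders dataset.
orders = spark.read.parquet("/data/curated/orders")

# Aggregate completed orders into daily counts and revenue.
daily_metrics = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy("order_date")
    .agg(
        F.countDistinct("order_id").alias("orders"),
        F.sum("amount_gbp").alias("revenue_gbp"),
    )
)

daily_metrics.write.mode("overwrite").parquet("/data/marts/daily_order_metrics")
```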
Nice to Have:
• Background in financial services and understanding of regulatory frameworks.






