Talent Complete

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a B2B contract in Glasgow for 6 months, offering a competitive pay rate. Key skills include Python, Databricks, and Snowflake, plus experience with large datasets. A Bachelor's degree and 4+ years of experience building data pipelines are required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
November 7, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Glasgow, Scotland, United Kingdom
-
🧠 - Skills detailed
#Data Modeling #GIT #Spark (Apache Spark) #Version Control #Database Performance #ETL (Extract, Transform, Load) #Linux #Code Reviews #NumPy #Datasets #Microsoft Power BI #Agile #PySpark #Pandas #Data Integration #Big Data #BI (Business Intelligence) #Data Engineering #Python #REST (Representational State Transfer) #Automation #Hadoop #REST API #Scala #Data Quality #Data Pipeline #Snowflake #Databricks #Cloud #Data Framework #Airflow #Computer Science
Role description
Join our hybrid Data Engineering team in Glasgow (B2B contract) and help us build modern, high-performance data pipelines with Python, Databricks, and Snowflake. You'll collaborate across teams, drive technical excellence, and continuously enhance our data processes within an Agile setup.

Key Responsibilities:
• Design, develop, and deploy ETL and data pipeline solutions using Python, Databricks, and Snowflake.
• Collaborate across teams to define data requirements and ensure alignment with business goals.
• Manage data quality, integrity, and performance optimization across large datasets.
• Implement testing and automation frameworks, ensuring reliability and scalability.
• Utilize REST APIs for data integrations and Airflow for orchestration.
• Participate in Agile ceremonies, code reviews, and continuous improvement initiatives.
• Document data flows, technical designs, and transformation logic.

Required Skills & Experience:
• Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent).
• 4+ years developing data pipelines and data warehousing solutions using Python (Pandas, PySpark, NumPy).
• 3+ years working with Databricks and Snowflake (or similar cloud data platforms).
• Experience handling large, complex datasets and advanced ETL/data modeling.
• Strong understanding of data integration, version control (Git), and Agile practices.
• Familiarity with Linux, REST APIs, Power BI, and Airflow.
• Experience with database performance tuning and big data frameworks (Hadoop, Spark).
• Excellent communication, analytical, and problem-solving skills; able to collaborate effectively in cross-functional teams.
• Self-driven, organized, and adaptable to changing priorities.

Nice to Have:
• Background in financial services and understanding of regulatory frameworks.
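
For illustration only, below is a minimal sketch of the kind of ETL step this role describes, assuming a Databricks/PySpark environment. The file path, column names, and target table are hypothetical placeholders, not details from this listing.

```python
# Minimal illustrative ETL step (hypothetical data and names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV data (path is a placeholder).
raw = spark.read.option("header", True).csv("/mnt/raw/orders.csv")

# Transform: de-duplicate, type the columns, and drop invalid rows.
curated = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: write a curated Delta table (Databricks-style target, assumed).
curated.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_curated")
```

In practice a step like this would typically be scheduled from an Airflow DAG and validated with automated data-quality checks, in line with the responsibilities above.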