NLB Services

Data Engineer

⭐ Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Glasgow (hybrid) for 6 to 12 months, offering a competitive pay rate. Key requirements include 4+ years of experience with Python and data pipelines, 3+ years with Databricks and Snowflake, and a background in complex, high-volume data environments.
🌎 Country: United Kingdom
💱 Currency: £ GBP
💰 Day rate: Unknown
🗓️ Date: November 5, 2025
🕒 Duration: More than 6 months
🏝️ Location: Hybrid
📄 Contract: Fixed Term
🔒 Security: Unknown
📍 Location detailed: Glasgow, Scotland, United Kingdom
🧠 Skills detailed: #Apache Airflow #Snowflake #Databricks #Libraries #Scala #Big Data #BI (Business Intelligence) #NumPy #Pandas #REST (Representational State Transfer) #Airflow #Microsoft Power BI #PySpark #Python #Data Orchestration #Linux #REST API #GIT #Spark (Apache Spark) #Data Pipeline #Visualization #Database Administration #Hadoop #Data Engineering #Cloud #Data Processing
Role description
Data Engineer
Location: Glasgow (hybrid), 3 days a week
Contract role (6 to 12 months)

Skills / Qualifications:
· 4+ years of experience developing data pipelines and data warehousing solutions using Python and libraries such as Pandas, NumPy, and PySpark (illustrated briefly after this list)
· 3+ years of hands-on experience with cloud services, especially Databricks, for building and managing scalable data pipelines
· 3+ years of proficiency with Snowflake or similar cloud-based data warehousing solutions
· 3+ years of experience in data development and solutions in highly complex data environments with large data volumes
· Experience with code versioning tools (e.g., Git)
· Knowledge of Linux operating systems
· Familiarity with REST APIs and integration techniques
· Familiarity with data visualization tools and libraries (e.g., Power BI)
· Background in database administration or performance tuning
· Familiarity with data orchestration tools, such as Apache Airflow
· Previous exposure to big data technologies (e.g., Hadoop, Spark) for large-scale data processing
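
For candidates gauging the level expected, here is a minimal, hypothetical sketch of the kind of PySpark pipeline work the first bullet describes. It is not part of the listing, and every path, table, and column name in it is an illustrative assumption.

from pyspark.sql import SparkSession, functions as F

# A minimal extract-transform-load sketch, assuming a Databricks-style
# PySpark environment. All paths and column names are hypothetical.
spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

# Extract: read a raw CSV drop (hypothetical location).
raw = spark.read.option("header", True).csv("/data/raw/orders/")

# Transform: cast types, deduplicate on the key, derive a partition column.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write a date-partitioned table; on Databricks this would typically
# target Delta instead, or a Snowflake stage via the Snowflake connector.
clean.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/orders/")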