Data Engineer (Python / PySpark / Data Pipelines / Big Data)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Python/PySpark/Data Pipelines/Big Data) in Glasgow, hybrid, with a contract length of "unknown" and a pay rate of £360 per day. Key skills include Python, PySpark, SQL, and Core Java.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
360
-
πŸ—“οΈ - Date discovered
September 3, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Glasgow, Scotland, United Kingdom
-
🧠 - Skills detailed
#Data Architecture #BitBucket #Data Lake #Security #AWS (Amazon Web Services) #Java #Data Warehouse #PySpark #Spark (Apache Spark) #SQL (Structured Query Language) #GIT #Version Control #GitLab #Data Engineering #Data Pipeline #Python #Big Data
Role description
We are hiring a Data Engineer (Python / PySpark / Data Pipelines / Big Data).
Location: Glasgow - Hybrid
• Strong experience with Python, PySpark, and SQL.
• Build and maintain robust data architectures and pipelines to ensure durable, complete, and consistent data transfer and processing.
• Proficiency in Core Java, including Collections, Concurrency, and Memory Management.
• Design and implement data warehouses and data lakes that can handle large volumes of data and meet all security requirements.
• A solid background in performance tuning, profiling, and resolving production issues in distributed systems.
• Experience with version control systems such as Git, GitLab, or Bitbucket; AWS experience is a plus.
Key skills: Data architectures / Data pipelines / Data warehouses / Data lakes / Python / PySpark