

Data Engineer (Python / PySpark / Data Pipelines / Big Data)
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Python/PySpark/Data Pipelines/Big Data) in Glasgow, hybrid, with an unknown contract length and a day rate of £360. Key skills include Python, PySpark, SQL, and Core Java.
Country
United Kingdom
Currency
£ GBP
Day rate
360
Date discovered
September 3, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Glasgow, Scotland, United Kingdom
Skills detailed
#Data Architecture #BitBucket #Data Lake #Security #AWS (Amazon Web Services) #Java #Data Warehouse #PySpark #Spark (Apache Spark) #SQL (Structured Query Language) #Git #Version Control #GitLab #Data Engineering #Data Pipeline #Python #Big Data
Role description
We are hiring for a Data Engineer (Python / PySpark / Data pipelines / Big Data).
Location: Glasgow - Hybrid
• Strong experience with Python, PySpark, and SQL (a minimal example pipeline is sketched below).
• Build and maintain robust data architectures and pipelines to ensure durable, complete, and consistent data transfer and processing.
• Proficiency in Core Java, including Collections, Concurrency, and Memory Management.
• Design and implement data warehouses and data lakes that can handle large volumes of data and meet all security requirements.
• A solid background in performance tuning, profiling, and resolving production issues in distributed systems.
• Experience with version control using Git and platforms such as GitLab or Bitbucket; AWS experience is a plus.
Key Skills: Data architectures / Data pipelines / Data warehouses / Data lakes / Python / PySpark
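To give a concrete feel for the day-to-day work these bullets describe, here is a minimal, illustrative PySpark sketch: read raw events, clean and deduplicate them, and write a partitioned table. All paths, column names, and table layouts are hypothetical placeholders, not details of this role or its employer.

```python
# Minimal illustrative PySpark pipeline. All paths and column names
# below are hypothetical placeholders, not details from the role.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Read raw JSON events from a (hypothetical) landing zone.
raw = spark.read.json("s3://example-bucket/landing/events/")

# Basic cleaning: drop rows missing the key, deduplicate, normalise types.
clean = (
    raw.dropna(subset=["event_id"])
       .dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
)

# Write the result as a date-partitioned Parquet table, a common
# layout for data lakes of the kind mentioned above.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/"))

spark.stop()
```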