

Damco Solutions Limited
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Scala-Spark Developer) in London, on a contract inside IR35, offering competitive pay. Key skills include Apache Spark, Scala, Hadoop, and ETL. Strong software development experience and problem-solving skills are required.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 7, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Inside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
London Area, United Kingdom
-
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Unix #GCP (Google Cloud Platform) #Synapse #Azure Databricks #AWS EMR (Amazon Elastic MapReduce) #HDFS (Hadoop Distributed File System) #Apache Spark #Cloud #Agile #Azure #Monitoring #Hadoop #BigQuery #Scala #Data Engineering #AWS (Amazon Web Services) #Databricks #Observability #Data Pipeline #NoSQL #Scripting #SQL (Structured Query Language) #Spark (Apache Spark) #Shell Scripting #Data Processing #Version Control #GIT #Spark SQL #Big Data
Role description
Job Title: Scala-Spark Developer
Location: London
Employment Type: Contract – Inside IR35
About the Role
We’re looking for a seasoned Spark–Scala Developer to design, build, and optimize large-scale distributed data processing pipelines. You’ll work closely with cross-functional teams in an Agile environment to deliver high-quality, production-grade data solutions.
Key Responsibilities
• Design, develop, and maintain scalable data pipelines using Apache Spark (Core, SQL, Streaming) and Scala; a minimal illustrative sketch follows this list.
• Work with the Hadoop ecosystem: HDFS, Hive, and related Big Data technologies.
• Implement ETL/Data Warehousing solutions and optimize data processing performance.
• Collaborate with upstream/downstream teams to identify, troubleshoot, and resolve data issues.
• Translate business requirements into robust technical solutions aligned with quality standards.
• Participate in Agile ceremonies—daily standups, sprint planning, reviews, and retrospectives.
• Support production systems and contribute to continuous improvement and best practices.
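To give a sense of the kind of work described above, the sketch below shows a minimal Spark-Scala batch pipeline: it reads a Hive table, filters and aggregates it with Spark SQL functions, and writes partitioned Parquet back to HDFS. This is illustrative only; the database, table, column names, and paths are hypothetical placeholders, not part of this role's actual codebase.

// Illustrative Spark-Scala batch ETL sketch. All names and paths are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object DailyTradeEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-trade-etl")
      .enableHiveSupport() // allows reading/writing Hive tables on HDFS
      .getOrCreate()

    // Extract: read a raw Hive table
    val raw = spark.table("raw_db.trades")

    // Transform: drop bad records, then aggregate per counterparty per day
    val daily = raw
      .filter(col("amount").isNotNull && col("amount") > 0)
      .groupBy(col("trade_date"), col("counterparty"))
      .agg(sum("amount").as("total_amount"), count("*").as("trade_count"))

    // Load: write partitioned Parquet back to HDFS for downstream consumers
    daily.write
      .mode("overwrite")
      .partitionBy("trade_date")
      .parquet("hdfs:///warehouse/curated/daily_trades")

    spark.stop()
  }
}

In practice a job like this would typically be packaged with sbt and submitted to a YARN or cloud cluster via spark-submit.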
Required Qualifications
• Proven experience in software development and support.
• Strong proficiency in Scala with clean, maintainable coding practices.
• Hands-on expertise in Apache Spark (Spark Core, Spark SQL, Spark Streaming); see the streaming sketch after this list.
• Solid understanding of Data Structures & Algorithms.
• Experience with Hadoop, HDFS, Hive, and the broader Big Data stack.
• Strong database knowledge, including SQL, with exposure to NoSQL.
• Familiarity with ETL and Data Warehousing concepts.
• Excellent problem-solving, analytical, and communication skills.
• Comfortable working in a global delivery and cross-functional team environment.
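As a companion to the Spark Streaming qualification above, here is a minimal Structured Streaming sketch in Scala: it reads JSON events from a Kafka topic, applies a watermarked window aggregation, and writes to a console sink with checkpointing. The topic name, broker address, and schema are assumptions made for illustration only.

// Illustrative Spark Structured Streaming sketch. Requires the spark-sql-kafka connector
// on the classpath; topic, broker, and schema below are hypothetical.
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

object TradeStreamJob {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("trade-stream").getOrCreate()

    val schema = new StructType()
      .add("trade_id", StringType)
      .add("amount", DoubleType)
      .add("ts", TimestampType)

    // Read a stream of JSON events from Kafka and parse them into columns
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")
      .option("subscribe", "trades")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("e"))
      .select("e.*")

    // Windowed aggregation with a watermark to bound streaming state
    val counts = events
      .withWatermark("ts", "10 minutes")
      .groupBy(window(col("ts"), "5 minutes"))
      .agg(count("*").as("trades"), sum("amount").as("volume"))

    // Write results to a sink (console here for illustration) with checkpointing
    counts.writeStream
      .outputMode("update")
      .format("console")
      .option("checkpointLocation", "hdfs:///checkpoints/trade-stream")
      .start()
      .awaitTermination()
  }
}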
Nice to Have
• UNIX shell scripting for job scheduling/runs.
• Experience with CI/CD, version control (Git), and monitoring/observability for data pipelines.
• Knowledge of cloud data platforms (AWS EMR/Glue, Azure Databricks/Synapse, GCP Dataproc/BigQuery).






