

Stable
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 12-24 month contract with hybrid working. Key skills include ETL/ELT development, Databricks, and Apache Spark, along with experience in engineering environments. Familiarity with PDM, PLM, and cloud data platforms is essential.
🌎 - Country
United Kingdom
💱 - Currency
£ GBP
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 1, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Outside IR35
-
🔒 - Security
Unknown
-
📍 - Location detailed
Reading, England, United Kingdom
-
🧠 - Skills detailed
#Databricks #Documentation #Batch #Data Architecture #Data Ingestion #Security #ETL (Extract, Transform, Load) #Hadoop #Apache Spark #Data Processing #Data Quality #Spark (Apache Spark) #Kafka (Apache Kafka) #Azure #Synapse #Data Lake #Scala #Data Engineering #Data Management #Oracle #SAP #Cloud #Data Catalog #NiFi (Apache NiFi) #Metadata #Datasets #Airflow
Role description
Data Engineer
Reading
12-24 Month Contract
Hybrid
Inside/Outside IR35
Role Requirements:
• Proven experience as a Data Engineer working in engineering, manufacturing, or enterprise environments.
• Strong hands-on experience with ETL/ELT development across batch and streaming workloads.
• Skilled in building and maintaining data-streaming pipelines (e.g., Kafka, Spark Streaming, or equivalent).
• Proficiency with Databricks and Apache Spark for large-scale data processing.
• Experience with Hadoop and/or wider Apache ecosystem tools (e.g., Kafka, NiFi, Airflow).
• Strong understanding of Product Data Management (PDM), PLM, and/or engineering data structures.
• Experience integrating data from PDM, PLM, ERP, manufacturing, or other engineering systems.
• Hands-on experience with cloud data platforms (Azure Data Lake, Data Factory, Synapse, or similar).
• Ability to work with structured and unstructured engineering data across product lifecycle processes.
• Comfortable working in secure, regulated, and multi-national environments.
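As context for the ETL and data-quality skills listed above, the following is a minimal sketch of a batch transform step with basic validation, in plain Python (all record fields and source-system names here are hypothetical illustrations, not part of the role's actual stack):

```python
from dataclasses import dataclass


@dataclass
class PartRecord:
    """A cleaned engineering record (hypothetical schema for illustration)."""
    part_id: str
    mass_kg: float
    source: str  # originating system, e.g. "PLM" or "ERP"


def transform(rows):
    """Minimal ETL transform: validate, clean, and structure raw rows.

    Rows failing basic data-quality checks (missing part id,
    non-positive mass) are dropped; part ids are normalised to
    upper case.
    """
    out = []
    for row in rows:
        part_id = (row.get("part_id") or "").strip()
        mass = row.get("mass_kg")
        if not part_id or not isinstance(mass, (int, float)) or mass <= 0:
            continue  # data-quality gate: skip invalid rows
        out.append(PartRecord(part_id.upper(), float(mass), row.get("source", "unknown")))
    return out


raw = [
    {"part_id": "abc-1", "mass_kg": 2.5, "source": "PLM"},
    {"part_id": "", "mass_kg": 1.0},        # rejected: missing id
    {"part_id": "xyz-9", "mass_kg": -3.0},  # rejected: invalid mass
]
clean = transform(raw)
```

In a production setting the same validate-then-load pattern would typically run as a Spark or Databricks job rather than in-process Python.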
Key Responsibilities:
• Design, build, and optimise data ingestion, transformation, and integration pipelines.
• Implement scalable processing frameworks using Databricks, Spark, Hadoop, and Apache technologies.
• Develop and support real-time and near-real-time data-streaming pipelines for engineering and operational data.
• Integrate engineering datasets across PDM, PLM, ERP, manufacturing, and digital engineering platforms.
• Support modelling and structuring of engineering data in alignment with architectural guidance.
• Assist Data Architects with the implementation of data standards and data models (light ISO 10303 exposure where required).
• Develop data solutions aligned with defined architectural frameworks and security requirements.
• Ensure data quality, lineage, metadata, and governance practices are implemented in all pipelines.
• Collaborate with architects, engineering SMEs, and cross-functional teams to gather requirements and deliver robust data solutions.
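The lineage and metadata responsibilities above can be sketched as a simple pipeline-step wrapper that records what each step consumed and produced (a toy stdlib illustration, not any specific governance platform's API):

```python
import datetime


def run_step(name, func, inputs, lineage_log):
    """Run one pipeline step and record lineage metadata.

    Appends a record of the step name, input/output row counts, and
    a UTC timestamp to lineage_log, so that downstream governance
    tooling (hypothetical) could reconstruct how a dataset was produced.
    """
    outputs = func(inputs)
    lineage_log.append({
        "step": name,
        "rows_in": len(inputs),
        "rows_out": len(outputs),
        "run_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return outputs


log = []
doubled = run_step("double", lambda rows: [r * 2 for r in rows], [1, 2, 3], log)
```

Real pipelines would emit these records to a data catalog or lineage tool rather than an in-memory list, but the principle of capturing metadata at every step is the same.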
• Produce documentation, operational handover materials, and best-practice guidance.
• Troubleshoot, optimise, and maintain performance of cloud-based and distributed data systems.
Desirable Skills:
• Experience with PLM platforms such as Siemens Teamcenter, Windchill, 3DX, or Aras.
• Exposure to ERP systems (SAP, Oracle, IFS, or equivalent).
• Familiarity with manufacturing systems (MES/MOM).
• Working knowledge of engineering data standards (e.g., ISO 10303 / STEP).
• Experience with data cataloguing, metadata management, lineage tools, or governance platforms.
• Understanding of engineering lifecycle processes, digital thread concepts, and systems engineering methodologies.
• Experience in large-scale defence, aerospace, or highly regulated engineering environments.
• Strong communication and stakeholder engagement skills.
