

ETL Developer (Prophecy)
Featured Role | Apply directly with Data Freelance Hub
This role is for an ETL Developer (Prophecy) with a contract length of "unknown" and a pay rate of "unknown". Key skills required include 2+ years with Prophecy and 5+ years in data engineering, particularly with Spark and Databricks.
Country
United Kingdom
Currency
£ GBP
Day rate
Unknown
Date discovered
June 18, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Leeds, England, United Kingdom
Skills detailed
#Data Lakehouse #Scala #Cloud #Automated Testing #Data Mart #GIT #Data Lake #Delta Lake #Spark (Apache Spark) #Airflow #SQL (Structured Query Language) #Data Catalog #Data Engineering #Talend #Jenkins #"ETL (Extract, Transform, Load)" #Version Control #Data Pipeline #Data Processing #Data Architecture #Informatica #Batch #DataStage #GitHub #Databricks #PySpark #Deployment
Role description
Key Responsibilities:
• Build and optimize Prophecy data pipelines for large-scale batch and streaming data workloads using PySpark
• Define end-to-end data architecture leveraging Prophecy integrated with Databricks, Spark, or other cloud-native compute engines
• Establish coding standards, reusable components, and naming conventions using Prophecy's visual designer and metadata-driven approach
• Implement scalable and efficient data models (e.g. star schema, SCD Type 2) for data marts and the analytics layer (a minimal PySpark sketch follows this list)
• Integrate Prophecy pipelines with orchestration tools such as Airflow and with data catalog tools for lineage
• Implement version control, automated testing, and deployment pipelines using Git and CI/CD tooling (e.g. GitHub and Jenkins)
• Monitor and tune the performance of Spark jobs; optimize data partitioning and caching strategies
• Experience converting legacy ETL workloads (e.g. DataStage, Informatica) into Prophecy pipelines using Prophecy's Transpiler component
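To make the SCD Type 2 responsibility above concrete, here is a minimal illustrative sketch of that pattern in plain PySpark. It is not Prophecy-generated code, and the table, column names, and dates (customer_id, city, effective_from/to, is_current, 2025-06-18) are assumptions made up for the example, not details from this posting.

```python
# Illustrative SCD Type 2 refresh in plain PySpark. All table, column, and date
# values below are hypothetical examples, not taken from the job posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Existing dimension (current + historical rows) and the latest source snapshot.
dim = spark.createDataFrame(
    [(1, "Leeds", "2024-01-01", None, True), (2, "York", "2024-01-01", None, True)],
    "customer_id INT, city STRING, effective_from STRING, effective_to STRING, is_current BOOLEAN",
)
src = spark.createDataFrame(
    [(1, "Manchester"), (3, "Leeds")],  # key 1 changed, key 3 is new
    "customer_id INT, city STRING",
)
load_date = F.lit("2025-06-18")

current = dim.filter("is_current")
history = dim.filter(~F.col("is_current"))

# Keys whose tracked attribute changed in the new snapshot.
changed_keys = (
    current.alias("d")
    .join(src.alias("s"), "customer_id")
    .where(F.col("d.city") != F.col("s.city"))
    .select("customer_id")
)

# 1) Close out the changed current rows; keep unchanged current rows as-is.
closed = (
    current.join(changed_keys, "customer_id", "left_semi")
    .withColumn("effective_to", load_date)
    .withColumn("is_current", F.lit(False))
)
unchanged = current.join(changed_keys, "customer_id", "left_anti")

# 2) Open new versions for changed keys and first versions for brand-new keys.
new_keys = src.join(current, "customer_id", "left_anti")
incoming = (
    src.join(changed_keys, "customer_id", "left_semi")
    .unionByName(new_keys)
    .withColumn("effective_from", load_date)
    .withColumn("effective_to", F.lit(None).cast("string"))
    .withColumn("is_current", F.lit(True))
)

result = history.unionByName(unchanged).unionByName(closed).unionByName(incoming)
result.orderBy("customer_id", "effective_from").show()
```

In a Prophecy project, logic like this would typically be assembled from visual gems that emit PySpark, with the merge target usually a Delta table in the lakehouse rather than in-memory DataFrames.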
Required skills & experience:
• 2+ years of hands-on experience with Prophecy (using PySpark)
• 5+ years of experience in data engineering with tools such as Spark, Databricks, Scala/PySpark, or SQL
• Strong understanding of ETL/ELT pipelines, distributed data processing, and data lake architecture
• Exposure to ETL tools such as Informatica, DataStage, or Talend is an added advantage
• Experience with Unity Catalog, Delta Lake, and modern data lakehouse concepts
• Strong communication and stakeholder management skills