

Tential Solutions
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position on a 6-month remote contract (with likely extension), focusing on data integration in financial services. Key skills include Apache Flink, Kafka, Databricks, and Postgres CDC. A minimum of 5 years of relevant experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 5, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Migration #Data Engineering #Data Pipeline #Programming #Kafka (Apache Kafka) #Python #Data Integration #ACID (Atomicity, Consistency, Isolation, Durability) #Snowflake #Consulting #Scala #Delta Lake #Data Lineage #Data Extraction #Databricks #Batch #Leadership #ETL (Extract, Transform, Load) #Java #Storage
Role description
Project Overview
We are seeking a high-caliber Data Engineer to support a massive post-merger integration (PMI) between two tier-1 financial institutions. As a consultant, you will be responsible for the architecture and execution of real-time and batch data pipelines that unify disparate banking systems into a cohesive, scalable target-state platform.
Role Information
• Location: Remote
• Duration: 6 Months (High likelihood of extension based on project phases)
• Start Date: ASAP
Key Responsibilities
• Data Integration & Streaming: Design and implement robust, low-latency streaming pipelines using Apache Flink and Kafka to handle high-volume banking transactions (a minimal sketch follows this list).
• CDC Implementation: Manage data extraction from legacy systems using Postgres WAL Change Data Capture (CDC) to ensure data consistency during the migration (see the CDC sketch after this list).
• Modern Data Stack Management: Build and optimize Lakehouse architectures using Databricks and Snowflake, leveraging Delta Lake for ACID transactions and data versioning.
• Pipeline Optimization: Refine ETL/ELT processes to meet strict financial regulatory and performance benchmarks.
• Consulting & Collaboration: Work alongside Big 4 partners and client stakeholders to map data lineage and resolve integration blockers in a fast-paced merger environment.
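To make the streaming responsibility above concrete, the sketch below shows a minimal PyFlink Table API job that reads a transaction stream from Kafka and runs a continuous aggregation. The topic, broker address, and field names are illustrative assumptions, not details from this engagement.

```python
# Minimal sketch of a Flink SQL source over Kafka (assumed topic/schema).
# Requires the flink-sql-connector-kafka jar on the Flink classpath.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Hypothetical transaction stream; topic and fields are placeholders.
t_env.execute_sql("""
    CREATE TABLE txn_stream (
        txn_id     STRING,
        account_id STRING,
        amount     DECIMAL(18, 2),
        ts         TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'bank.transactions',
        'properties.bootstrap.servers' = 'broker:9092',
        'properties.group.id' = 'pmi-ingest',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Example continuous query: running per-account totals over the stream.
result = t_env.sql_query("""
    SELECT account_id, SUM(amount) AS total_amount
    FROM txn_stream
    GROUP BY account_id
""")
result.execute().print()
```

In a real pipeline the result would feed a sink (Kafka, Delta Lake, etc.) rather than print, but the pattern, declaring the source in Flink SQL and querying it continuously, is the same.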
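The CDC responsibility could be prototyped in the same style with the Flink CDC connector for Postgres, which consumes the write-ahead log through a logical replication slot. This is a sketch under stated assumptions: it requires the flink-connector-postgres-cdc jar and wal_level = logical on the source database, and every connection value below is a placeholder.

```python
# Sketch of a Postgres WAL CDC source using the Flink CDC connector.
# Assumes the flink-connector-postgres-cdc jar is on the classpath and
# the source database has wal_level = logical. All values are placeholders.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

t_env.execute_sql("""
    CREATE TABLE accounts_cdc (
        account_id STRING,
        balance    DECIMAL(18, 2),
        updated_at TIMESTAMP(3),
        PRIMARY KEY (account_id) NOT ENFORCED
    ) WITH (
        'connector' = 'postgres-cdc',
        'hostname' = 'legacy-db.example.com',
        'port' = '5432',
        'username' = 'cdc_reader',
        'password' = '***',
        'database-name' = 'corebanking',
        'schema-name' = 'public',
        'table-name' = 'accounts',
        'slot.name' = 'pmi_migration_slot',
        'decoding.plugin.name' = 'pgoutput'
    )
""")

# Changelog rows (inserts/updates/deletes) can now be routed downstream,
# e.g. into a Kafka sink or the lakehouse landing zone.
t_env.sql_query("SELECT * FROM accounts_cdc").execute().print()
```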
Technical Requirements
• Experience: Minimum 5 years of professional experience in Data Engineering, ideally within Financial Services or Fintech.
• Core Stack: Expert-level proficiency in Apache Flink, Kafka, and Databricks.
• Database Expertise: Strong hands-on experience with Postgres (CDC) and Snowflake data warehousing.
• Storage Frameworks: Deep understanding of Delta Lake and Lakehouse design principles (an upsert sketch follows this list).
• Programming: Proficiency in Python, Scala, or Java for streaming applications.
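For the Delta Lake requirement above, a common Databricks pattern is an idempotent MERGE of change records into a governed target table, relying on Delta's ACID transaction log for consistency and versioning. This is a minimal sketch assuming a Databricks (or delta-spark) environment; the table, column, and path names are hypothetical.

```python
# Sketch of an ACID upsert into Delta Lake (assumes delta-spark or a
# Databricks runtime; table/column/path names are illustrative).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pmi-upsert").getOrCreate()

# Hypothetical landing zone fed by the CDC pipeline.
updates = spark.read.format("json").load("/mnt/landing/accounts_cdc/")

target = DeltaTable.forName(spark, "core.accounts")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.account_id = s.account_id")
    .whenMatchedUpdateAll()       # apply changed rows from the CDC feed
    .whenNotMatchedInsertAll()    # insert rows not yet in the target
    .execute()
)

# Delta's transaction log also provides versioning ("time travel"):
previous = spark.sql("SELECT * FROM core.accounts VERSION AS OF 0")
```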
Consulting Attributes
• Financial Services Context: Understanding of banking data domains (e.g., retail banking, payments, or risk management) is highly preferred.
• Adaptability: Ability to hit the ground running in a high-pressure, ASAP-start environment.
• Communication: Strong ability to document technical workflows and present findings to both technical and non-technical leadership.