

Tential Solutions
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience in Financial Services, focusing on data integration and streaming using Apache Flink and Kafka. The contract is for 6 months, remote, with a pay rate of "TBD."
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 17, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Storage #Data Lineage #ETL (Extract, Transform, Load) #Programming #Snowflake #Data Engineering #Leadership #Consulting #Scala #Delta Lake #Data Pipeline #Java #Databricks #ACID (Atomicity, Consistency, Isolation, Durability) #Batch #Python #Data Extraction #Migration #Kafka (Apache Kafka) #Data Integration
Role description
Project Overview
We are seeking a high-caliber Data Engineer to support a massive post-merger integration (PMI) between two tier-1 financial institutions. As a consultant, you will be responsible for the architecture and execution of real-time and batch data pipelines that unify disparate banking systems into a cohesive, scalable target-state platform.
Role Information
• Location: Remote
• Duration: 6 Months (High likelihood of extension based on project phases)
• Start Date: ASAP
Key Responsibilities
• Data Integration & Streaming: Design and implement robust, low-latency streaming pipelines using Apache Flink and Kafka to handle high-volume banking transactions.
• CDC Implementation: Manage data extraction from legacy systems using Postgres WAL Change Data Capture (CDC) to ensure data consistency during the migration.
• Modern Data Stack Management: Build and optimize Lakehouse architectures using Databricks and Snowflake, leveraging Delta Lake for ACID transactions and data versioning.
• Pipeline Optimization: Refine ETL/ELT processes to meet strict financial regulatory and performance benchmarks.
• Consulting & Collaboration: Work alongside Big 4 partners and client stakeholders to map data lineage and resolve integration blockers in a fast-paced merger environment.
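To make the CDC responsibility above concrete: a common pattern is to read Postgres WAL changes via a logical-decoding plugin and flatten them into change records for Kafka. The sketch below is a minimal, hedged illustration assuming wal2json-style output; the actual plugin, schema, and field names used on this project are not specified in the role.

```python
import json

def parse_wal2json_change(raw: str) -> list[dict]:
    """Flatten a wal2json-style logical-decoding message into simple
    change records suitable for publishing to a Kafka topic.
    (wal2json pairs column names and values in parallel lists.)"""
    msg = json.loads(raw)
    records = []
    for change in msg.get("change", []):
        names = change.get("columnnames", [])
        values = change.get("columnvalues", [])
        records.append({
            "op": change["kind"],  # insert / update / delete
            "table": f'{change["schema"]}.{change["table"]}',
            "after": dict(zip(names, values)),
        })
    return records

# Hypothetical sample event for illustration only
sample = json.dumps({
    "change": [{
        "kind": "update",
        "schema": "core",
        "table": "accounts",
        "columnnames": ["account_id", "balance"],
        "columnvalues": [42, "1050.00"],
    }]
})
print(parse_wal2json_change(sample))
```

In a real migration the flattened records would carry additional metadata (LSN, commit timestamp) so downstream consumers can preserve ordering and exactly-once semantics.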
Technical Requirements
• Experience: Minimum 5 years of professional experience in Data Engineering, ideally within Financial Services or Fintech.
• Core Stack: Expert-level proficiency in Apache Flink, Kafka, and Databricks.
• Database Expertise: Strong hands-on experience with Postgres (CDC) and Snowflake data warehousing.
• Storage Frameworks: Deep understanding of Delta Lake and Lakehouse design principles.
• Programming: Proficiency in Python, Scala, or Java for streaming applications.
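The core streaming stack above (Flink over Kafka) cannot be demonstrated without a cluster, but the shape of the computation can: a keyed tumbling-window aggregation over transaction events. The pure-Python sketch below illustrates that logic only; the event fields and window size are invented for illustration and do not come from the role description.

```python
from collections import defaultdict

def tumbling_window_sums(events, window_size_s=60):
    """Group (event_time_s, account, amount) tuples into fixed
    (tumbling) windows and sum amounts per account per window --
    the same computation a Flink keyed tumbling window performs."""
    sums = defaultdict(float)
    for ts, account, amount in events:
        window_start = (ts // window_size_s) * window_size_s
        sums[(window_start, account)] += amount
    return dict(sums)

# Hypothetical transaction events: (event_time_s, account, amount)
events = [
    (10, "acct-1", 100.0),
    (25, "acct-1", 50.0),
    (70, "acct-1", 5.0),   # falls into the next 60s window
    (30, "acct-2", 20.0),
]
print(tumbling_window_sums(events))
# {(0, 'acct-1'): 150.0, (0, 'acct-2'): 20.0, (60, 'acct-1'): 5.0}
```

A production Flink job adds what this sketch omits: event-time watermarks for late data, checkpointed state for fault tolerance, and exactly-once sinks.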
Consulting Attributes
• Financial Services Context: Understanding of banking data domains (e.g., retail banking, payments, or risk management) is highly preferred.
• Adaptability: Ability to hit the ground running in a high-pressure, ASAP-start environment.
• Communication: Strong ability to document technical workflows and present findings to both technical and non-technical leadership.






