

Lead Data Engineer - Kafka, Snowflake, and Python
Featured Role | Apply directly with Data Freelance Hub
This role is a Lead Data Engineer position based in Jersey City, NJ, for a long-term contract. Required skills include Kafka, Snowflake, and Python, with experience in data pipelines and cloud platforms. Domain knowledge in financial markets is preferred.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 17, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Jersey City, NJ
Skills detailed: #Data Processing #Python #Data Pipeline #Data Engineering #MIFID (Markets in Financial Instruments Directive) #Compliance #Batch #Scripting #Azure #Spark (Apache Spark) #GCP (Google Cloud Platform) #ETL (Extract, Transform, Load) #Kafka (Apache Kafka) #SQL (Structured Query Language) #Data Manipulation #Cloud #AWS (Amazon Web Services) #Data Integration #Scala #Snowflake #Programming #Automation #Data Modeling #Data Warehouse
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Group Nine, is seeking the following. Apply via Dice today!
Title: Lead Data Engineer - Kafka, Snowflake, and Python
JR ID: JR1029512
Location: Jersey City, NJ (3 days onsite, Hybrid)
Duration: Long-Term Contract
Responsibilities:
• Lead the development and optimization of batch and real-time data pipelines, ensuring scalability, reliability, and performance.
• Architect, design, and deploy data integration, streaming, and analytics solutions leveraging Spark, Kafka, and Snowflake.
• Proactively and voluntarily support team members and peers in delivering their tasks, ensuring end-to-end delivery.
• Evaluate technical performance challenges and recommend tuning solutions.
• Act as a hands-on data service engineer to design, develop, and maintain the Reference Data System using modern data technologies including Kafka, Snowflake, and Python (a minimal pipeline sketch follows below).
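To make those responsibilities concrete, here is a minimal sketch, assuming the confluent-kafka and snowflake-connector-python packages, of a batch loader that consumes JSON messages from a Kafka topic and writes them to a Snowflake table. The broker, topic, table, and credentials are hypothetical placeholders, not details from the posting.

# Minimal sketch: Kafka -> Snowflake batch loader (illustrative only).
# Broker, topic, table, and credentials are placeholders, not from the posting.
import json

from confluent_kafka import Consumer
import snowflake.connector

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # placeholder broker
    "group.id": "reference-data-loader",    # hypothetical consumer group
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,            # commit offsets only after a successful load
})
consumer.subscribe(["reference-data"])      # hypothetical topic name

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",  # placeholders
    warehouse="LOAD_WH", database="REF_DB", schema="PUBLIC",
)

batch = []
try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        record = json.loads(msg.value())
        batch.append((record["id"], json.dumps(record)))  # assumes an "id" field
        if len(batch) >= 500:  # flush in batches; per-row inserts would be slow
            conn.cursor().executemany(
                "INSERT INTO reference_data (id, payload) VALUES (%s, %s)",
                batch,
            )
            consumer.commit(asynchronous=False)  # at-least-once delivery
            batch.clear()
finally:
    consumer.close()
    conn.close()

In practice a team might reach for Kafka Connect's Snowflake sink or Snowpipe Streaming instead of a hand-rolled loader, but the batch-then-commit pattern above is the core idea behind a reliable streaming-to-warehouse pipeline.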
Requirements:
• Proven experience in building and maintaining data pipelines, especially using Kafka, Snowflake, and Python.
• Strong expertise in distributed data processing and streaming architectures.
• Experience with the Snowflake data warehouse platform: data loading, performance tuning, and management (see the bulk-loading sketch after this list).
• Proficiency in Python scripting and programming for data manipulation and automation.
• Familiarity with the Kafka ecosystem (Confluent, Kafka Connect, Kafka Streams).
• Knowledge of SQL, data modeling, and ETL/ELT processes.
• Understanding of cloud platforms (AWS, Azure, Google Cloud Platform) is a plus.
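As a hedged illustration of the Snowflake data-loading requirement above, the sketch below bulk-loads a pandas DataFrame with the connector's write_pandas helper, which stages the data and issues a COPY INTO behind the scenes. The table name, sample data, and connection parameters are placeholders, not details from the posting.

# Minimal sketch: bulk-loading a DataFrame into Snowflake (illustrative only).
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

# Hypothetical sample data; in this role it might be reference or market data.
df = pd.DataFrame({"SYMBOL": ["AAPL", "MSFT"], "PRICE": [189.5, 402.1]})

conn = snowflake.connector.connect(
    account="my_account", user="my_user", password="***",  # placeholders
    warehouse="LOAD_WH", database="REF_DB", schema="PUBLIC",
)
try:
    # write_pandas stages the frame and runs COPY INTO, which is far faster
    # than row-by-row INSERTs for bulk loads.
    success, num_chunks, num_rows, _ = write_pandas(
        conn, df, table_name="DAILY_PRICES", auto_create_table=True
    )
    print(f"loaded {num_rows} rows in {num_chunks} chunk(s): {success}")
finally:
    conn.close()

Preferring this staged COPY INTO path over per-row inserts is exactly the kind of loading and performance-tuning judgment the requirement describes.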
Nice to have:
Domain knowledge in any of the areas below:
• Trade processing, settlement, reconciliation, and related back/middle-office functions within financial markets (Equities, Fixed Income, Derivatives, FX, etc.).
• Strong understanding of trade lifecycle events, order types, allocation rules, and settlement processes.
• Corporate data systems: funding support, planning & analysis, and regulatory reporting & compliance, including knowledge of regulatory standards (such as Dodd-Frank, EMIR, MiFID II) related to trade reporting and lifecycle management.