

Senior Data Engineer – Finance & Capital Markets
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Finance & Capital Markets, offering a remote contract for U.S. candidates. Requires 6–10 years of Data Engineering experience, including 3+ years in finance, with skills in AWS, Apache Spark, and Kafka. Day rate: $480.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
480
🗓️ - Date discovered
September 27, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Databricks #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Spark (Apache Spark) #Snowflake #Data Architecture #Batch #Cloud #Scala #Big Data #Data Pipeline #Python #Data Quality #S3 (Amazon Simple Storage Service) #Data Engineering #Compliance #Data Warehouse #Delta Lake #Redshift #Apache Spark #dbt (data build tool) #Data Processing #Observability
Role description
🚀 We’re Hiring: Senior Data Engineer – Finance & Capital Markets
🌎 Location: Remote (U.S. candidates only)
🏦 Industry: Financial Services | Capital Markets
🧠 Experience: 6–10 years in Data Engineering (3+ years in Finance & Capital Markets)
We’re looking for a Senior Data Engineer who thrives in high-performance environments and has deep experience building data platforms for trading, risk, and market analytics.
In this role, you’ll design scalable real-time and batch data pipelines on AWS, using modern cloud and big data technologies to drive mission-critical financial applications.
🛠️ What You’ll Do:
• Build and optimize large-scale data pipelines using Apache Spark, Kafka, and AWS services
• Implement Medallion Architecture (Bronze/Silver/Gold layers) for structured data processing (see the sketch after this list)
• Work with Parquet and Iceberg table formats to support schema evolution and performance
• Design and support real-time streaming pipelines for market/trading and time-series data
• Collaborate with data architects on lakehouse and data warehouse architecture
• Ensure data quality, governance, and compliance with financial regulations
• Mentor junior engineers and contribute to best practices
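For context, here is a minimal PySpark sketch of the kind of Bronze/Silver flow described above: Kafka records land append-only in a Bronze Iceberg table, then a batch job parses them into a typed Silver table. The broker address, topic, schema, and table names are illustrative placeholders, and it assumes a Spark session configured with the Kafka connector and an Iceberg catalog named `lake`.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import (DoubleType, IntegerType, StringType,
                               StructField, StructType)

spark = SparkSession.builder.appName("trades-medallion").getOrCreate()

# Hypothetical payload for a market-data tick.
tick_schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("qty", IntegerType()),
    StructField("event_ts", StringType()),
])

# Bronze: land raw Kafka records append-only, preserving the original payload.
bronze = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder brokers
    .option("subscribe", "market.trades")              # placeholder topic
    .load()
    .select(
        F.col("value").cast("string").alias("raw"),
        F.col("timestamp").alias("ingest_ts"),
    )
)
(bronze.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "s3://bucket/checkpoints/bronze_trades")
    .toTable("lake.bronze.trades"))

# Silver: parse, type, and filter the raw payload. In practice this runs as a
# separate job from the streaming write above; shown inline for brevity.
silver = (
    spark.read.table("lake.bronze.trades")
    .withColumn("tick", F.from_json("raw", tick_schema))
    .select("tick.*", "ingest_ts")
    .where(F.col("price") > 0)
)
silver.writeTo("lake.silver.trades").createOrReplace()
```

A Gold layer would typically aggregate Silver into analytics-ready views, for example per-symbol OHLC bars for market analytics.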
✅ What You Bring:
• Hands-on experience with AWS services: S3, Glue, Redshift, EMR, Kinesis, Lake Formation
• Proficiency in Apache Spark, Kafka, and Python
• Deep understanding of Parquet, Iceberg, and Medallion Architecture (see the schema-evolution sketch after this list)
• Strong grasp of financial data models – trading data, risk analytics, market data
• 6–10 years of experience in Data Engineering, including 3+ years in Finance & Capital Markets
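Since Iceberg’s schema evolution is called out specifically, here is a small sketch of what that looks like from Spark SQL, assuming Iceberg’s Spark SQL extensions are enabled; the table and columns are the hypothetical ones from the sketch above. Because Iceberg resolves columns by field ID, these changes are metadata-only and do not rewrite existing Parquet data files.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-schema-evolution").getOrCreate()

# Add a venue column to the existing trades table; older snapshots still read.
spark.sql("ALTER TABLE lake.silver.trades ADD COLUMNS (venue string)")

# Widen qty from int to bigint; a metadata-only type promotion in Iceberg.
spark.sql("ALTER TABLE lake.silver.trades ALTER COLUMN qty TYPE bigint")

# Rename without rewriting files, since readers resolve columns by field ID.
spark.sql("ALTER TABLE lake.silver.trades RENAME COLUMN event_ts TO trade_ts")
```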
✨ Bonus Skills:
• Exposure to Databricks, dbt, Delta Lake, or Snowflake
• Experience with CI/CD for data workflows and data observability frameworks (see the data-quality sketch after this list)
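To give a flavor of the observability side, here is a minimal sketch of a data-quality gate that a CI/CD step could run against the hypothetical Silver table above, using plain PySpark assertions rather than any particular framework.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.read.table("lake.silver.trades")  # hypothetical Silver table

# Each check is a named boolean; limit(1) keeps the failure probes cheap.
checks = {
    "no_null_symbols": df.where(F.col("symbol").isNull()).limit(1).count() == 0,
    "positive_prices": df.where(F.col("price") <= 0).limit(1).count() == 0,
    "has_rows": df.limit(1).count() > 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # A non-zero exit fails the CI stage and blocks promotion downstream.
    raise SystemExit(f"Data-quality checks failed: {failed}")
print("All data-quality checks passed.")
```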
💼 Why Join Us?
• Build high-impact data platforms powering real-time trading and risk systems
• Collaborate with top-tier financial and technology professionals
• 100% Remote – U.S. candidates only