Senior Data Engineer (Finance Exp Needed)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 10-12+ years of experience, including 3+ years in Finance & Capital Markets. It is a remote contract focused on AWS, Apache Spark, and Kafka; the pay rate is not listed.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 27, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
Remote
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Databricks #AWS (Amazon Web Services) #Kafka (Apache Kafka) #Spark (Apache Spark) #Snowflake #AWS S3 (Amazon Simple Storage Service) #Data Architecture #Batch #Scala #Data Pipeline #Python #Data Quality #S3 (Amazon Simple Storage Service) #Data Engineering #Compliance #Delta Lake #Redshift #Apache Spark #dbt (data build tool) #Data Processing #Observability
Role description
Job Title: Senior Data Engineer (Finance exp needed)
Location: Remote

Position Overview
We are looking for an experienced Senior Data Engineer with strong expertise in building and optimizing large-scale data pipelines and platforms on AWS. The candidate must have experience in Finance and Capital Markets, with the ability to handle complex real-time and batch data processing requirements.

Key Responsibilities
• Design, develop, and optimize scalable data pipelines using AWS services, Apache Spark, and Kafka.
• Implement Medallion Architecture (Bronze, Silver, Gold layers) to structure data processing.
• Work with Parquet and Iceberg table formats to support schema evolution and high-performance queries.
• Build real-time streaming pipelines for market/trading data and time-series analytics.
• Collaborate with data architects to design lakehouse and warehouse solutions.
• Ensure data quality, governance, and compliance with financial regulations.
• Mentor and guide junior data engineers.

Required Skills & Experience
• Strong hands-on experience with AWS (S3, Glue, Redshift, EMR, Kinesis, Lake Formation).
• Expertise in Apache Spark, Kafka, and Python.
• Knowledge of Parquet, Iceberg, and Medallion Architecture.
• Strong understanding of financial data models, trading data, and risk analytics.
• 10-12+ years of experience in Data Engineering, including at least 3 years in Finance & Capital Markets.

Preferred
• Exposure to Databricks, dbt, Delta Lake, or Snowflake.
• Experience with CI/CD and data observability frameworks.
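For candidates unfamiliar with the term, the Medallion Architecture named in the responsibilities layers data as raw ingests (Bronze), validated and typed records (Silver), and business-level aggregates (Gold). A minimal conceptual sketch in plain Python, using made-up trade records; a real pipeline for this role would use Spark DataFrames over S3/Iceberg, not in-memory lists:

```python
# Conceptual Medallion (Bronze/Silver/Gold) sketch.
# Illustration only: hypothetical trade data, plain Python standing in
# for what would be Spark DataFrames on S3/Iceberg in production.

# Bronze: data exactly as ingested, including malformed records.
raw_trades = [
    {"symbol": "AAPL", "price": "189.5", "qty": "100"},
    {"symbol": "MSFT", "price": "n/a", "qty": "50"},   # unparseable price
    {"symbol": "AAPL", "price": "190.0", "qty": "200"},
]

def to_silver(bronze):
    """Silver: validated, typed records; rows failing parsing are dropped."""
    silver = []
    for row in bronze:
        try:
            silver.append({
                "symbol": row["symbol"],
                "price": float(row["price"]),
                "qty": int(row["qty"]),
            })
        except ValueError:
            continue  # data-quality rule: discard unparseable rows
    return silver

def to_gold(silver):
    """Gold: business-level aggregate (total notional per symbol)."""
    gold = {}
    for row in silver:
        gold[row["symbol"]] = gold.get(row["symbol"], 0.0) + row["price"] * row["qty"]
    return gold

silver = to_silver(raw_trades)
gold = to_gold(silver)
print(gold)  # {'AAPL': 56950.0}
```

The layering keeps raw data replayable (Bronze is never mutated), isolates data-quality rules in one step, and serves analytics from the Gold layer only.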