

Matlen Silver
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "unknown." Required skills include 5-8+ years in data engineering, proficiency in Python and SQL, and experience with AWS, GCP, or Azure.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 13, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Terraform #GIT #MongoDB #Python #ML (Machine Learning) #Data Modeling #Security #Data Science #Documentation #Infrastructure as Code (IaC) #Spark (Apache Spark) #ADF (Azure Data Factory) #Scala #Vault #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #dbt (data build tool) #GCP (Google Cloud Platform) #Databricks #Cloud #Monitoring #Snowflake #Delta Lake #AWS (Amazon Web Services) #Azure #SQL (Structured Query Language) #Data Ingestion #Airflow #Data Engineering
Role description
Job Description:
PME (Product Master Environment) is the firm-wide provider of product and pricing reference data to more than 400 consuming applications spanning multiple lines of business (Global Markets, Global Wealth & Investment Management, etc.), from the front office (trading applications, etc.) to the back office. PME sits within the broader Global Markets Reference Data Organization, which also covers Party, Client Account, and Firm Account reference data. We're looking for a hands-on Data Engineer who is fluent in building and operating modern, scalable ETL/ELT pipelines across AWS, GCP, and Azure. You'll design, implement, and optimize data ingestion, transformation, and orchestration workflows leveraging Snowflake and Databricks (Spark), with strong working knowledge of MongoDB for operational and analytics use cases. You'll collaborate with data scientists, platform engineers, and product teams to deliver reliable, secure, and cost-efficient data products.
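For context on the kind of pipeline work described above, here is a minimal PySpark sketch that reads product reference data from MongoDB and lands it in a Delta table. It assumes a Databricks/Spark environment with the MongoDB Spark connector installed; the connection URI, database, collection, key column, and target table are illustrative, not taken from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("pme-product-ingest")                # hypothetical job name
    .config("spark.mongodb.read.connection.uri", "mongodb://example-host:27017")  # placeholder URI
    .getOrCreate()
)

# Ingest: read an operational MongoDB collection (requires the MongoDB Spark connector).
products = (
    spark.read.format("mongodb")
    .option("database", "reference")              # hypothetical database
    .option("collection", "products")             # hypothetical collection
    .load()
)

# Transform: light cleanup plus an ingestion timestamp for downstream auditing.
curated = (
    products
    .dropDuplicates(["product_id"])               # assumed business key
    .withColumn("ingested_at", F.current_timestamp())
)

# Load: write to a Delta Lake table for analytics consumers.
curated.write.format("delta").mode("overwrite").saveAsTable("reference.products_curated")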
Job Requirements:
This is a development-centric role, and you will be expected to deliver technical solutions by managing all phases of the software development life cycle.
• This is a 100% hands-on application developer role
• Analyze business or technical requirements and design solutions to develop business-critical components
• Work closely with technical and business partners across the globe
• Responsible for application design, development, delivery, and post-production support
• Enhance the existing application stack and act as application manager for key reference data applications
Required
• 5-8+ years in data engineering or software engineering with hands-on pipeline development.
• Production experience with at least one of the following clouds: AWS, GCP, Azure.
• Strong proficiency in Python and SQL (Scala a plus).
• Deep experience with Snowflake (ELT, performance tuning, security) and Databricks/Spark (Delta Lake, structured streaming).
• Experience integrating and modeling data from MongoDB.
• Solid grasp of orchestration (Airflow/ADF/Cloud Composer), CI/CD, Git, and IaC (Terraform); a minimal orchestration sketch follows this list.
• Strong understanding of data modeling, distributed systems, file formats, and performance.
• Track record of shipping production-grade pipelines with monitoring, alerting, and documentation.
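As an illustration of the orchestration experience listed above, here is a minimal Airflow sketch that runs a nightly Snowflake load. It is not part of the role itself, and the DAG name, connection ID, schedule, and SQL objects are hypothetical.

from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="pme_reference_elt",                  # hypothetical DAG name
    start_date=datetime(2026, 1, 1),
    schedule="0 2 * * *",                        # assumed nightly schedule (Airflow 2.4+ 'schedule' argument)
    catchup=False,
) as dag:
    load_products = SnowflakeOperator(
        task_id="load_products",
        snowflake_conn_id="snowflake_default",   # assumes a Snowflake connection configured in Airflow
        sql="""
            COPY INTO reference.products_raw     -- hypothetical target table
            FROM @reference.product_stage        -- hypothetical external stage
            FILE_FORMAT = (TYPE = PARQUET);
        """,
    )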
Preferred
• Experience with Kafka/Kinesis/Event Hubs for streaming ingestion (see the streaming sketch after this list).
• dbt for transformation & testing in Snowflake/Databricks.
• Security best practices: encryption, KMS/Key Vault, tokenization, network isolation.
• Experience with cost governance/FinOps for data platforms.
• Exposure to ML feature pipelines and feature stores (e.g., Databricks Feature Store).
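To illustrate the streaming-ingestion preference above, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and appends it to a Delta table; the broker address, topic, checkpoint path, and table name are hypothetical.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pme-price-stream").getOrCreate()  # hypothetical job name

# Read a Kafka topic as a streaming DataFrame (requires the spark-sql-kafka package).
prices = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "product-prices")              # hypothetical topic
    .load()
    .selectExpr("CAST(key AS STRING) AS product_id", "CAST(value AS STRING) AS payload")
)

# Append to a Delta table, with a checkpoint for exactly-once bookkeeping.
query = (
    prices.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/product-prices")  # placeholder checkpoint path
    .outputMode("append")
    .toTable("reference.product_prices_raw")            # hypothetical target table
)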






