

Amaze Systems
Data Integration Lead (Streaming & Warehousing)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a long-term, remote contract role for a Data Integration Lead (Streaming & Warehousing). It requires 5+ years in data engineering, retail wealth expertise, strong skills in AWS, Kafka, Databricks, and SQL, and experience with Snowflake migrations.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 21, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Engineering #DevSecOps #ETL (Extract, Transform, Load) #Snowflake #Airflow #AWS (Amazon Web Services) #Spark SQL #Data Governance #Leadership #Data Quality #Databricks #Metadata #S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #Data Integration #Oracle #Migration #Spark (Apache Spark) #Observability #Strategy #PySpark #SQL (Structured Query Language) #Scala #Data Modeling #Batch #Infrastructure as Code (IaC) #dbt (data build tool) #Data Pipeline
Role description
Position: Data Integration Lead (Streaming & Warehousing) - Retail Wealth
Location: Remote
Duration: Long-Term Contract
Role
This is a hands-on Data Engineering Lead role building the streaming and batch data pipelines that power our digital self-service roadmap. You will design robust ETL/ELT on AWS, Kafka, Databricks, and Oracle, and guide the transition to Snowflake. The role blends architecture and delivery, owning data quality, observability, and governance in the Retail Wealth domain.
• Domain Expertise: Deep experience in retail wealth, broker dealers, wirehouses, family offices, custodians, trade order management, and digital self-service platforms (e.g., robo advisors).
• Technical Stack: AWS, Kafka, Databricks, Oracle, Snowflake (future), ETL processes.
• Hands-On Leadership: The integration lead must design and develop APIs and coordinate with architects; the data lead must be strong in ETL and data integration.
Responsibilities
• Design and build streaming & batch pipelines for ingestion, curation, and consumption (real-time + micro-batch); an illustrative pipeline sketch follows this list.
• Engineer scalable ELT/ETL on Databricks (PySpark/Spark SQL), integrating sources including APEX, custodians, broker dealers, and market/reference data.
• Optimize workloads on AWS (S3, Glue, EMR/Databricks, Lakehouse patterns); manage Oracle sources; drive Snowflake migration strategy and execution.
• Enforce data quality (DQ), lineage, metadata, and governance with best‑practice frameworks and tooling.
• Partner with analytics, product, and integration teams to support dashboards, operational reporting, advanced analytics, and ODS.
• Establish DevSecOps for data (versioned transformations, CI/CD, IaC patterns, secrets management).
• Define and track SLAs/SLOs, cost controls, and performance baselines.
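For orientation only (not a requirement of this posting), here is a minimal sketch of the kind of real-time/micro-batch ingestion described above, assuming a PySpark Structured Streaming job on Databricks that reads a hypothetical trades topic from Kafka and appends to a Delta table. Broker address, topic name, schema, and S3 paths are placeholders, not values from this posting.

```python
# Illustrative only: minimal PySpark Structured Streaming job reading trade events
# from Kafka and landing them in a Delta table in micro-batches.
# Broker address, topic, schema, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import from_json, col
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("retail-wealth-ingest").getOrCreate()

# Hypothetical event schema for an order/trade message
trade_schema = StructType([
    StructField("account_id", StringType()),
    StructField("symbol", StringType()),
    StructField("quantity", DoubleType()),
    StructField("price", DoubleType()),
    StructField("event_time", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
       .option("subscribe", "trades")                       # placeholder topic
       .option("startingOffsets", "latest")
       .load())

parsed = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(from_json(col("json"), trade_schema).alias("t"))
          .select("t.*"))

# Micro-batch append to a curated Delta table once per minute
query = (parsed.writeStream
         .format("delta")
         .outputMode("append")
         .option("checkpointLocation", "s3://example-bucket/checkpoints/trades")  # placeholder
         .trigger(processingTime="1 minute")
         .start("s3://example-bucket/curated/trades"))      # placeholder path
```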
Must-Have Qualifications
• 5+ years (7+ preferred) in data engineering with lead-level ownership delivering production pipelines.
• Retail Wealth expertise: custodians, broker dealers, wirehouses/family offices; order/trade and position/transaction data.
• Hands-on with Kafka (topics, partitions, schema registry), Databricks (PySpark/Spark SQL), the AWS data stack, and Oracle sources.
• Strong SQL and performance tuning; ELT/ETL design patterns; batch orchestration (e.g., Airflow/Databricks Jobs); an illustrative orchestration sketch follows this list.
• Practical data governance: lineage, DQ, PII controls, encryption, RBAC, and regulatory awareness (FINRA/SEC).
• Experience planning and executing Snowflake migrations (data modeling, performance, cost/pricing levers).
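As a point of reference for the batch-orchestration requirement, here is a minimal sketch assuming Airflow 2.4+ with the Databricks provider installed, submitting a nightly Databricks run. The DAG name, connection ID, cluster spec, and notebook path are hypothetical placeholders, not values from this posting.

```python
# Illustrative only: a minimal Airflow DAG that submits a nightly Databricks batch job.
# Assumes Airflow 2.4+ and apache-airflow-providers-databricks installed.
# DAG name, connection ID, cluster spec, and notebook path are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksSubmitRunOperator

with DAG(
    dag_id="nightly_positions_curation",           # placeholder name
    start_date=datetime(2025, 1, 1),
    schedule="0 2 * * *",                          # run daily at 02:00
    catchup=False,
) as dag:
    curate_positions = DatabricksSubmitRunOperator(
        task_id="curate_positions",
        databricks_conn_id="databricks_default",   # assumes a configured connection
        json={
            "run_name": "curate_positions",
            "new_cluster": {
                "spark_version": "14.3.x-scala2.12",
                "node_type_id": "i3.xlarge",
                "num_workers": 2,
            },
            "notebook_task": {
                "notebook_path": "/Repos/data/curate_positions"  # placeholder path
            },
        },
    )
```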
Nice to Have
• Familiarity with Apex Fintech data domains and vendor ecosystems (Orion, Envestnet, Pershing/Schwab).
• Knowledge of DTCC/NSCC, Morningstar data, advisory/UMA/SMA billing & commissions.
• Observability for data (Great Expectations/Deequ, Delta Live Tables), cost optimization, and dbt or equivalent; an illustrative Delta Live Tables sketch follows this list.
• Bachelor’s in CS/Engineering/Math/IS (Master’s a plus).
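To illustrate the Delta Live Tables item above, here is a minimal sketch of declarative data-quality expectations, assuming it runs inside a Databricks DLT pipeline. The source table and rule names are hypothetical.

```python
# Illustrative only: a Delta Live Tables (DLT) table with declarative data-quality
# expectations; runnable only inside a Databricks DLT pipeline.
# Source table and rule names are hypothetical placeholders.
import dlt
from pyspark.sql.functions import col

@dlt.table(comment="Curated trades with basic DQ expectations")
@dlt.expect_or_drop("valid_price", "price > 0")            # drop rows that fail the rule
@dlt.expect("non_null_account", "account_id IS NOT NULL")  # record the metric, keep rows
def curated_trades():
    return dlt.read_stream("raw_trades").select(           # placeholder source table
        col("account_id"),
        col("symbol"),
        col("quantity"),
        col("price"),
        col("event_time"),
    )
```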






