

Global Business Ser. 4u
Sr Data Engineer - Remote
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Engineer (Remote) with a contract duration of over 6 months, offering a competitive pay rate. Key skills required include Databricks, Apache Spark, DBT, Python, and SQL. Experience in financial services is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 10, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#PySpark #Macros #SQL (Structured Query Language) #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Python #Apache Spark #dbt (data build tool) #Data Engineering #Databricks #Scala #Airflow #Data Ingestion
Role description
We are seeking a Senior / Lead Data Engineer to join a mature enterprise data platform team built on Databricks, Spark, and DBT. This role is designed for an engineer who can lead, own, and drive work, not simply execute tasks.
The environment is past early build‑out and moving into scale, refinement, and optimization. The engineer will work across ingestion pipelines and refined (gold‑layer) assets, while also mentoring junior engineers and taking accountability for delivery.
This is not an order‑taker role. Candidates must bring ideas, make design decisions, and lead workstreams independently.
Key Responsibilities
• Lead the design, development, and optimization of data ingestion pipelines using Databricks and Spark (a minimal sketch follows this list)
• Build and maintain refined (gold‑layer) data assets using DBT and SQL
• Author and manage DBT models and macros to create reusable, scalable transformations
• Work within a medallion architecture (bronze / silver / gold layers)
• Partner with product owners, engineers, and stakeholders to understand data use cases
• Take ownership of workstreams and mentor junior engineers, including task delegation and progress oversight
• Identify performance, scalability, and cost‑optimization opportunities in Databricks workloads
• Contribute to reusable ingestion and transformation frameworks used across teams
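For candidates less familiar with this stack, here is a minimal sketch of what a bronze-layer ingestion step on Databricks might look like. It assumes Delta Lake as the storage format; the landing path and table name are hypothetical placeholders, not details of this role:

```python
# A minimal bronze-layer ingestion sketch, assuming Databricks with Delta Lake.
# The landing path and table name are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # supplied automatically on Databricks

# Read raw landing-zone files and stamp each record with ingestion metadata.
raw = (
    spark.read.format("json")
    .load("/mnt/landing/transactions/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
    .withColumn("_source_file", F.input_file_name())
)

# Append into the bronze Delta table; DBT models then refine the data
# through the silver and gold layers of the medallion architecture.
raw.write.format("delta").mode("append").saveAsTable("bronze.transactions")
```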
Required Technical Skills (Non‑Negotiable)
Candidates must have strong, hands‑on experience with:
• Databricks (production experience required)
• Apache Spark / PySpark
• DBT, including DBT macros
• Python
• SQL (used primarily through DBT)
• Strong understanding of data warehousing and medallion architecture concepts
This role requires real implementation depth, not surface‑level exposure.
Experience & Seniority Expectations
• Senior / Lead‑level engineer (title history as Lead is a strong plus)
• Proven ability to:
  • Take ownership and accountability
  • Drive solutions independently
  • Lead and mentor junior engineers
• Comfortable operating in a team of 7–8 engineers
• Able to stay on a project long‑term (1+ year mindset strongly preferred)
Nice‑to‑Have Skills
• Airflow (primary orchestration tool; see the sketch after this list)
• Experience working with hundreds of pipelines in large enterprise environments
• Prior experience building reusable data assets and frameworks
• Financial services exposure (banking, investments, insurance) — preferred but not required
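As an illustration of the orchestration layer, a minimal sketch of an Airflow DAG that triggers a DBT run. It assumes Airflow 2.4+ with the dbt CLI available; the DAG id, schedule, and model selector are hypothetical:

```python
# A minimal orchestration sketch, assuming Airflow 2.4+ and the dbt CLI on PATH.
# The DAG id, schedule, and model selector are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_gold_layer_refresh",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Rebuild the refined (gold-layer) DBT models after upstream ingestion.
    run_dbt_gold = BashOperator(
        task_id="dbt_run_gold",
        bash_command="dbt run --select tag:gold",  # assumes gold models are tagged
    )
```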
Domain & Business Acumen (Important)
• Financial services experience is not a hard requirement
• Candidates must be:
  • Willing to learn business context (banking, mutual funds, brokerage systems)
  • Interested in becoming a data SME who can answer stakeholder questions
The ability to connect technical work to business value is a strong differentiator.






