

Programmers.io
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer (Databricks Lead) in Detroit, MI, for 24+ months at a competitive pay rate. Requires 14+ years in Data Engineering, 4+ years with Databricks, and expertise in PySpark, Oracle migration, and cloud-native architectures.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 24, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Detroit, MI
-
🧠 - Skills detailed
#Delta Lake #ODI (Oracle Data Integrator) #Data Vault #DevOps #Databases #Scala #Data Migration #Cloud #Agile #Data Pipeline #Batch #ETL (Extract, Transform, Load) #Migration #Scrum #Strategy #AWS (Amazon Web Services) #Spark SQL #Oracle #Data Processing #Data Governance #Data Framework #DataOps #Data Engineering #SQL (Structured Query Language) #PySpark #Databricks #Vault #Spark (Apache Spark)
Role description
Position Title: Databricks Lead (Data Migration Lead)
Location: Detroit, MI – Onsite
Duration: 24+ Months
Job Overview
We are looking for a highly senior, deeply hands-on Databricks Lead to drive a large‑scale Oracle‑to‑Databricks migration covering schema migration, code conversion, and ODI job modernization. The ideal candidate has extensive experience building enterprise-grade data platforms on Databricks, has executed at least one greenfield Databricks implementation, and is exceptionally strong in PySpark, Spark SQL, framework development, and Databricks Workflows.
Key Responsibilities
• Architect, design, and implement cloud-native data platforms using Databricks (ingestion → transformation → consumption).
• Lead the full Oracle → Databricks migration including schema translation, ETL/ELT logic modernization, and ODI job replacement.
• Develop reusable PySpark frameworks, data processing patterns, and orchestration using Databricks Workflows.
• Build scalable, secure, and cost‑optimized Databricks infrastructure and data pipelines.
• Collaborate with business and technical stakeholders to drive data modernization strategy.
• Establish development best practices, coding standards, CI/CD, and DevOps/DataOps patterns.
• Provide technical mentorship and create training plans for engineering teams.
• Contribute to building MLOps and advanced operations frameworks.
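To illustrate the schema-translation step named in the responsibilities above, here is a minimal sketch of how an Oracle-to-Databricks type mapping might look. This is purely illustrative: the `ORACLE_TO_SPARK` table, helper names, and the chosen default mappings are assumptions, not the project's actual framework.

```python
# Illustrative sketch only: a starting point for the schema-translation
# step of an Oracle -> Databricks migration. The mapping choices below
# are assumptions and would be refined per column in a real migration.

# Common Oracle column types mapped to Spark SQL DDL types.
ORACLE_TO_SPARK = {
    "VARCHAR2": "STRING",
    "NVARCHAR2": "STRING",
    "CHAR": "STRING",
    "CLOB": "STRING",
    "NUMBER": "DECIMAL(38,10)",  # precision/scale usually refined per column
    "BINARY_FLOAT": "FLOAT",
    "BINARY_DOUBLE": "DOUBLE",
    "DATE": "TIMESTAMP",         # Oracle DATE carries a time component
    "TIMESTAMP": "TIMESTAMP",
    "BLOB": "BINARY",
}

def translate_column(name: str, oracle_type: str) -> str:
    """Return a Spark SQL column definition for one Oracle column."""
    base = oracle_type.split("(")[0].strip().upper()
    spark_type = ORACLE_TO_SPARK.get(base, "STRING")  # fall back conservatively
    return f"{name} {spark_type}"

def translate_schema(table: str, columns: list[tuple[str, str]]) -> str:
    """Build a Delta table DDL statement from (name, oracle_type) pairs."""
    cols = ",\n  ".join(translate_column(n, t) for n, t in columns)
    return f"CREATE TABLE {table} (\n  {cols}\n) USING DELTA"

ddl = translate_schema(
    "orders",
    [("order_id", "NUMBER(10)"),
     ("customer", "VARCHAR2(100)"),
     ("created_at", "DATE")],
)
print(ddl)
```

In practice a migration framework would drive this kind of mapping from the Oracle data dictionary (e.g. `ALL_TAB_COLUMNS`) and carry precision and scale through per column rather than using a blanket `DECIMAL(38,10)`.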
Required Qualifications
• 14+ years in Data Engineering/Architecture, with 4+ years of hands-on Databricks experience delivering end‑to‑end cloud data solutions.
• Strong experience migrating from Oracle/on‑prem systems to Databricks, including SQL, PL/SQL, ETL logic, and ODI pipelines.
• Deep hands-on expertise in:
• PySpark, Spark SQL, Delta Lake, Unity Catalog
• Building reusable data frameworks
• Designing high‑performance batch and streaming pipelines
• Proven experience with greenfield Databricks implementations.
• Strong understanding of cloud-native architectures on AWS and modern data platform concepts.
• Solid knowledge of data warehousing, columnar databases, and performance optimization.
• Good understanding of Agile/Scrum development processes.
• Bonus: Experience designing Data Products, Data Mesh architectures, Data Vault or enterprise data governance models.
• Good understanding of Oracle GoldenGate.






