

Experienced Data Engineer / Data Architect – Databricks Migration Expert
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Experienced Data Engineer/Data Architect specializing in Databricks migration, requiring 10+ years of IT experience, 4+ years with Databricks, and expertise in PySpark, SQL, and data lake architecture. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
June 14, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
St Louis, MO
🧠 - Skills detailed
#SQL (Structured Query Language) #Spark (Apache Spark) #Data Pipeline #Delta Lake #Data Architecture #PySpark #Data Governance #Agile #Data Modeling #Data Engineering #Security #Databricks #Cloud #MLflow #Spark SQL #Data Lakehouse #GCP (Google Cloud Platform) #Azure Databricks #Azure #ETL (Extract, Transform, Load) #Scala #AWS (Amazon Web Services) #Migration #Data Lake #Big Data
Role description
We are seeking an Experienced Data Engineer / Data Architect with 10+ years of IT experience and 4+ years of deep, hands-on Databricks expertise.
This is a high-impact, onsite role where you will lead end-to-end Databricks migration and modernization projects for enterprise customers. If you have successfully driven multiple full lifecycle migrations to Databricks on Azure or AWS, this opportunity is for you.
Key Responsibilities
• Lead and execute end-to-end Databricks migration projects, from assessment to production cutover.
• Design and implement data lake modernization, ETL/ELT re-engineering, and performance optimization.
• Architect secure, scalable, high-performance solutions on Databricks with Delta Lake, Unity Catalog, and MLflow.
• Align with stakeholders to define migration strategies, and oversee execution, testing, and production go-live.
• Collaborate with cross-functional teams in agile enterprise environments and manage vendors/partners as needed.
• Champion data governance and security best practices within the Databricks ecosystem.
• Provide hands-on delivery across Azure Databricks, AWS Databricks, and hybrid cloud architectures.
Required Skills and Experience
• 9+ years of overall experience in Data Engineering / Big Data / Cloud Data platforms.
• 4+ years of hands-on experience with Databricks (Azure, AWS, or GCP).
• Proven track record of leading 3-4+ full-cycle Databricks migration projects.
• Expertise in PySpark, SQL, Delta Lake, and Data Lakehouse architecture (see the illustrative sketch after this list).
• Strong understanding of Databricks architecture: Spark optimization, Delta Lake, Unity Catalog, MLflow.
• Deep knowledge of data modeling, distributed systems, and data pipelines.
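For context on the stack named above, here is a minimal, illustrative PySpark sketch of one typical migration step: landing a legacy ETL extract as a Delta Lake table. The paths, column names, and the main.sales.orders table are hypothetical placeholders, not details taken from this posting.

```python
# Illustrative sketch only: land a legacy extract as a Delta table.
# All paths, columns, and table names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("databricks-migration-sketch")
    .getOrCreate()
)

# Read a raw extract produced by the legacy ETL process (hypothetical path).
raw = (
    spark.read
    .option("header", "true")
    .csv("/mnt/landing/orders/")
)

# Light re-engineering of the legacy transform: type casting plus a load timestamp.
curated = (
    raw.withColumn("order_amount", F.col("order_amount").cast("double"))
       .withColumn("_loaded_at", F.current_timestamp())
)

# Write to a Delta Lake table registered in the metastore (e.g. Unity Catalog).
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("main.sales.orders")
)
```

Registering the output with saveAsTable, rather than writing to a bare storage path, keeps the table visible to catalog and governance tooling such as Unity Catalog.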