

RAZOR
Data Engineer with Databricks
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with Databricks, offering a long-term, 100% remote contract supporting the Department of Veterans Affairs. It requires 5+ years of experience and expertise in Azure Databricks, ETL pipelines, and data access governance using Immuta or Unity Catalog.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 28, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Washington DC-Baltimore Area
🧠 - Skills detailed
#Databricks #ADF (Azure Data Factory) #ETL (Extract, Transform, Load) #Azure #Cloud #Data Engineering #Cybersecurity #Migration #Azure Databricks #Security #Vault #Oracle #Data Access #Data Security #ADLS (Azure Data Lake Storage) #Data Privacy
Role description
Razor is seeking a hands-on Data Engineer with deep experience in Azure Databricks environments and enterprise data access governance (Databricks Unity Catalog or Immuta).
This role supports our long-term, 100% remote federal contract with the Department of Veterans Affairs. Specifically, the role supports VA’s migration from Oracle to the Summit Data Platform (SDP), with responsibility for building modern ETL pipelines, Parquet-based data models, and secure access frameworks aligned with VA data policies.
Key Responsibilities
• Engineer and optimize ETL / ELT pipelines to move data from Oracle and other systems into Azure Databricks using Parquet file formats (see the pipeline sketch after this list)
• Develop data access controls, policy enforcement, and row-level security (RLS) using Immuta or Databricks-native access governance (see the row-filter sketch after this list)
• Partner with VA teams to understand and operationalize the Summit Data Platform (SDP)
• Contribute technical input and feedback to data security, architecture, and modernization initiatives
• Document and enforce standards, patterns, and best practices for reusable and compliant data engineering
• Collaborate with cloud architects, cybersecurity, and product teams to ensure secure and efficient data delivery
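For illustration, a minimal PySpark sketch of the kind of Oracle-to-Parquet pipeline the first bullet describes, assuming a JDBC connection to Oracle and an ADLS Gen2 target. Every host, table, path, and credential name below is a placeholder, not a VA or SDP specific:

from pyspark.sql import SparkSession

# On Databricks `spark` is predefined; getOrCreate() keeps the sketch self-contained.
spark = SparkSession.builder.appName("oracle-to-parquet").getOrCreate()

# Read a source table from Oracle over JDBC (assumes the Oracle JDBC driver is on the cluster).
source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//oracle-host:1521/SERVICE")  # placeholder connection
    .option("dbtable", "SCHEMA.SOURCE_TABLE")                       # placeholder table
    .option("user", "etl_user")                                     # placeholder credentials
    .option("password", "...")                                      # see the secret-scope sketch below
    .option("fetchsize", "10000")                                   # rows fetched per round trip
    .load()
)

# Land the data in ADLS Gen2 as Parquet, partitioned for downstream query pruning.
(
    source_df.write.mode("overwrite")
    .partitionBy("load_date")  # placeholder partition column
    .parquet("abfss://curated@storageacct.dfs.core.windows.net/source_table/")
)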
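And one way the row-level security bullet might look with Databricks-native governance (Unity Catalog row filters); Immuta would express the same policy through its own engine. All catalog, schema, table, column, and group names here are illustrative:

# Define a filter function: admins see every row; everyone else sees only rows
# for facilities whose account group they belong to. Names are placeholders.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.governance.facility_filter(facility_id STRING)
    RETURN IF(is_account_group_member('data_admins'), TRUE,
              is_account_group_member(CONCAT('facility_', facility_id)))
""")

# Attach the filter so every query against the table is row-restricted.
spark.sql("""
    ALTER TABLE main.clinical.encounters
    SET ROW FILTER main.governance.facility_filter ON (facility_id)
""")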
Required Skills
• 5+ years as a Data Engineer
• Strong experience with Azure Databricks and Parquet file architecture
• Hands-on experience building secure ETL pipelines for enterprise-scale cloud data platforms
• Experience implementing data access policies via Immuta or Databricks Unity Catalog
• Understanding of Azure-native data services (ADLS, ADF, Key Vault, etc.; see the secret-scope sketch after this list)
• Ability to quickly learn and adapt to VA’s Summit Data Platform (SDP)
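As a sketch of how these Azure pieces typically fit together on Databricks, credentials can come from a Key Vault-backed secret scope rather than being hardcoded; the scope and key names below are illustrative, and dbutils is predefined in Databricks notebooks:

# Pull Oracle credentials from an Azure Key Vault-backed secret scope
# (scope and key names are placeholders).
oracle_user = dbutils.secrets.get(scope="kv-backed-scope", key="oracle-user")
oracle_pwd = dbutils.secrets.get(scope="kv-backed-scope", key="oracle-password")

# The values plug into the JDBC read shown earlier in place of literals.
source_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:oracle:thin:@//oracle-host:1521/SERVICE")
    .option("dbtable", "SCHEMA.SOURCE_TABLE")
    .option("user", oracle_user)
    .option("password", oracle_pwd)
    .load()
)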
Nice to Have
• Experience migrating from Oracle to modern cloud platforms
• Knowledge of data privacy, governance, and FedRAMP / FISMA concepts