

Take2 Consulting, LLC
Databricks SME
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks SME with a contract length of "unknown" and a pay rate of "unknown." Key skills include Databricks, data engineering, ETL processes, and cloud data platforms, with experience in Azure and Oracle migrations required.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
October 31, 2025
Duration
Unknown
Location
Unknown
Contract
Unknown
Security
Unknown
Location detailed
Virginia, United States
Skills detailed
#Data Ingestion #Computer Science #Data Migration #Data Access #SAP #Data Engineering #Data Security #Databricks #Scala #Migration #Data Pipeline #Cloud #Data Lake #Security #Code Reviews #Azure #Documentation #Oracle #"ETL (Extract, Transform, Load)" #Databases #Data Integrity
Role description
Position Overview
We are seeking an experienced Databricks SME to join our team in a crucial role, supporting the migration and development efforts related to cloud data platforms. This role requires a highly skilled professional with deep expertise in Databricks, data engineering, and associated security technologies to facilitate seamless data transformation, security controls, and platform integration within Azure environments. The successful candidate will play a pivotal role in optimizing data workflows, implementing security policies, and supporting ongoing migration projects.
Key Responsibilities
• Lead the development and optimization of data pipelines using Databricks, specifically focusing on processing and managing data stored in Parquet files.
• Configure and manage data access controls, implementing row-level security policies through Databricks and Immuta platforms.
• Collaborate with cross-functional teams to migrate Oracle databases to Databricks in Azure, ensuring data integrity and performance.
• Design, develop, and maintain Extract, Transform, Load (ETL) processes to facilitate data ingestion into the Summit Data Platform (SDP), the VA's data platform.
• Gain a comprehensive understanding of SAP and SDP architectures to align data engineering efforts efficiently.
• Provide technical guidance, feedback, and best practices for data security and platform enhancements.
• Assist in designing governance frameworks and procedures for data access and security.
• Participate in sprint planning, code reviews, and documentation efforts to ensure high-quality deliverables.
Qualifications & Experience:
• Proven experience as a Data Engineer with extensive hands-on work with Databricks on Azure.
• Strong familiarity with managing data stored in Parquet format and experience with cloud-based data lake architectures.
• Experience with Immuta or similar data security platforms, including the implementation of row-level security policies.
• Demonstrated expertise in building scalable ETL pipelines for data migration and integration.
• Knowledge of relational databases, especially Oracle, and experience transitioning data to cloud platforms.
• Comfortable learning new platforms such as the Summit Data Platform (SDP) and providing actionable insights.
• Excellent problem-solving skills, strong communication, and the ability to work collaboratively in a team environment.
Preferred Certifications & Education:
• Master's Degree in Computer Science, Electronics Engineering, or a related technical discipline, or equivalent professional experience.
• Minimum of 10 years of relevant technical experience, or 20 years with significant hands-on data engineering expertise.
• Relevant certifications in Databricks, Azure Data Services, or security platforms are a plus.






