

Catapult Federal Services
Databricks Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." It requires remote work with quarterly travel to Gaithersburg, MD. Key skills include Databricks, Azure, Python, and Spark. A Databricks certification and 3+ years of relevant experience are required.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
January 27, 2026
Duration
Unknown
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Gaithersburg, MD
-
Skills detailed
#Data Pipeline #Computer Science #AI (Artificial Intelligence) #Infrastructure as Code (IaC) #Databricks #Microsoft Azure #Azure cloud #Cloud #Azure #Data Analysis #Data Management #Agile #Datasets #.Net #Automation #Data Engineering #Data Governance #Scala #Security #Spark (Apache Spark) #Data Science #Data Quality #Metadata #AWS (Amazon Web Services) #Python #Storage #Data Storage #ETL (Extract, Transform, Load) #Data Security #Compliance #R #Data Lake #Data Integrity #Data Catalog
Role description
• Not Open to C2C
• Databricks Data Engineer
Location: Remote with quarterly travel to Gaithersburg, MD (greater D.C. area preferred)
Clearance: Public Trust (U.S. Citizenship Required)
We are seeking a Databricks Data Engineer to develop and support new and existing data pipelines and data analytics environments in an Azure cloud-based data lake. As a data engineer, you will translate business requirements into data engineering solutions that support an enterprise-scale Microsoft Azure data analytics platform. You will maintain existing ETL operations and develop new pipelines, ensuring data quality and sound data management. The ideal candidate brings deep expertise in Databricks, a solid foundation in advanced AI technologies, and the critical thinking needed to create innovative solutions and resolve technical issues through cross-functional collaboration.
In this role, you will:
- Design, build, and optimize scalable data solutions using Databricks and Medallion Architecture.
- Manage ingestion routines for processing multi-terabyte datasets efficiently for multiple projects simultaneously, where each project may have multiple Databricks workspaces.
- Integrate data from various structured and unstructured sources to enable high-quality business insights, applying data analysis techniques to derive insights from large datasets.
- Implement effective data management strategies to ensure data integrity, availability, and accessibility. Identify opportunities for cost optimization in data storage, processing, and analytics operations.
- Monitor and support user requests, addressing platform or performance issues, cluster stability, Spark optimization, and configuration management.
- Collaborate with the team to enable advanced AI-driven analytics and data science workflows.
- Integrate with various Azure services including Azure Functions, Storage Services, Data Factory, Log Analytics, and User Management for seamless data workflows.
- Experience with the above Azure services is a plus.
- Provision and manage infrastructure using Infrastructure as Code (IaC).
- Apply best practices for data security, data governance, and compliance, ensuring support for federal regulations and public trust standards.
- Proactively collaborate with technical and non-technical teams to gather requirements and translate business needs into data solutions.
For this position, you must possess:
- BS degree in Computer Science or a related field and 3+ years of experience, or a Master's degree and 2+ years of experience
- 3+ years of experience developing and designing Ingestion flows (structured, streaming, and unstructured data) using cloud platform services with data quality
- Databricks Data Engineer certification and 2+ years of experience maintaining Databricks platform and development in Spark
- Ability to work directly with clients and act as front-line support for incoming client requests; clearly document and communicate solutions in the form of architecture and interface diagrams.
- Proficiency in Python, Spark, and R is essential; .NET-based development is a plus.
- Knowledge and experience with data governance, including metadata management, enterprise data catalog, design standards, data quality governance, and data security.
- Experience with Agile process methodology, CI/CD automation, and cloud-based developments (Azure, AWS).
- Not required, but additional education, certifications, and/or experience are a plus: certifications in Azure cloud, and knowledge of FinOps principles and cost management.
- You must be a U.S. citizen to be considered.