

Catapult Federal Services
Databricks Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." It requires active Public Trust clearance, quarterly travel to Gaithersburg, MD, and expertise in Databricks, Azure services, Python, and Spark.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 12, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Yes
-
📍 - Location detailed
Washington DC-Baltimore Area
-
🧠 - Skills detailed
#Azure Data Factory #Data Security #ETL (Extract, Transform, Load) #Compliance #Data Ingestion #Data Engineering #Agile #R #Data Management #Computer Science #Storage #Microsoft Azure #Azure cloud #Metadata #Spark (Apache Spark) #Security #Data Integrity #Azure #Data Catalog #Datasets #Data Pipeline #Cloud #Scala #Databricks #Data Governance #.Net #AI (Artificial Intelligence) #Python #Automation #ADF (Azure Data Factory) #Infrastructure as Code (IaC)
Role description
Active Public Trust clearance required.
Quarterly Travel to Gaithersburg, MD required.
Job Description
We are seeking a Databricks Data Engineer to develop and support data pipelines and analytics environments within Azure cloud infrastructure. The engineer will translate business requirements into data solutions supporting an enterprise-scale Microsoft Azure-based data analytics platform. This includes maintaining ETL operations, developing new pipelines, ensuring data integrity, and enabling AI-driven analytics.
Responsibilities
• Design, build, and optimize scalable data solutions using Databricks and the Medallion Architecture (see the ingestion sketch after this list).
• Manage ingestion routines for multi-terabyte datasets across multiple Databricks workspaces.
• Integrate structured and unstructured data to enable high-quality business insights.
• Implement data management and governance strategies ensuring security and compliance.
• Support user requests, platform stability, and Spark performance tuning.
• Collaborate across teams to integrate with Azure Functions, Data Factory, Log Analytics, and more.
• Manage infrastructure using Infrastructure-as-Code (IaC) principles.
• Apply best practices for data security, governance, and federal compliance.
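As a rough illustration of the Medallion-style ingestion work described above, the PySpark sketch below lands raw files into a bronze Delta table and promotes a cleaned subset to a silver table. It assumes a Databricks runtime (or an environment with Delta Lake configured); the paths, table names, and columns (such as event_id, event_ts, and payload) are hypothetical, and a production pipeline on this platform would likely also use Databricks tooling such as Auto Loader or Delta Live Tables.

```python
# Minimal Medallion-style ingestion sketch (hypothetical paths, tables, and columns).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-ingest").getOrCreate()

# Bronze: land raw JSON exactly as received, stamped with ingestion metadata.
raw = (
    spark.read.json("/mnt/landing/events/")          # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
raw.write.format("delta").mode("append").saveAsTable("bronze.events")

# Silver: deduplicate on a business key and enforce a basic quality rule.
silver = (
    spark.table("bronze.events")
    .dropDuplicates(["event_id"])                     # hypothetical business key
    .filter(F.col("event_ts").isNotNull())
    .select("event_id", "event_ts", "payload", "_ingested_at")
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.events")
```

Gold-layer aggregates would follow the same pattern, reading from silver tables and writing curated outputs for analytics and AI workloads.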
Qualifications
• BS in Computer Science or related field with 3+ years of experience, or MS with 2+ years.
• 3+ years of experience building ingestion flows for structured and unstructured data in the cloud.
• Databricks Data Engineer certification and 2+ years maintaining a Databricks platform and developing with Spark.
• Strong skills in Python, Spark, and R (.NET is a plus); a Spark tuning sketch follows this list.
• Experience with Azure services (Data Factory, Storage, Functions, Log Analytics).
• Familiarity with data governance, metadata management, and enterprise data catalogs.
• Experience with Agile methodology, CI/CD automation, and cloud-based development.
• U.S. Citizenship and active Public Trust clearance required.
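To give a concrete flavor of the Spark performance tuning called out in the responsibilities and the Python/Spark depth expected here, the sketch below shows a few common PySpark levers: right-sizing shuffle parallelism, broadcasting a small dimension table in a join, and caching a reused result. The table names, the customer_id join key, and the partition count of 400 are hypothetical; appropriate settings depend entirely on the actual cluster and data volumes.

```python
# Illustrative Spark tuning levers (hypothetical tables and values, not prescriptive).
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Right-size shuffle parallelism for the job's data volume instead of the default 200.
spark.conf.set("spark.sql.shuffle.partitions", "400")

facts = spark.table("silver.events")           # large fact table (hypothetical)
dims = spark.table("silver.dim_customers")     # small dimension table (hypothetical)

# Broadcast the small dimension so the join avoids shuffling the large fact table.
joined = facts.join(broadcast(dims), on="customer_id", how="left")

# Cache only when the result feeds multiple downstream actions.
joined.cache()
daily = joined.groupBy(F.to_date("event_ts").alias("event_date")).count()
daily.write.format("delta").mode("overwrite").saveAsTable("gold.daily_event_counts")
```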
Skills:
• Databricks
• Azure Data Factory
• Python
• Spark
• R
• Data Ingestion
• ETL Development
• Medallion Architecture
• Infrastructure as Code (IaC)
• Data Governance
• CI/CD
• Cloud Data Engineering
• Metadata Management
• Azure Functions
• Log Analytics





