

KamisPro
Databricks Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer on a 12-month hybrid contract in Adelphi, MD. Key skills include Databricks, ETL/ELT, Python/Spark, and dimensional modeling. A Bachelor’s degree and Databricks/Azure certifications are preferred. A background check is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 11, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
College Park, MD
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Python #Dimensional Data Models #Data Accuracy #Data Quality #Azure #Databricks #Data Ingestion #Data Processing #NLP (Natural Language Processing) #Computer Science #Agile #Spark (Apache Spark) #Data Engineering #Documentation #Scala #Security #ETL (Extract, Transform, Load) #Monitoring #Data Governance #SQL (Structured Query Language) #Data Pipeline #Data Science #Compliance #SQL Queries
Role description
This is a long-term contract (approximately 12 months) with hybrid work based in Adelphi, MD. A background check will be required.
The ideal candidate is a detail-oriented, analytical problem solver who enjoys tackling complex data challenges. They communicate clearly and collaborate effectively with cross-functional teams to deliver meaningful, data-driven solutions. They are adaptable, service-oriented, and curious, with a passion for modern data technologies and continuous improvement. Highly organized and proactive, they manage multiple priorities while maintaining a strong focus on quality, scalability, and innovation.
Key Responsibilities
• Implement and optimize data models within Databricks to support efficient querying, analytics, and reporting.
• Design, develop, and maintain scalable ETL/ELT pipelines, with a strong emphasis on dimensional modeling and data quality (see the sketch after this list).
• Partner with engineering teams and business stakeholders to gather requirements and deliver reliable, production-ready analytics solutions.
• Develop, optimize, and maintain SQL queries, notebooks, and scripts for data ingestion, transformation, and processing.
• Ensure data accuracy, consistency, and integrity through validation, monitoring, and cleansing processes.
• Create and maintain clear documentation for data pipelines, data models, and analytics solutions.
• Monitor, troubleshoot, and optimize data pipelines and workloads to ensure performance, reliability, and scalability.
• Stay current with industry trends, including AI-driven analytics, semantic modeling, and emerging data engineering best practices.
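To make the dimensional-modeling and data-quality responsibilities above concrete, here is a minimal PySpark sketch of the kind of work described. It is illustrative only, not this employer's actual pipeline: it assumes a Databricks workspace with Delta Lake, and the path, table, and column names (the raw orders landing zone, dim_customer, fact_orders) are hypothetical.

```python
# Illustrative sketch only -- not this employer's actual pipeline.
# Assumes a Databricks workspace with Delta Lake; all path, table,
# and column names here are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dim-model-sketch").getOrCreate()

# Ingest raw source data from a hypothetical landing zone.
raw = spark.read.json("/mnt/raw/orders/")

# Dimension: deduplicate the business key and assign a surrogate key.
dim_customer = (
    raw.select("customer_id", "customer_name", "region")
       .dropDuplicates(["customer_id"])
       .withColumn("customer_sk", F.monotonically_increasing_id())
)

# Fact: resolve each order to the dimension's surrogate key.
fact_orders = (
    raw.join(dim_customer.select("customer_id", "customer_sk"), "customer_id")
       .select("customer_sk", "order_id", "order_ts", "amount")
)

# Basic data-quality gate: fail fast on missing keys or negative amounts.
bad_rows = fact_orders.filter(
    F.col("customer_sk").isNull() | (F.col("amount") < 0)
).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed validation; aborting load")

# Persist as Delta tables for downstream querying and reporting.
dim_customer.write.format("delta").mode("overwrite").saveAsTable("dim_customer")
fact_orders.write.format("delta").mode("overwrite").saveAsTable("fact_orders")
```

The explicit validation step before the write reflects the posting's emphasis on ensuring accuracy and integrity through validation and monitoring rather than loading first and cleaning later.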
Preferred Experience
• Hands-on experience designing, implementing, and operating solutions in Databricks.
• Strong understanding of ETL/ELT architectures, data ingestion patterns, and data pipeline orchestration.
• Proficiency in Python and/or Spark for large-scale data processing (see the aggregation sketch after this list).
• Experience designing and implementing dimensional data models in lakehouse or modern data platform environments.
• Familiarity with AI-driven analytics platforms, semantic modeling concepts, and exposure to NLP techniques.
• Experience working in SAFe Agile or other scaled Agile frameworks.
• Solid understanding of data governance, security, and compliance best practices in global, multi-provider data environments.
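As a further illustration of the large-scale Spark processing called out above, the following sketch aggregates the hypothetical fact_orders table from the earlier example into a partitioned reporting table. Again, the table names and layout are assumptions for illustration, not anything specified by the role.

```python
# Illustrative sketch only; builds on the hypothetical fact_orders table above.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("agg-sketch").getOrCreate()

orders = spark.table("fact_orders")  # hypothetical table from the prior sketch

# Spark distributes this group-by across the cluster, so the same code
# scales from sample data to billions of rows.
daily_revenue = (
    orders.withColumn("order_date", F.to_date("order_ts"))
          .groupBy("order_date", "customer_sk")
          .agg(
              F.sum("amount").alias("revenue"),
              F.count("order_id").alias("order_count"),
          )
)

# Partition by date so downstream queries can prune files efficiently.
(daily_revenue.write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")
    .saveAsTable("daily_revenue"))
```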
Academic Credentials
• Bachelor’s degree in Computer Science, Data Science, Engineering, or a related field preferred.
• Master’s degree is a plus.
• Databricks and Azure certifications strongly preferred.