

Databricks Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer in Herndon, VA, on a long-term contract. Requires hands-on Databricks experience, proficiency in Python/Spark, and strong ETL/ELT knowledge. Preferred qualifications include a Bachelor's degree and Databricks/Azure certifications. US citizenship or green card required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 13, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Herndon, VA
-
🧠 - Skills detailed
#Data Processing #Scala #Python #Data Engineering #Computer Science #Agile #Data Accuracy #Documentation #Spark (Apache Spark) #Monitoring #Security #Data Science #Data Modeling #Databricks #SQL (Structured Query Language) #Data Pipeline #AI (Artificial Intelligence) #NLP (Natural Language Processing) #Data Ingestion #Dimensional Data Models #Data Quality #Compliance #Data Lake #ETL (Extract, Transform, Load) #Data Cleansing #Data Lakehouse #Data Governance #SQL Queries #Azure
Role description
Position 2: Databricks Engineer
Location: Herndon, VA
Onsite
Green Card (GC) or US Citizen (USC) only
Long-term contract
Key Responsibilities
• Implement and optimize data models and structures within Databricks to support efficient querying, analytics, and reporting.
• Design, develop, and maintain scalable data pipelines and ETL/ELT workflows, with a strong emphasis on dimensional data modeling and data quality (an illustrative sketch follows this list).
• Partner with engineering teams and business stakeholders to gather requirements and deliver reliable analytics solutions.
• Develop, optimize, and maintain SQL queries, notebooks, and scripts for data ingestion, transformation, and processing.
• Ensure data accuracy, consistency, and integrity through validation, monitoring, and data cleansing processes.
• Create and maintain comprehensive documentation for data pipelines, models, and analytics solutions.
• Monitor, troubleshoot, and optimize data pipelines and analytics workloads to ensure performance and reliability.
• Stay current with industry trends, including AI-driven analytics tools, semantic modeling, and emerging data engineering best practices.
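For context on the pipeline and data-quality work listed above, here is a minimal, illustrative PySpark sketch of a Databricks-style ETL step with a simple validation gate. The paths, table names, and columns are hypothetical placeholders, not requirements from this posting.

```python
# Minimal illustrative sketch only; paths, tables, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # a SparkSession is pre-created in Databricks notebooks

# Ingest: read raw order events from a hypothetical landing zone
raw = spark.read.format("json").load("/mnt/landing/orders/")

# Transform: basic cleansing and typing
orders = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["order_id"])
)

# Validate: fail fast if key fields are missing before publishing
bad_rows = orders.filter(F.col("order_id").isNull() | F.col("amount").isNull()).count()
if bad_rows > 0:
    raise ValueError(f"Data quality check failed: {bad_rows} invalid rows")

# Load: publish a curated Delta table for downstream analytics
orders.write.format("delta").mode("overwrite").saveAsTable("curated.fact_orders")
```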
Preferred Experience
• Hands-on experience implementing and operating solutions in Databricks.
• Strong understanding of ETL/ELT architectures and data ingestion patterns.
• Proficiency in Python and/or Spark for large-scale data processing.
• Experience designing and implementing dimensional data models in data lakehouse environments (see the sketch after this list).
• Familiarity with AI-driven analytics platforms, semantic modeling concepts, and NLP techniques.
• Experience working in SAFe Agile or other scaled Agile environments.
• Solid understanding of data governance, security, and compliance best practices in global environments with numerous data providers and consumers.
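As a rough illustration of dimensional modeling in a lakehouse, the sketch below upserts into a hypothetical dimension table with a Delta Lake MERGE; the staging and dimension table names and the join key are assumptions for illustration only.

```python
# Illustrative only; staging and dimension table names are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

updates = spark.table("staging.customer_updates")         # hypothetical staging data
dim = DeltaTable.forName(spark, "curated.dim_customer")   # hypothetical dimension table

# Upsert: update matching customer rows, insert new ones
(
    dim.alias("d")
    .merge(updates.alias("u"), "d.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```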
Qualifications:
• Bachelor's degree in Computer Science, Data Science, Engineering, or a related field is preferred.
• Master's degree is a plus.
• Databricks and Azure certifications strongly preferred.






