Databricks Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Data Engineer on a 6-month contract in Seattle, WA, offering competitive pay. Candidates must have 5+ years in data engineering, 3+ years with Databricks, and expertise in ETL, cloud platforms, and data governance.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 22, 2025
πŸ•’ - Project duration
More than 6 months
-
🏝️ - Location type
On-site
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Seattle, WA
-
🧠 - Skills detailed
#AWS (Amazon Web Services) #Automation #Storage #SQL (Structured Query Language) #Infrastructure as Code (IaC) #Strategy #Scrum #Scala #Monitoring #Data Architecture #Delta Lake #Microsoft Power BI #Data Access #Quality Assurance #Terraform #Cloud #Databases #Data Quality #Programming #Spark (Apache Spark) #Version Control #Agile #Data Lake #GCP (Google Cloud Platform) #DevOps #Azure DevOps #Azure #Databricks #Data Pipeline #Data Processing #Data Integration #Data Governance #Indexing #Data Modeling #"ETL (Extract #Transform #Load)" #Data Storage #BI (Business Intelligence) #NoSQL #Automated Testing #Data Engineering #SQL Server #Tableau #Data Extraction #GIT #Graph Databases #Deployment #Documentation #PySpark #Visualization #Python #MongoDB #PostgreSQL
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Purple Drive Technologies LLC, is seeking the following. Apply via Dice today! Position Summary We are seeking an experienced Data Engineer specializing in Databricks platform for a 6-month contract position based in Seattle, WA. This role focuses on designing, building, and deploying robust data pipelines and ETL processes within a cloud data platform environment. The ideal candidate will have strong expertise in Databricks, data modeling, and cross-functional collaboration to support enterprise data governance initiatives. Responsibilities DATA PIPELINE DEVELOPMENT β€’ Design, build, and deploy data extraction, transformation, and loading (ETL) processes and pipelines β€’ Extract data from various sources including databases, APIs, and data files β€’ Develop and maintain scalable data pipelines within Databricks Cloud Data Platform β€’ Ensure data quality, reliability, and performance across all pipeline processes β€’ Implement data validation and monitoring mechanisms DATABRICKS PLATFORM MANAGEMENT β€’ Build comprehensive data models that reflect domain expertise and meet current business needs β€’ Ensure data models remain flexible and adaptable as business strategy evolves β€’ Monitor and optimize Databricks cluster performance for cost-effective scaling and resource utilization β€’ Implement and maintain Delta Lake for optimized data storage, ensuring data reliability, performance, and versioning β€’ Leverage Databricks Unity Catalog for data governance and collaboration DEVOPS & AUTOMATION β€’ Automate CI/CD pipelines for data workflows using Azure DevOps β€’ Implement version control and deployment strategies for data pipelines β€’ Ensure automated testing and quality assurance for data processes β€’ Maintain infrastructure as code practices for data platform components DATA ARCHITECTURE & STORAGE β€’ Demonstrate expertise in database storage concepts including data lakes, relational databases, NoSQL, Graph databases, and data warehousing β€’ Design and implement efficient data storage solutions β€’ Optimize data access patterns and query performance β€’ Ensure proper data partitioning and indexing strategies COLLABORATION & COMMUNICATION β€’ Collaborate with cross-functional teams to support data governance initiatives β€’ Communicate technical concepts effectively to both technical and non-technical audiences β€’ Work with business stakeholders to understand data requirements β€’ Provide documentation and knowledge transfer for data solutions Qualifications REQUIRED EXPERIENCE β€’ 5+ years of experience in data engineering and ETL development β€’ 3+ years of hands-on experience with Databricks platform β€’ Strong experience with cloud data platforms (Azure, AWS, or Google Cloud Platform) β€’ Proven track record in building and maintaining data pipelines at scale Technical Skills β€’ Databricks Expertise: β€’ Databricks workspace and cluster management β€’ Delta Lake implementation and optimization β€’ Databricks Unity Catalog β€’ Spark and PySpark programming β€’ Programming & Development: β€’ Strong coding skills in Python, Scala, or SQL β€’ Experience with data transformation and data quality frameworks β€’ Knowledge of data integration patterns and best practices β€’ Database & Storage: β€’ Data lake architecture and design β€’ Relational databases (SQL Server, PostgreSQL, etc.) β€’ NoSQL databases (MongoDB, Cassandra, etc.) β€’ Graph databases and data warehousing concepts β€’ DevOps & Automation: β€’ Azure DevOps or similar CI/CD tools β€’ Infrastructure as Code (Terraform, ARM templates) β€’ Git version control and branching strategies Soft Skills β€’ Strong analytical and problem-solving abilities β€’ Excellent written and verbal communication skills β€’ Ability to explain complex technical concepts to diverse audiences β€’ Collaborative mindset for cross-functional team environments β€’ Detail-oriented with focus on data quality and reliability Preferred Qualifications β€’ Databricks certification (Data Engineer Associate/Professional) β€’ Experience with real-time data processing and streaming β€’ Knowledge of data governance frameworks and best practices β€’ Experience with data visualization tools (Power BI, Tableau) β€’ Background in Agile/Scrum development methodologies