Databricks Developer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Developer on a contract basis, requiring 5+ years of experience in data engineering. The pay rate is not listed. Remote work is available. Key skills include Databricks, Apache Spark, Delta Lake, and proficiency in cloud platforms (Azure, AWS, GCP).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
June 4, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Tampa, FL
🧠 - Skills detailed
#AWS (Amazon Web Services) #Azure #GCP (Google Cloud Platform) #Security #Spark (Apache Spark) #Batch #Data Quality #Data Engineering #Delta Lake #Monitoring #Data Pipeline #Scala #Apache Spark #Data Ingestion #Data Processing #ETL (Extract, Transform, Load) #Python #Cloud #PySpark #Data Lakehouse #Big Data #MLflow #Logging #Data Science #Data Governance #SQL (Structured Query Language) #Data Lake #ML (Machine Learning) #Databricks #DevOps #GIT #Spark SQL
Role description
We are seeking an experienced Databricks Developer to join our data engineering team to build and optimize data pipelines and analytics solutions using the Databricks Lakehouse Platform. The ideal candidate will have strong hands-on experience with Apache Spark, Delta Lake, and cloud platforms (Azure, AWS, or GCP).
Responsibilities:
· Design, develop, and deploy scalable data pipelines using Databricks Notebooks, Spark, and Delta Lake (see the illustrative sketch after the lists below).
· Integrate data from diverse sources (structured/unstructured, batch/streaming) into the data lakehouse.
· Develop ETL/ELT workflows and manage data ingestion and transformation processes.
· Optimize Spark performance and troubleshoot large-scale data processing jobs.
· Implement data quality checks, logging, monitoring, and alerting systems.
· Collaborate with data scientists, analysts, and business stakeholders.
· Use MLflow to track and deploy machine learning models (if applicable).
· Maintain code repositories using Git and follow CI/CD best practices.
· Implement and enforce data governance, security, and access control using Unity Catalog or equivalent tools.
Required Skills:
· 5+ years of experience in data engineering, analytics, or big data development.
· Strong expertise in Databricks, Apache Spark, and Delta Lake.
· Proficiency in PySpark, SQL, and Python.
· Experience with at least one cloud platform: Azure, AWS, or Google Cloud.
· Familiarity with data warehousing, data lakes, and lakehouse architecture.
· Experience with CI/CD pipelines, Git, and DevOps for data projects.
· Strong problem-solving, communication, and teamwork skills.
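For illustration only (not part of the original posting): a minimal PySpark sketch of the kind of batch pipeline work described above, assuming a Databricks notebook where the `spark` session is predefined. The storage path and table name (`/mnt/raw/orders/`, `analytics.orders_bronze`) are hypothetical placeholders.

from pyspark.sql import functions as F

# Read raw CSV files landed in cloud storage (path is a hypothetical placeholder).
raw = (
    spark.read
    .option("header", "true")
    .csv("/mnt/raw/orders/")
)

# Simple data quality gate: keep only rows with a primary key, stamp ingestion time.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("ingested_at", F.current_timestamp())
)

# Append to a Delta table so downstream consumers get ACID guarantees and time travel.
(
    clean.write
    .format("delta")
    .mode("append")
    .saveAsTable("analytics.orders_bronze")  # hypothetical schema/table name
)

In practice, a job like this would be version-controlled in Git and promoted through CI/CD, in line with the DevOps expectations listed above.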