Effervescent Consulting

Databricks Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Databricks Engineer on a contract of more than 6 months, paying $109,946.27 - $132,408.41 per year. Key skills required include Databricks, SQL, Python, and cloud services (Azure/AWS). Quantum engineering experience is a plus.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
601
πŸ—“οΈ - Date
November 17, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
πŸ“ - Location detailed
Remote
🧠 - Skills detailed
#Java #Python #Data Management #ML (Machine Learning) #SQL (Structured Query Language) #Data Modeling #Cloud #Data Pipeline #Azure #Database Design #Distributed Computing #ETL (Extract, Transform, Load) #Scala #Data Ingestion #AWS (Amazon Web Services) #Data Quality #Security #Datasets #Data Engineering #Data Science #Databricks #Spark (Apache Spark) #Storage #Big Data #Apache Spark
Role description
Job Summary
We are seeking a dynamic and highly skilled Databricks Engineer to join our innovative data team. In this role, you will be at the forefront of designing, developing, and maintaining scalable data pipelines and analytics solutions using the Databricks Unified Analytics Platform. Your expertise will drive impactful insights and enable data-driven decision-making across the organization. If you thrive in a fast-paced environment, have a passion for cutting-edge data engineering, and bring a strong foundation in quantum engineering principles, this is your opportunity to make a significant impact.

Duties
• Design, build, and optimize large-scale data pipelines on Databricks' cloud-based platform to ensure efficient data ingestion, transformation, and storage.
• Develop and maintain robust ETL (Extract, Transform, Load) processes that support analytics and machine learning initiatives.
• Collaborate with data scientists, analysts, and stakeholders to understand data requirements and translate them into scalable technical solutions.
• Implement best practices for data quality, security, and governance within the Databricks environment.
• Monitor system performance, troubleshoot issues promptly, and continuously improve pipeline reliability and efficiency.
• Integrate advanced quantum engineering concepts where applicable to enhance computational capabilities or optimize algorithms within the data platform.
• Stay current with emerging trends in big data technologies, cloud computing, and quantum computing to recommend innovative solutions.
• Document architecture designs, workflows, and procedures clearly for team knowledge sharing and future scalability.

Qualifications
• Proven experience as a Data Engineer or in a similar role, with hands-on expertise in Databricks platform development.
• Strong proficiency in SQL and in Python, Scala, or Java for building scalable data solutions.
• Deep understanding of cloud services such as Azure or AWS integrated with Databricks environments.
• Knowledge of distributed computing frameworks such as Apache Spark is essential.
• Familiarity with data modeling, database design, and data warehousing concepts.
• Experience working with large datasets in complex, high-performance environments.
• Ability to implement security best practices for sensitive data management.
• Excellent problem-solving skills combined with a proactive approach to troubleshooting issues swiftly.
• Strong communication skills to collaborate effectively across technical teams and business units.

Nice to have
• Knowledge of quantum engineering principles and their application to computational problems or algorithm optimization within big data platforms.

Join us if you're eager to apply your expertise in a role that combines cutting-edge technology with meaningful impact. We value innovation, continuous learning, and collaboration, empowering you to grow professionally while shaping the future of data engineering solutions.

Job Types: Full-time, Contract
Pay: $109,946.27 - $132,408.41 per year
Work Location: Remote