New Era Technology Europe

Sr Databricks Engineer (Only Locals - Georgia)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr Databricks Engineer in Alpharetta, GA, lasting 12+ months, with an unspecified pay rate. It requires 9+ years of experience and strong skills in Terraform, Python, Java, and Databricks, preferably gained in regulated environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 17, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Alpharetta, GA
-
🧠 - Skills detailed
#GitHub #Storage #Kanban #Agile #Databricks #Logging #Programming #Security #Code Reviews #Monitoring #Automation #Azure #Terraform #Scala #Compliance #DevOps #Computer Science #Data Science #Java #Prometheus #Scrum #REST API #Grafana #MLflow #Debugging #Infrastructure as Code (IaC) #Vault #Delta Lake #IAM (Identity and Access Management) #VPC (Virtual Private Cloud) #AI (Artificial Intelligence) #Python #GCP (Google Cloud Platform) #Linux #Cloud #Deployment #AWS (Amazon Web Services) #Observability #Data Governance #REST (Representational State Transfer) #ML (Machine Learning)
Role description
Role: Senior Databricks AI Platform SRE
Location: Alpharetta, GA (Hybrid)
Duration: 12+ Months

Job Description
We are looking for a Senior Databricks AI Platform SRE to join our Platform SRE team. This role will be critical in designing, building, and optimizing a scalable, secure, and developer-friendly Databricks platform to enable Machine Learning (ML) and Artificial Intelligence (AI) workloads at enterprise scale. You will partner with ML engineers, data scientists, platform teams, and cloud architects to automate infrastructure, enforce best practices, and streamline the end-to-end ML lifecycle using modern cloud-native technologies.

Qualifications
• Total experience: 9+ years.
• Bachelor's or master's degree in Computer Science, Engineering, or a related field.

Responsibilities:
• Design and implement secure, scalable, and automated Databricks environments to support AI/ML workloads.
• Develop infrastructure-as-code (IaC) solutions using Terraform for provisioning Databricks, cloud resources, and network configurations.
• Build automation and self-service capabilities using Python, Java, and APIs for platform onboarding, workspace provisioning, orchestration, and monitoring.
• Collaborate with data science and ML teams to define compute requirements, governance policies, and efficient workflows across dev/qa/prod environments.
• Integrate Databricks offerings with cloud-native services on Azure/AWS.
• Champion CI/CD and GitOps for managing ML infrastructure and configurations.
• Ensure compliance with enterprise security and data governance policies using RBAC, audit controls, encryption, network isolation, and policies.
• Monitor platform performance, reliability, and usage, and drive improvements to optimize cost and resource utilization.

Required Skills:
• Proven experience with Terraform for building and managing infrastructure.
• Strong programming skills in Python and Java.
• Hands-on experience with cloud networking, identity and access management, key vaults, monitoring, and logging in Azure.
• Hands-on experience with Databricks (workspace management, clusters, jobs, MLflow, Delta Lake, Unity Catalog, Mosaic AI).
• Deep understanding of Azure or AWS infrastructure (e.g., IAM, VNets/VPC, storage, networking, compute, key management, monitoring).
• Strong experience in distributed system design, development, and deployment using Agile/DevOps practices.
• Experience with CI/CD pipelines (GitHub Actions or similar).
• Experience implementing monitoring and observability using Prometheus, Grafana, or Databricks-native solutions.
• Good communication skills, excellent teamwork, and the ability to mentor and develop more junior developers, including participating in constructive code reviews.

Preferred Skills:
• Experience in multi-cloud environments (AWS/GCP) is a bonus.
• Experience working in highly regulated environments (finance, healthcare, etc.) is desirable.
• Experience with Databricks REST APIs and SDKs.
• Knowledge of MLflow, Mosaic AI, and MLOps tooling.
• Experience working with teams using Scrum, Kanban, or other Agile practices.
• Proficiency with standard Linux command-line and debugging tools.
• Azure or AWS certifications.
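To give a flavor of the "self-service capabilities using Python and APIs" the role describes, here is a minimal sketch of cluster provisioning against the Databricks REST API. The endpoint path `/api/2.0/clusters/create` is part of the documented Clusters API; everything else (the tag keys, naming convention, node type, and runtime version) is an illustrative assumption, not something specified in this posting.

```python
"""Hedged sketch: self-service Databricks cluster provisioning.

Assumes a hypothetical platform convention where every cluster carries
team/env/managed_by tags for governance. Only the endpoint path comes
from the documented Databricks Clusters API.
"""
import json
from urllib import request


def build_cluster_spec(team: str, env: str, max_workers: int = 4) -> dict:
    """Assemble a cluster payload enforcing (hypothetical) naming and
    tagging conventions across dev/qa/prod environments."""
    if env not in {"dev", "qa", "prod"}:
        raise ValueError(f"unknown environment: {env}")
    return {
        "cluster_name": f"{team}-{env}-ml",
        "spark_version": "14.3.x-scala2.12",   # example LTS runtime
        "node_type_id": "Standard_DS3_v2",     # example Azure node type
        "autoscale": {"min_workers": 1, "max_workers": max_workers},
        "custom_tags": {"team": team, "env": env, "managed_by": "platform-sre"},
    }


def create_cluster_request(host: str, token: str, spec: dict) -> request.Request:
    """Prepare the authenticated POST to the Clusters API; the caller
    decides when (and whether) to actually send it."""
    return request.Request(
        f"{host}/api/2.0/clusters/create",
        data=json.dumps(spec).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

In practice a team would more likely drive this through Terraform's Databricks provider or the official SDK; the raw-HTTP form above just makes the API surface explicit.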