Apache Druid

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Apache Druid contractor based in Austin, TX or remote, offering a competitive pay rate. Key skills include Apache Druid management, IaC (Terraform, Ansible), Apache Airflow orchestration, and monitoring tools (Prometheus, Grafana).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
-
🗓️ - Date discovered
August 12, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Deployment #Prometheus #Apache Airflow #Terraform #Airflow #Grafana #Infrastructure as Code (IaC) #Logging #Scala #Data Pipeline #Monitoring #Ansible #Automation #Data Ingestion #Data Orchestration
Role description
Position: Apache Druid
Location: Austin, TX / Remote
Contract

Job Description:

Reliability and Availability:
• Ensure high availability and reliability of production systems.
• Implement and maintain robust monitoring and alerting systems.
• Participate in on-call rotations to respond to incidents and outages.
• Conduct post-incident reviews and implement preventative measures.

Automation and Infrastructure as Code (IaC):
• Automate infrastructure provisioning, configuration, and deployment using IaC tools (e.g., Terraform, Ansible).
• Develop and maintain CI/CD pipelines to streamline software releases.
• Optimize and automate data pipelines and workflows.

Apache Druid Management:
• Manage and optimize Apache Druid clusters for high performance and scalability.
• Troubleshoot Druid performance issues and implement solutions.
• Design and implement Druid data ingestion and query optimization strategies.

Apache Airflow Orchestration:
• Design, develop, and maintain Airflow DAGs for data orchestration and workflow automation.
• Monitor Airflow performance and troubleshoot issues.
• Optimize Airflow workflows for efficiency and reliability.

Monitoring and Logging:
• Implement and maintain comprehensive monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack).
• Analyze metrics and logs to identify performance bottlenecks and potential issues.
• Create and maintain dashboards for visualizing system health and performance.

Collaboration and Communication:
• Collaborate with development, data, and operations teams to ensure smooth operations.
• Communicate effectively with stakeholders regarding system status and incidents.
• Document processes and procedures.
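To give a concrete flavor of the "Druid data ingestion" responsibility above, here is a minimal sketch of a Druid native batch ingestion spec assembled in Python. The datasource name, input path, columns, and metrics are all hypothetical placeholders, not details from this posting; the overall spec shape follows Druid's `index_parallel` native batch format.

```python
import json


def build_ingestion_spec(datasource: str, base_dir: str) -> dict:
    """Assemble a minimal Druid native batch (index_parallel) ingestion spec.

    All field values below are illustrative assumptions; a real spec would
    match the cluster's actual schema and input sources.
    """
    return {
        "type": "index_parallel",
        "spec": {
            "dataSchema": {
                "dataSource": datasource,
                # Which column carries the event timestamp, and its format.
                "timestampSpec": {"column": "ts", "format": "iso"},
                "dimensionsSpec": {"dimensions": ["user_id", "country", "page"]},
                "granularitySpec": {
                    "type": "uniform",
                    "segmentGranularity": "DAY",   # one segment per day
                    "queryGranularity": "HOUR",    # roll up rows to the hour
                    "rollup": True,
                },
                "metricsSpec": [
                    {"type": "count", "name": "events"},
                    {"type": "longSum", "name": "bytes_total", "fieldName": "bytes"},
                ],
            },
            "ioConfig": {
                "type": "index_parallel",
                "inputSource": {"type": "local", "baseDir": base_dir, "filter": "*.json"},
                "inputFormat": {"type": "json"},
            },
            "tuningConfig": {"type": "index_parallel", "maxNumConcurrentSubTasks": 4},
        },
    }


if __name__ == "__main__":
    # Serialize the spec as it would be POSTed to the Overlord's task endpoint.
    print(json.dumps(build_ingestion_spec("web_events", "/data/events"), indent=2))
```

In practice this JSON would be submitted to the Overlord's task API and tuned (segment granularity, rollup, sub-task parallelism) against the cluster's actual query patterns.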
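The monitoring and alerting duties could likewise be sketched with a basic liveness probe. Druid processes expose a `/status/health` endpoint that returns `true` when the process is healthy; the host names and port below are hypothetical. A real deployment would scrape this kind of signal with Prometheus rather than ad-hoc scripts, but the sketch shows the underlying check.

```python
import json
import urllib.request


def druid_health_url(host: str, port: int = 8082) -> str:
    """Build the health-check URL for a Druid process (8082 is the default Broker port)."""
    return f"http://{host}:{port}/status/health"


def check_druid_health(host: str, port: int = 8082, timeout: float = 5.0) -> bool:
    """Return True only if the process answers 200 with a JSON body of `true`.

    Any network failure (refused connection, timeout, DNS error) is treated
    as unhealthy rather than raised, so the probe is safe to call in a loop.
    """
    try:
        with urllib.request.urlopen(druid_health_url(host, port), timeout=timeout) as resp:
            return resp.status == 200 and json.load(resp) is True
    except OSError:
        return False
```

A monitoring loop or alert rule would call `check_druid_health` per node and page on repeated failures; the hosts probed here are placeholders.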