

Hadoop Developer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Hadoop Developer; the contract length and pay rate are unspecified. Located in Austin, TX or Sunnyvale, CA, the role centers on Apache Druid management, Airflow orchestration, and IaC tools such as Terraform and Ansible.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 27, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
California, United States
Skills detailed
#Automation #Terraform #Scala #Logging #Grafana #Hadoop #Prometheus #Airflow #DataPipeline #ApacheAirflow #Monitoring #InfrastructureAsCode #DataIngestion #Deployment #Ansible #DataOrchestration
Role description
Position: Apache Druid
Location: Austin, TX / Sunnyvale, CA
Contract
Job Description:
Reliability and Availability:
β’ Ensure high availability and reliability of production systems.
β’ Implement and maintain robust monitoring and alerting systems.
β’ Participate in on-call rotations to respond to incidents and outages.
β’ Conduct post-incident reviews and implement preventative measures.
Automation and Infrastructure as Code (IaC):
β’ Automate infrastructure provisioning, configuration, and deployment using IaC tools (e.g., Terraform, Ansible).
β’ Develop and maintain CI/CD pipelines to streamline software releases.
β’ Optimize and automate data pipelines and workflows.
Apache Druid Management:
β’ Manage and optimize Apache Druid clusters for high performance and scalability.
β’ Troubleshoot Druid performance issues and implement solutions.
β’ Design and implement Druid data ingestion and query optimization strategies.
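A Druid native-batch ingestion spec is plain JSON submitted to the Overlord's task endpoint. As a minimal sketch of what the bullet above involves, the following Python builds such a spec; the datasource name, input paths, and dimensions are illustrative placeholders, not values from this posting:

```python
import json

# Minimal native-batch ingestion spec; datasource, input paths, and
# dimensions are illustrative placeholders.
ingestion_spec = {
    "type": "index_parallel",
    "spec": {
        "ioConfig": {
            "type": "index_parallel",
            "inputSource": {"type": "local", "baseDir": "/data",
                            "filter": "events-*.json"},
            "inputFormat": {"type": "json"},
        },
        "dataSchema": {
            "dataSource": "events",
            "timestampSpec": {"column": "ts", "format": "iso"},
            "dimensionsSpec": {"dimensions": ["user_id", "action"]},
            "granularitySpec": {"segmentGranularity": "day",
                                "queryGranularity": "hour"},
        },
        "tuningConfig": {"type": "index_parallel",
                         "maxNumConcurrentSubTasks": 2},
    },
}

# In a live cluster this payload would be POSTed to the Overlord at
# /druid/indexer/v1/task; here we only serialize it.
payload = json.dumps(ingestion_spec)
```

Query optimization then follows from choices made here, e.g. coarser `queryGranularity` and well-chosen dimensions reduce segment size and scan cost.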
Apache Airflow Orchestration:
β’ Design, develop, and maintain Airflow DAGs for data orchestration and workflow automation.
β’ Monitor Airflow performance and troubleshoot issues.
β’ Optimize Airflow workflows for efficiency and reliability.
Monitoring and Logging:
β’ Implement and maintain comprehensive monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack).
β’ Analyze metrics and logs to identify performance bottlenecks and potential issues.
β’ Create and maintain dashboards for visualizing system health and performance.
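As a small illustration of the metric analysis above, here is a stdlib-only Python sketch that parses Prometheus text-exposition samples and flags values over a threshold; the metric names, labels, and the 0.9 limit are made up for the example:

```python
# Parse Prometheus text-exposition lines (stdlib only) and flag samples
# above a threshold. Metric names and threshold are illustrative.
SAMPLE = """\
# HELP druid_jvm_heap_used_ratio Heap usage ratio
druid_jvm_heap_used_ratio{service="historical"} 0.93
druid_jvm_heap_used_ratio{service="broker"} 0.41
airflow_dag_run_duration_seconds{dag="daily"} 1820.0
"""

def parse_samples(text):
    """Yield (metric, labels, value) from exposition-format lines."""
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and HELP/TYPE comments
        name_part, value = line.rsplit(" ", 1)
        if "{" in name_part:
            metric, labels = name_part.split("{", 1)
            labels = labels.rstrip("}")
        else:
            metric, labels = name_part, ""
        yield metric, labels, float(value)

def over_threshold(text, metric, limit):
    """Return labels of samples of `metric` whose value exceeds `limit`."""
    return [labels for m, labels, v in parse_samples(text)
            if m == metric and v > limit]

hot = over_threshold(SAMPLE, "druid_jvm_heap_used_ratio", 0.9)
print(hot)  # -> ['service="historical"']
```

In practice this kind of check lives in Prometheus alerting rules rather than ad-hoc scripts, but the parsing shape is the same.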
Collaboration and Communication:
β’ Collaborate with development, data, and operations teams to ensure smooth operations.
β’ Communicate effectively with stakeholders regarding system status and incidents.
• Document processes and procedures.