Panzer Solutions LLC

DevOps Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a DevOps Engineer in Richmond, VA (Hybrid) on a 12+ month contract, paying a competitive W2 rate. Requires strong Datadog experience, cloud observability expertise, and Datadog certifications. Proficiency in Python, Terraform, and CI/CD tools is essential.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 13, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Richmond, VA
-
🧠 - Skills detailed
#Anomaly Detection #Cloud #AWS CloudWatch #Scripting #Visualization #Elasticsearch #Ansible #Observability #Kubernetes #Jenkins #GCP (Google Cloud Platform) #Logstash #Prometheus #Logging #AWS (Amazon Web Services) #Azure #Datadog #DevOps #Data Analysis #Splunk #Terraform #Python #Vulnerability Management #Security #Monitoring #Grafana #Infrastructure as Code (IaC) #Automation #Programming
Role description
Job Title: DevOps Engineer - Lead
Location: Richmond, VA (Hybrid)
Job Type: 12+ Months Contract
This is a W2 role. Strong experience with Datadog is required: this person will create dashboards in Datadog to measure customer impact and severity for Discover applications.

Key Responsibilities:
• Implement and manage full-stack observability using Datadog, ensuring seamless monitoring across infrastructure, applications, and services.
• Instrument agents for on-premise, cloud, and hybrid environments to enable comprehensive monitoring.
• Design and deploy key service monitoring, including dashboards, monitor creation, SLA/SLO definitions, and anomaly detection with alert notifications.
• Configure and integrate Datadog with third-party services such as ServiceNow, SSO enablement, and other ITSM tools.

Core Responsibilities:
• Design & Implement Solutions: Build and maintain comprehensive observability platforms that provide deep insights into complex systems, incorporating logs, metrics, and traces.
• System Instrumentation: Instrument applications, infrastructure, and services to collect telemetry data using frameworks like OpenTelemetry.
• Data Analysis & Visualization: Develop dashboards, reports, and alerts using tools like Prometheus, Grafana, and Splunk to visualize system performance and detect issues.
• Collaboration: Work with development, SRE, and DevOps teams to integrate observability best practices and align monitoring with business and operational goals.
• Automation: Develop scripts and use Infrastructure as Code (IaC) tools like Ansible and Terraform to automate monitoring configurations and telemetry collection.

Key Skills & Tools:
• Observability Tools: Proficiency in monitoring, logging, and tracing tools, including Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, Datadog, New Relic, and cloud-native solutions like AWS CloudWatch.
• Programming Languages: Expertise in languages such as Python and Go for scripting and automation.
• Infrastructure & Cloud Platforms: Experience with cloud platforms (AWS, GCP, Azure) and container orchestration systems like Kubernetes.
• Infrastructure as Code (IaC): Familiarity with Terraform and Ansible for managing infrastructure and configurations.
• CI/CD & Automation: Experience with CI/CD pipelines and automation tools like Jenkins.
• System & Software Engineering: A strong background in both system operations and software development.
• Ability to optimize cloud agent instrumentation; cloud certifications are a plus.
• Datadog Fundamentals, APM and Distributed Tracing Fundamentals, and Datadog Demo certifications (mandatory)
• Strong understanding of observability concepts (logs, metrics, tracing)
• Expertise in security and vulnerability management in observability
• At least 2 years of experience with cloud-based observability solutions, specializing in monitoring, logging, and tracing across AWS, Azure, and GCP environments.
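The SLA/SLO definition work listed above often reduces to simple error-budget arithmetic. As a minimal sketch (the function names here are hypothetical illustrations, not part of any Datadog API): a 99.9% availability SLO over a 30-day window allows roughly 43 minutes of downtime.

```python
# Hypothetical error-budget helpers -- they illustrate SLO arithmetic only
# and do not call the Datadog API.

def error_budget_minutes(slo_target: float, window_days: int = 30) -> float:
    """Total allowed downtime, in minutes, for an availability SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, observed_availability: float) -> float:
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    allowed = 1.0 - slo_target
    burned = 1.0 - observed_availability
    return 1.0 - burned / allowed

if __name__ == "__main__":
    # e.g. a 99.9% SLO over 30 days, with 99.95% observed availability
    print(error_budget_minutes(0.999))
    print(budget_remaining(0.999, 0.9995))
```

Numbers like these typically feed the SLO widgets and burn-rate monitors a Datadog dashboard would display.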