Infotree Global Solutions

DevOps Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a DevOps Engineer with a contract length of "unknown," offering a pay rate of "$XX/hour." Candidates must be US Citizens or GC Holders, possess strong Python and Linux skills, and have experience with Jenkins, CI/CD, and infrastructure automation.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#BI (Business Intelligence) #Grafana #Ansible #Shell Scripting #Microsoft Power BI #Computer Science #Prometheus #Security #Observability #Visualization #GitHub #Monitoring #Databricks #Spark (Apache Spark) #Deployment #Linux #Python #DevOps #Documentation #Jenkins #Kubernetes #Scala #Kafka (Apache Kafka) #Scripting #ETL (Extract, Transform, Load) #Debugging #Automation #Docker
Role description
US Citizens or GC Holders only

Role Summary:
We are looking for a hands-on DevOps / Software Automation Engineer to design, build, and operate an end-to-end automated CPU performance benchmarking platform. This role works closely with CPU performance engineers to automate manual benchmarking workflows, enable repeatable and scalable performance runs, and deliver fast, reliable performance insights across multiple benchmark suites. You will be a critical force multiplier for performance engineers, owning automation, CI/CD, infrastructure, execution workflows, monitoring, and troubleshooting, so performance teams can focus on analysis rather than operational overhead.

Key Responsibilities:

Performance Benchmarking Automation
• Design and implement fully automated workflows for CPU performance benchmarks (setup, execution, data collection, validation, and reporting).
• Translate manual performance engineering processes into scalable automation pipelines.
• Enable one-click or CI-triggered benchmark execution with standardized, repeatable results.
• Automate log parsing, metrics extraction, and data structuring for downstream analysis.

CI/CD & Execution Orchestration
• Build and maintain CI/CD pipelines (Jenkins/GitHub) for benchmark execution and infrastructure workflows.
• Integrate automation with versioned benchmark configurations, scripts, and artifacts.
• Ensure reproducibility, traceability, and auditability of performance runs.

Infrastructure & Platform Engineering
• Automate bare-metal and virtual server provisioning, OS deployment, and system configuration at scale.
• Manage Linux-based environments optimized for CPU performance testing.
• Containerize services (Docker) and orchestrate where applicable (Kubernetes).

Reliability, Monitoring & Support
• Monitor platform health, benchmark execution, and infrastructure using observability tools.
• Actively unblock performance engineers during automated runs by debugging failures, identifying root causes, and applying quick fixes or workarounds.
• Perform capacity planning and scale systems to support increasing benchmark demand.

Data & Insights Enablement
• Process and structure benchmark data using Python, Spark, or Databricks.
• Support dashboards and reporting (e.g., Power BI) that provide quick performance insights to engineering stakeholders.

Collaboration & Documentation
• Work day to day with CPU performance engineers to understand workflows and continuously improve automation.
• Document architectures, workflows, execution guides, and troubleshooting procedures.
• Partner with internal IT teams as needed for networking, hardware, and security alignment.

Required Qualifications:
• Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
• Strong Python and Linux shell scripting skills.
• Hands-on experience with Jenkins, CI/CD pipelines, and GitHub.
• Solid understanding of Linux systems, OS tuning, and server environments.
• Experience automating infrastructure using Ansible or similar tools.
• Ability to debug complex system, automation, or execution issues independently.
• Strong communication skills; able to work closely with non-software performance engineers.

Preferred / Nice to Have:
• Experience with CPU or system performance benchmarking (SPEC, internal benchmarks, stress tools, etc.).
• Familiarity with Spark, Kafka, Databricks, or large-scale log processing.
• Experience with Docker and Kubernetes.
• Knowledge of monitoring and observability tools (Prometheus, Grafana, Zabbix, New Relic).
• Exposure to data visualization and reporting tools (Power BI).

What Success Looks Like:
• Performance engineers run benchmarks through automation instead of manual steps.
• Benchmark failures are quickly diagnosed and resolved with minimal downtime.
• Benchmark results are consistent, repeatable, and easy to consume.
• The automation platform scales seamlessly as new CPU platforms and benchmarks are added.
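To illustrate the log parsing, metrics extraction, and validation work this role involves, here is a minimal Python sketch. The log format, field names, and sample values are hypothetical (real suites such as SPEC or internal benchmarks each need their own parser); it only shows the general shape of turning raw benchmark output into structured records for downstream analysis.

```python
import json
import re

# Hypothetical key=value log format; illustrative only.
SAMPLE_LOG = """\
benchmark=spec_int_rate start=2026-01-09T10:00:00Z
score=412.7 runtime_s=3841
host=bench-node-03 cpu_model=ExampleCPU-64C
status=PASS
"""

METRIC_RE = re.compile(r"(\w+)=(\S+)")
NUMERIC_FIELDS = {"score", "runtime_s"}


def parse_benchmark_log(text: str) -> dict:
    """Extract key=value pairs into a structured record for downstream analysis."""
    record = {}
    for key, value in METRIC_RE.findall(text):
        # Coerce known numeric fields so dashboards receive numbers, not strings.
        record[key] = float(value) if key in NUMERIC_FIELDS else value
    return record


def validate(record: dict) -> bool:
    """Reject incomplete or failed runs before they reach reporting."""
    return record.get("status") == "PASS" and "score" in record


if __name__ == "__main__":
    result = parse_benchmark_log(SAMPLE_LOG)
    if validate(result):
        print(json.dumps(result, indent=2))
```

In a real pipeline a script like this would run as a post-execution CI stage, with the structured records fed into Spark/Databricks processing or a Power BI dashboard.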