

DevOps Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a DevOps Engineer in Dallas, TX, on a 12-month contract with an undisclosed pay rate. Key skills include Python, CI/CD, Ansible, Terraform, and experience in HPC environments. Strong Linux and cloud services knowledge is required.
Country: United States
Currency: $ USD
Day rate: Unknown
Date discovered: August 2, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed: #Kubernetes #Prometheus #Public Cloud #Observability #Consulting #Jenkins #Ansible #Azure #Grafana #Puppet #Storage #GCP (Google Cloud Platform) #Terraform #Automation #DevOps #Python #Scala #Linux #Monitoring #Cloud #AWS (Amazon Web Services) #Scripting #ML (Machine Learning) #Programming #Deployment #Docker #GitLab
Role description
At Radiant Digital, we provide IT solutions and consulting services to government agencies and businesses in the USA, Canada, the Middle East, and Southeast Asia. On the federal side, we support agencies such as NASA, the Department of State (DOS), the IRS, ACL, ACF, USDA, and many others, along with numerous state and local government agencies.
We work with industries such as telecom, healthcare, entertainment, and oil and gas, offering solutions designed to meet their specific needs. We focus on improving systems, making better use of data, and updating applications to keep pace with changing markets.
DevOps Engineer - Infrastructure Automation
Location: Dallas, TX
Contract Duration: Initially 12 Months
This is a contractor role focused on building scalable, reliable, and automated infrastructure systems that power our high-performance computing (HPC) and storage environments.
The successful candidate will play a key role in automating the provisioning, configuration, monitoring, and management of our compute and storage infrastructure, which supports multimegawatt CPU and GPU farms used for cutting-edge quantitative research and machine learning workloads. This is an exciting opportunity for someone passionate about infrastructure at scale, automation, and performance, with a forward-thinking mindset and a collaborative attitude.
Key Responsibilities
Design, develop, and maintain automation frameworks for provisioning and managing HPC and storage infrastructure.
Implement infrastructure-as-code and configuration management best practices to ensure consistency and repeatability (a brief illustrative sketch follows this list).
Collaborate with platform teams to improve scalability, reliability, and observability of systems.
Troubleshoot performance, reliability, and scale issues across a variety of infrastructure components.
Drive continuous improvement through automation, performance tuning, and capacity planning.
Support the deployment and operations of distributed systems and services used across the organization.
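Purely as an illustration of the kind of Python-driven, infrastructure-as-code automation these responsibilities describe, here is a minimal sketch that renders a static Ansible inventory for compute and storage groups and applies a playbook with a dry run first. The node names, group names, and the site.yml playbook are hypothetical and not part of this posting; a real environment would source its inventory from a CMDB or cloud API rather than a hard-coded mapping.

```python
# Minimal sketch: render an Ansible inventory and apply a playbook.
# Assumptions (not from the posting): node/group names and "site.yml" are
# placeholders; ansible-playbook is assumed to be installed and on PATH.
import subprocess
from pathlib import Path

NODES = {
    "gpu_farm": ["gpu-node-01", "gpu-node-02"],
    "storage": ["stor-node-01"],
}

def render_inventory(nodes: dict[str, list[str]], path: Path) -> Path:
    """Write a static INI-style Ansible inventory from a group -> hosts mapping."""
    lines = []
    for group, hosts in nodes.items():
        lines.append(f"[{group}]")
        lines.extend(hosts)
        lines.append("")
    path.write_text("\n".join(lines))
    return path

def apply_configuration(inventory: Path, playbook: str = "site.yml") -> None:
    """Dry-run the playbook with --check, then apply it for real."""
    subprocess.run(["ansible-playbook", "-i", str(inventory), playbook, "--check"], check=True)
    subprocess.run(["ansible-playbook", "-i", str(inventory), playbook], check=True)

if __name__ == "__main__":
    inv = render_inventory(NODES, Path("inventory.ini"))
    apply_configuration(inv)
```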
The Ideal Candidate Will Have
Extensive experience in infrastructure engineering, with a focus on compute and storage platforms in large-scale or high-performance environments.
A solid track record of leading and delivering successful technical infrastructure projects.
Strong experience with Python programming, particularly for automation, scripting, and systems integration.
Deep familiarity with CI/CD practices, pipelines, and tools (e.g., Jenkins, GitLab CI, ArgoCD).
Expertise in configuration management and infrastructure-as-code tools such as Ansible, Terraform, and Puppet.
Proven experience in monitoring and observability using tools such as Prometheus, Grafana, the ELK stack, or similar (a brief illustrative sketch follows this list).
Solid knowledge of Linux system administration and networking fundamentals.
Hands-on experience with containerization and orchestration platforms (Docker and Kubernetes).
Familiarity with public cloud services (AWS, Azure, GCP) and hybrid infrastructure models.
Exposure to HPC (High Performance Computing) environments and/or large-scale storage infrastructure is highly desirable.
A proactive and collaborative mindset, with a focus on continuous improvement and innovation.
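As an illustration of the monitoring and observability experience listed above, the following small sketch polls Prometheus's HTTP query API for scrape targets reporting as down. The server URL and the query are assumptions for the example only; a production setup would typically rely on Alertmanager rules rather than ad-hoc polling.

```python
# Minimal sketch: list targets Prometheus currently reports as down.
# Assumptions (not from the posting): a Prometheus server at localhost:9090
# and targets exposing the standard "up" metric.
import requests

PROM_URL = "http://localhost:9090/api/v1/query"

def hosts_down(query: str = "up == 0") -> list[str]:
    """Return instance labels for targets Prometheus reports as down."""
    resp = requests.get(PROM_URL, params={"query": query}, timeout=10)
    resp.raise_for_status()
    results = resp.json()["data"]["result"]
    return [r["metric"].get("instance", "unknown") for r in results]

if __name__ == "__main__":
    for host in hosts_down():
        print(f"ALERT: {host} is not reporting metrics")
```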