

CUDA Python
Featured Role | Apply direct with Data Freelance Hub
This role is for an ML Performance Engineer specializing in CUDA Python, offering a 6-month remote contract with 30% travel. Key skills include strong pre-sales abilities, low-level GPU knowledge, and experience with CUDA debugging tools and distributed GPU training algorithms.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 26, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #ML (Machine Learning) #Storage #Python #Debugging
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, United IT Solutions, is seeking the following. Apply via Dice today!
ML Performance Engineer CUDA Python
Duration: 6-month contract with the likelihood of extension
Location: Remote, but candidates must be willing to travel to customer sites.
• Must be willing to travel
• Must have strong pre-sales abilities, e.g. presentation and communication skills
• Must be willing to help train WWT employees and customers
We have 3 openings for this role. We need very strong technical CUDA Python ML engineers. They can sit anywhere in the US but must be willing to travel 30% of the time.
This is a very client-facing role, so professionalism and presentation skills are key.
Your part here is optimizing the performance of our models, both training and inference. We care about efficient large-scale training, low-latency inference in real-time systems, and high-throughput inference in research. Part of this is improving straightforward CUDA, but the interesting part needs a whole-systems approach, covering storage systems, networking, and host- and GPU-level considerations. Zooming in, we also want to ensure our platform makes sense even at the lowest level: is all that throughput actually goodput? Does loading that vector from the L2 cache really take that long? (A rough sketch of the throughput-vs-goodput distinction follows the requirements list below.)
• An understanding of modern ML techniques and toolsets
• The experience and systems knowledge required to debug a training run's performance end to end
• Low-level GPU knowledge of PTX, SASS, warps, cooperative groups, Tensor Cores, and the memory hierarchy
• Debugging and optimization experience using tools like CUDA-GDB, Nsight Systems, and Nsight Compute
• Library knowledge of Triton, CUTLASS, CUB, Thrust, cuDNN, and cuBLAS
• Intuition about the latency and throughput characteristics of CUDA graph launch, tensor core arithmetic, warp-level synchronization, and asynchronous memory loads (see the graph-capture sketch after this list)
• Background in InfiniBand, RoCE, GPUDirect, PXN, rail optimization, and NVLink, and how to use these networking technologies to link up GPU clusters
• An understanding of the collective algorithms supporting distributed GPU training in NCCL or MPI (see the all-reduce sketch after this list)
• An inventive approach and the willingness to ask hard questions about whether we're taking the right approaches and using the right tools
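As a concrete illustration of the throughput-vs-goodput question above, here is a minimal back-of-the-envelope sketch in Python. Every number is hypothetical; the point is only the distinction between what a per-step profile reports and how much kept work a run produces per wall-clock second once stalls and restarts are counted.

# Minimal goodput-vs-throughput sketch; all numbers below are made up.
tokens_per_step = 4_000_000      # global batch size in tokens (hypothetical)
avg_step_time_s = 0.85           # measured per-step wall time (hypothetical)
wall_clock_s = 24 * 3600         # total run time, including stalls and restarts
steps_kept = 95_000              # steps whose work survived into a saved checkpoint

raw_throughput = tokens_per_step / avg_step_time_s      # what the per-step profile shows
goodput = steps_kept * tokens_per_step / wall_clock_s   # kept work per wall-clock second

print(f"raw throughput: {raw_throughput:,.0f} tokens/s")
print(f"goodput:        {goodput:,.0f} tokens/s ({goodput / raw_throughput:.1%} of raw)")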
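On the CUDA graph launch point, the sketch below uses PyTorch's CUDA Graphs API (torch.cuda.CUDAGraph) to capture a small forward pass and replay it, which is one common way to amortize kernel-launch overhead for launch-bound inference. The model and shapes are placeholders, not anything specific to this role.

import torch

# Tiny stand-in model; CUDA Graphs require static input/output buffers.
model = torch.nn.Sequential(torch.nn.Linear(1024, 1024), torch.nn.ReLU()).cuda().eval()
static_in = torch.randn(8, 1024, device="cuda")

with torch.no_grad():
    # Warm up on a side stream so lazy initialization isn't captured into the graph.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        for _ in range(3):
            model(static_in)
    torch.cuda.current_stream().wait_stream(s)

    # Capture one forward pass, then replay it against refreshed input data.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_out = model(static_in)

static_in.copy_(torch.randn(8, 1024, device="cuda"))  # overwrite the captured input buffer in place
g.replay()                                             # one replay relaunches all captured kernels
torch.cuda.synchronize()
print(static_out.shape)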
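And on the collectives point, a minimal NCCL all-reduce via torch.distributed is sketched below, assuming it is launched with torchrun on a multi-GPU host; it only shows the API shape, not a production setup.

import torch
import torch.distributed as dist

def main():
    # torchrun sets RANK, WORLD_SIZE, MASTER_ADDR, and MASTER_PORT in the environment.
    dist.init_process_group(backend="nccl")
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())

    # Each rank contributes (rank + 1); after a SUM all-reduce every rank holds the same total.
    x = torch.full((4,), float(rank + 1), device="cuda")
    dist.all_reduce(x, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: {x.tolist()}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()

Launched as, for example, torchrun --nproc_per_node=4 allreduce_demo.py (the filename is just a placeholder).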
Best Regards,
Saaikumargoud
+1