

Jobs via Dice
AI Optimization Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AI Optimization Engineer in Jersey City, NJ, on a contract basis. Key skills include Python, deep learning frameworks, HPC environments, and model optimization. Experience with GPU clusters, SLURM, Docker, and performance monitoring tools is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#Data Analysis #Grafana #Jenkins #Linux #ML (Machine Learning) #NLP (Natural Language Processing) #API (Application Programming Interface) #Docker #Kubernetes #Scala #Neural Networks #Matplotlib #Prometheus #NumPy #MLflow #Visualization #Terraform #DevOps #Keras #PyTorch #TensorFlow #Plotly #Deployment #Unsupervised Learning #Deep Learning #AI (Artificial Intelligence) #Microservices #ETL (Extract, Transform, Load) #Model Optimization #Supervised Learning #Flask #REST (Representational State Transfer) #GitHub #Monitoring #Cloud #Python
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Simple Solutions, is seeking the following. Apply via Dice today!
Position: AI Optimization Engineer
Location: Jersey City, NJ (Onsite)
Employment mode: Contract position
We are seeking an experienced AI Optimization Engineer to support large-scale AI/ML and Generative AI workloads for an enterprise environment. This role focuses on optimizing, deploying, and managing machine learning and large language models (LLMs) on GPU-accelerated HPC infrastructure. The ideal candidate will have strong experience in Python-based machine learning, deep learning frameworks, model optimization techniques, and scalable AI infrastructure.
The engineer will work closely with AI, infrastructure, and DevOps teams to design efficient model training and inference pipelines, implement SLURM-based workload orchestration, and deploy containerized ML solutions in production environments. Responsibilities include optimizing model performance using techniques such as pruning, quantization, and knowledge distillation, managing inference workflows using Triton Inference Server, and monitoring system performance using Prometheus and Grafana.
This role requires hands-on experience with HPC environments, GPU clusters, containerization technologies, and Linux system administration, along with strong knowledge of machine learning algorithms, deep learning architectures, and modern AI development tools. Experience with cloud platforms, vector embeddings, and enterprise-scale AI deployments is highly preferred.
The AI Optimization Engineer must have strong experience in Python-based machine learning and deep learning, including NumPy, scikit-learn, TensorFlow, PyTorch, and Keras, with hands-on knowledge of supervised and unsupervised learning, neural networks, transformer-based models, NLP, CNNs, and Generative AI concepts. The role requires expertise in AI infrastructure and optimization, including HPC environments, GPU clusters, SLURM workload management, Triton Inference Server, TensorRT-LLM, and model optimization techniques such as pruning, quantization, and distillation for scalable LLM deployment.
Candidates should also have experience with DevOps and deployment tools such as Docker, Kubernetes, MLflow, Terraform, Jenkins, GitHub, and Hugging Face, along with strong skills in performance monitoring using Prometheus and Grafana. Additional requirements include Flask API development, Linux administration (RHEL/CentOS), container runtimes such as Enroot, Pyxis, and Podman, and experience with data analysis and visualization tools such as Plotly, Seaborn, and Matplotlib.
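To make the quantization requirement above concrete, here is a minimal sketch of post-training symmetric int8 quantization in pure Python. This is an illustration of the concept only, not the tooling the role would actually use; production LLM work would rely on framework support such as PyTorch's quantization APIs or TensorRT-LLM.

```python
# Minimal sketch of post-training symmetric int8 quantization, one of the
# model-optimization techniques named in the role description. Pure Python
# for clarity; real workloads would use framework tooling instead.

def quantize_int8(weights):
    """Map float weights onto int8 using a single symmetric scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    quantized = [max(-128, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float weights from int8 values."""
    return [q * scale for q in quantized]

weights = [0.82, -1.27, 0.03, 0.5]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

The single shared scale keeps the sketch short; per-channel scales, zero points, and calibration data are what distinguish production quantization from this toy version.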
Core Responsibilities
• Design and optimize AI/ML workloads on GPU-based HPC clusters.
• Deploy and manage large language models (LLMs) in scalable production environments.
• Implement model optimization techniques including pruning, quantization, and knowledge distillation.
• Develop and manage automated job scheduling using SLURM with REST and Flask APIs.
• Deploy ML models using containerized microservices architectures.
• Monitor system performance using Prometheus and Grafana.
• Optimize inference pipelines using Triton Inference Server and TensorRT-LLM.
• Conduct exploratory data analysis and model performance evaluation.
• Collaborate with infrastructure and ML teams to improve scalability and efficiency.
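As a hypothetical illustration of the SLURM-based orchestration and containerized deployment responsibilities above, a minimal GPU batch script might look like the following. Every name here (job name, partition, container image, model path) is a placeholder, not a detail from the posting; the `--container-image` flag assumes the Pyxis/Enroot plugin mentioned in the requirements.

```shell
#!/bin/bash
#SBATCH --job-name=llm-inference      # placeholder job name
#SBATCH --partition=gpu               # cluster-specific partition name
#SBATCH --gres=gpu:1                  # request one GPU
#SBATCH --time=01:00:00               # one-hour wall-clock limit
#SBATCH --output=%x-%j.out            # log file named from job name and ID

# Launch a containerized Triton server via the Pyxis srun plugin
# (image tag and model-repository path are illustrative).
srun --container-image=nvcr.io/nvidia/tritonserver:24.05-py3 \
     tritonserver --model-repository=/models
```

In practice such scripts are often generated and submitted programmatically, which is where the REST/Flask API layer named in the responsibilities would sit.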





