

Sibitalent Corp
Senior Machine Learning Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Machine Learning Engineer focused on LLM fine-tuning for Verilog/RTL applications in San Jose, California. The contract is hybrid, with a pay rate of $60/hr. Requires 10+ years engineering experience, 5+ in ML/AI, and deep AWS expertise.
Country: United States
Currency: $ USD
Day rate: $480
Date: October 29, 2025
Duration: Unknown
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: San Jose, CA
Skills detailed: #Leadership #EC2 #IAM (Identity and Access Management) #PyTorch #Compliance #Cloud #Hugging Face #AutoScaling #Batch #Transformers #Deployment #Data Security #ML (Machine Learning) #ECR (Elastic Container Registry) #IP (Intellectual Property) #Licensing #Scala #Regression #Security #ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #SageMaker #VPC (Virtual Private Cloud)
Role description
Hi,
Hope you are doing well.
IMMEDIATE INTERVIEW: Staff Machine Learning Engineer - LLM Fine-Tuning (Verilog/RTL Applications) in San Jose, California - Hybrid (local candidate required) - Rate: $60/hr W2 max
Please find the job details below and kindly reply if you're interested in learning more about this position.
Job Title: Staff Machine Learning Engineer - LLM Fine-Tuning (Verilog/RTL Applications)
Location: San Jose, California - Hybrid (local candidate required)
Minimum Qualifications
• 10+ years engineering experience; 5+ in ML/AI or large-scale distributed systems
• 3+ years with transformers/LLMs in production
• Strong hands-on skills: PyTorch, Hugging Face (Transformers/PEFT/TRL), distributed training (DeepSpeed/FSDP), quantization-aware fine-tuning (a minimal LoRA sketch follows this list)
• Deep AWS expertise: Bedrock, SageMaker, S3, EC2/EKS/ECR, IAM, VPC, CloudWatch/CloudTrail, PrivateLink, Secrets Manager, Step Functions, Batch
• Solid engineering fundamentals: testing, CI/CD, performance tuning
• Excellent communication and leadership skills
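For context on the fine-tuning stack named above (PyTorch, Hugging Face Transformers/PEFT), here is a minimal LoRA fine-tuning sketch. The base checkpoint, dataset file, and hyperparameters are illustrative placeholders, not details taken from this posting.

# Minimal LoRA fine-tuning sketch (Hugging Face Transformers + PEFT).
# The checkpoint name, dataset file, and hyperparameters are placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "your-org/base-code-llm"  # hypothetical base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Attach low-rank adapters to the attention projections; only these train.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"],
                      task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # adapters are a small fraction of weights

# Instruction-style Verilog examples, one JSON object with a "text" field per line.
ds = load_dataset("json", data_files="rtl_instructions.jsonl", split="train")
ds = ds.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=1024),
            remove_columns=ds.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-verilog", num_train_epochs=1,
                           per_device_train_batch_size=2,
                           gradient_accumulation_steps=8,
                           learning_rate=2e-4, logging_steps=10),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-verilog-adapter")  # saves adapter weights only

QLoRA would follow the same pattern with a 4-bit quantized base model, and DeepSpeed or FSDP would wrap the same Trainer for multi-GPU runs.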
Preferred Qualifications
• Familiarity with Verilog/SystemVerilog/RTL workflows and EDA tools
• Experience with AST-aware or grammar-constrained code modeling
• RAG over code/specs, tool-use/function-calling (a toy retrieval sketch follows this list)
• Inference optimization (TensorRT-LLM, KV-cache, speculative decoding)
• Experience with model governance, SOC2/ISO/NIST frameworks
• Data anonymization, DLP, code de-identification for IP protection
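To make the "RAG over code/specs" item concrete, below is a toy retrieval sketch that indexes RTL snippets with TF-IDF and returns the closest matches for a query. The snippets and query are invented; a production pipeline would swap in a code-aware embedding model and a vector store, neither of which this posting specifies.

# Toy retrieval-over-HDL sketch; TF-IDF stands in for a learned code embedder.
# The snippets and query are made-up examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

rtl_snippets = [
    "module counter(input clk, input rst, output reg [7:0] count); ...",
    "module fifo #(parameter DEPTH = 16)(input clk, input wr_en, ...); ...",
    "always @(posedge clk or posedge rst) begin if (rst) count <= 0; ...",
]

# Token pattern keeps identifiers and a few HDL symbols together.
vectorizer = TfidfVectorizer(token_pattern=r"[A-Za-z_0-9#@\[\]<=]+")
doc_matrix = vectorizer.fit_transform(rtl_snippets)

def retrieve(query: str, k: int = 2):
    """Return the k snippets most similar to the query, best first."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return sorted(zip(scores, rtl_snippets), reverse=True)[:k]

for score, snippet in retrieve("reset logic for an 8-bit counter"):
    print(f"{score:.3f}  {snippet[:60]}")

The retrieved snippets would then be packed into the LLM prompt; tool-use/function-calling layers on top of the same retrieve-then-generate loop.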
Responsibilities
• Lead technical roadmap for Verilog/RTL-based LLM features: model selection, adaptation, evaluation, deployment
• Guide a hands-on team of ML engineers and applied scientists
• Fine-tune/customize LLMs using modern techniques (LoRA/QLoRA, PEFT, instruction tuning, RLAIF)
• Build robust HDL-specific evals (compile/simulate pass rates, pass@k, constrained decoding, synthesis checks); a pass@k sketch follows this list
• Design privacy-first ML pipelines in AWS (Bedrock, SageMaker, VPC, KMS, PrivateLink, IAM, CloudTrail)
• Deploy secure, scalable inference (vLLM, TensorRT-LLM, autoscaling, canary rollouts)
• Develop automated regression and evaluation suites (hallucination checks, constraint validation)
• Integrate LLMs into developer tools, CI/CD, and retrieval over internal HDL repositories
• Manage data security, compliance, anonymization, and licensing workflows
• Mentor the team in LLM best practices and secure-by-default systems
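Since the evaluation responsibility above calls for reporting pass@k over compile/simulate results, here is the standard unbiased pass@k estimator applied to per-problem sample counts; the counts below are invented for illustration.

# Unbiased pass@k estimator over per-problem sampling results.
# n = samples generated per problem, c = samples that pass compile + simulation.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k samples drawn from n is correct."""
    if n - c < k:
        return 1.0  # not enough failing samples to fill all k draws
    return 1.0 - comb(n - c, k) / comb(n, k)

# Per-problem (n, c) counts; these numbers are invented for illustration.
results = [(20, 3), (20, 0), (20, 11), (20, 7)]

for k in (1, 5, 10):
    score = sum(pass_at_k(n, c, k) for n, c in results) / len(results)
    print(f"pass@{k}: {score:.3f}")

In practice c would come from running each generated module through lint, compile, and simulation in CI before counting it as a pass.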





