Vision-Language Models (VLMs)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Scientist specializing in Vision-Language Models (VLMs) with 10+ years of experience. The contract is on-site in San Ramon, CA or Waukesha, WI, offering a competitive pay rate. Key skills include Python, AWS, and healthcare domain expertise.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
August 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
πŸ“ - Location detailed
Waukesha, WI
🧠 - Skills detailed
#Deep Learning #MLflow #OpenCV (Open Source Computer Vision Library) #SageMaker #Leadership #Keras #Deployment #AI (Artificial Intelligence) #ML (Machine Learning) #Object Detection #Python #Computer Science #Docker #Azure #TensorFlow #Microservices #AWS (Amazon Web Services) #Cloud #Image Processing #IoT (Internet of Things) #Compliance #Data Science #Classification #PyTorch #Datasets #ETL (Extract, Transform, Load) #Data Extraction #AWS SageMaker #Scala
Role description
Role: Senior Data Scientist with expertise in Vision-Language Models (VLMs)
Experience: 10+ years
Location: San Ramon, CA or Waukesha, WI (on-site)

Key Responsibilities
• VLM Development, Pose Estimation & Deployment:
  • Design, train, and deploy efficient Vision-Language Models (e.g., VILA, Isaac Sim) for multimodal applications including image captioning, visual search, document understanding, pose understanding, and pose comparison.
  • Develop and manage Digital Twin frameworks using AWS IoT TwinMaker, SiteWise, and Greengrass to simulate and optimize real-world systems.
  • Develop digital avatars using AWS services integrated with 3D rendering engines, animation pipelines, and real-time data feeds.
  • Explore cost-effective methods such as knowledge distillation, modal-adaptive pruning, and LoRA fine-tuning to optimize training and inference (a LoRA sketch follows the responsibilities list below).
  • Implement scalable pipelines for training and testing VLMs on cloud platforms (AWS services such as SageMaker, Bedrock, Rekognition, Comprehend, and Textract).
• NVIDIA Platforms: The candidate should bring a blend of technical expertise, tool proficiency, and domain-specific knowledge of the NVIDIA platforms below:
  • NIM (NVIDIA Inference Microservices): containerized VLM deployment.
  • NeMo Framework: training and scaling VLMs across thousands of GPUs.
  • Supported models: LLaVA, LLaMA 3.2, Nemotron Nano VL, Qwen2-VL, Gemma 3.
  • DeepStream SDK: integrates pose models such as TRTPose and OpenPose; real-time video analytics and multi-stream processing.
• Multimodal AI Solutions:
  • Develop solutions that integrate vision and language capabilities for applications such as image-text matching, visual question answering (VQA), and document data extraction.
  • Leverage interleaved image-text datasets and advanced techniques (e.g., cross-attention layers) to enhance model performance.
• Image Processing and Computer Vision:
  • Develop solutions that integrate vision-based deep learning models for applications such as live video stream integration and processing, object detection, image segmentation, pose estimation, object tracking, image classification, and defect detection on medical X-ray images.
  • Knowledge of real-time video analytics, multi-camera tracking, and object detection.
  • Train and test deep learning models on customized data.
• Healthcare Domain Expertise:
  • Apply VLMs to healthcare-specific use cases such as medical imaging analysis, position detection, and motion detection and measurement.
  • Ensure compliance with healthcare standards while handling sensitive data.
• Efficiency Optimization:
  • Evaluate trade-offs between model size, performance, and cost using techniques such as elastic visual encoders or lightweight architectures.
  • Benchmark different VLMs (e.g., GPT-4V, Claude 3.5, Nova Lite) for accuracy, speed, and cost-effectiveness on specific tasks.
  • Benchmark inference on GPU vs. CPU (a timing sketch appears at the end of this posting).
• Collaboration & Leadership:
  • Collaborate with cross-functional teams, including engineers and domain experts, to define project requirements.
  • Mentor junior team members and provide technical leadership on complex projects.
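For illustration, a minimal sketch of the LoRA fine-tuning mentioned above, using Hugging Face PEFT with an assumed LLaVA-style checkpoint; the model ID and target_modules names are illustrative assumptions, and the correct projection names depend on the specific checkpoint:

    # Minimal LoRA fine-tuning sketch (assumptions: LLaVA-style checkpoint,
    # attention projections named q_proj/v_proj; adjust for the real model).
    import torch
    from transformers import AutoProcessor, LlavaForConditionalGeneration
    from peft import LoraConfig, get_peft_model

    model_id = "llava-hf/llava-1.5-7b-hf"  # illustrative checkpoint
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id, torch_dtype=torch.float16
    )
    processor = AutoProcessor.from_pretrained(model_id)

    # Low-rank adapters wrap only the attention projections; the base
    # weights stay frozen, so the trainable parameter count drops to a
    # small fraction of the full model.
    lora_config = LoraConfig(
        r=8,                                  # adapter rank
        lora_alpha=16,                        # adapter scaling
        target_modules=["q_proj", "v_proj"],  # assumed layer names
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% trainable

Training then proceeds with a standard optimizer over the adapter weights only, which is what makes LoRA attractive for the cost-optimization work described above.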
Qualifications
• Education: Master's or Ph.D. in Computer Science, Data Science, Machine Learning, or a related field.
• Experience:
  • 10+ years in machine learning or data science roles with a focus on vision-language models.
  • Proven expertise in deploying production-grade multimodal AI solutions.
  • Experience with self-driving cars and self-navigating robots.
• Technical Skills:
  • Proficiency in Python and ML frameworks (e.g., PyTorch, TensorFlow).
  • Hands-on experience with VLMs such as VILA, Isaac Sim, or VSS.
  • Familiarity with cloud platforms such as AWS SageMaker or Azure ML Studio for scalable AI deployment.
  • Image libraries: OpenCV, PIL, scikit-image.
  • Frameworks: PyTorch, TensorFlow, Keras.
  • CUDA, cuDNN.
  • 3D vision: point clouds, depth estimation, LiDAR.
• Domain Knowledge:
  • Understanding of medical datasets (e.g., imaging data) and healthcare regulations.
• Soft Skills:
  • Strong problem-solving skills with the ability to optimize models for real-world constraints.
  • Excellent communication skills to explain technical concepts to diverse stakeholders.

Preferred Technologies
• Vision-Language Models: VILA, Isaac Sim, EfficientVLM
• Cloud Platforms: AWS SageMaker, Bedrock
• Optimization Techniques: LoRA fine-tuning, modal-adaptive pruning
• Multimodal Techniques: Cross-attention layers, interleaved image-text datasets
• MLOps Tools: Docker, MLflow
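For the GPU-vs-CPU benchmarking responsibility, a minimal PyTorch timing sketch; the ResNet-50 stand-in and batch size are illustrative assumptions, not part of the original posting:

    # GPU vs. CPU inference timing sketch; resnet50 stands in for any
    # deployed vision model, and all sizes are illustrative.
    import time
    import torch
    import torchvision.models as models

    def benchmark(model, x, device, n_iter=50):
        """Average seconds per forward pass on the given device."""
        model = model.to(device).eval()
        x = x.to(device)
        with torch.no_grad():
            for _ in range(5):                # warm-up passes
                model(x)
            if device.type == "cuda":
                torch.cuda.synchronize()      # flush queued kernels
            start = time.perf_counter()
            for _ in range(n_iter):
                model(x)
            if device.type == "cuda":
                torch.cuda.synchronize()      # wait for the last kernel
        return (time.perf_counter() - start) / n_iter

    model = models.resnet50(weights=None)  # stand-in vision model
    x = torch.randn(8, 3, 224, 224)        # illustrative batch
    print(f"CPU: {benchmark(model, x, torch.device('cpu')) * 1e3:.1f} ms/batch")
    if torch.cuda.is_available():
        print(f"GPU: {benchmark(model, x, torch.device('cuda')) * 1e3:.1f} ms/batch")

The explicit torch.cuda.synchronize() calls matter here: CUDA launches are asynchronous, so timing without them would measure kernel submission rather than execution.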