SRS Consulting Inc

AI/ML Engineer - Direct Client

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AI/ML Engineer, remote for 6 months, at a pay rate of "$XX/hour". Key skills include Python, ML frameworks, API development, and experience with LLM systems. A bachelor's degree and 5+ years of platform engineering experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
January 23, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Data Engineering #Computer Science #Data Science #Monitoring #Docker #AI (Artificial Intelligence) #Scala #ML (Machine Learning) #Statistics #Security #Azure #Model Deployment #Python #Datasets #Microservices #REST (Representational State Transfer) #SageMaker #AWS SageMaker #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #Deployment #Libraries #PyTorch #NLP (Natural Language Processing) #Kubernetes #Automation #Cloud #TensorFlow #Databases
Role description
Job Title: AI/ML Engineer
Location: Remote
Duration: 6 Months
Level: Senior/Tech Lead
Reports To: Director, Data Engineering

Job Summary:
We're looking for an AI/ML Platform Engineer to help build and scale our next-generation AI platform. This role sits at the intersection of data/AI platform engineering and MLOps, with a focus on enabling traditional AI and LLM-based systems and retrieval/search infrastructure, and on integrating models with enterprise tools. You will work closely with the Data/AI Platform team and our applied AI engineers. You'll help build, deploy, and maintain machine-learning models that drive meaningful outcomes within our behavioral-health tech and EHR platform. You will partner across product, engineering, data, and clinical teams to bring high-quality, scalable, and ethical AI into real-world use. You will create best practices, patterns, and architecture for the rest of the team to inherit.

Duties/Responsibilities:
● Design and implement MCP servers that expose internal data/services to LLMs
● Build secure, structured endpoints for model context access
● Integrate MCP services with model inference APIs
● Implement and operate a vector search engine
● Deploy models into production (cloud, on-premise, or hybrid) and integrate with upstream/downstream systems (EHR modules, APIs, microservices, dashboards)
● Monitor model performance in live settings (accuracy, drift, bias, fairness, reproducibility), and iterate on models to maintain or improve reliability and relevance
● Build and maintain machine-learning pipelines and work with the data platform team to connect AI workloads to core datasets
● Ensure security, permissions, and monitoring of AI systems
● Implement cost monitoring and usage tracking for AI workloads across internal teams
● Partner with cross-functional stakeholders (data scientists, data engineers, SDEs) to deploy these capabilities
● Stay informed about emerging AI/ML techniques, tools, and best practices (including AI ethics, bias mitigation, and interpretability), and proactively bring forward improvements or innovations
● Contribute to a culture of continuous improvement, knowledge-sharing, and mentoring of junior team members

Required Skills:
● Proficiency in Python (or an analogous language) and strong familiarity with ML frameworks/libraries (e.g., TensorFlow, PyTorch, scikit-learn)
● Experience building APIs, services, or microservices
● Knowledge of vector databases or search systems
● Experience with LLM application patterns: RAG, embeddings, prompt orchestration, and tool calling
● Experience with basic MLOps practices: model deployment, monitoring, pipeline automation, CI/CD
● Demonstrated ability to deploy models into production or near-production environments (cloud environments such as AWS, Azure, or GCP, or containerized/microservices infrastructure); GCP experience is strongly preferred
● A collaborative mindset, dependable execution, drive to reflect and improve, and humility to ask questions and learn

Education & Experience:
● Bachelor's degree (or equivalent) in Computer Science, Data Science, Statistics, Engineering, or a related field
● 5+ years of platform/infrastructure engineering experience, with demonstrable recent work on LLM-based systems

Preferred:
○ Experience in healthcare, behavioral health, EHR systems, or regulated industries
○ Familiarity with MLOps practices: CI/CD for models, model monitoring, drift detection, model governance
○ Experience with NLP (clinical text) or computer vision (imaging) tasks
○ Familiarity with cloud-native services for ML (e.g., AWS SageMaker, Azure ML, GCP AI Platform) and related infrastructure (Docker, Kubernetes)
○ Awareness of AI ethics, bias/fairness issues, and model interpretability techniques
○ Experience mentoring others or leading small technical initiatives

Physical Requirements:
● Prolonged periods sitting at a desk and working on a computer.
● Must be able to frequently communicate with others through virtual meeting applications such as Zoom and Google Meet.
● Must be able to observe and communicate information on a company-provided laptop.
● Move up to 10 pounds on occasion.

This job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.