

SRS Consulting Inc
AI/ML Engineer - Direct Client
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AI/ML Engineer, remote for 6 months, with a pay rate of "$XX/hour." Key skills include Python, ML frameworks, API development, and experience with LLM systems. A bachelor's degree and 5+ years in platform engineering are required.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: January 23, 2026
Duration: More than 6 months
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: United States
Skills detailed:
#Data Engineering #Computer Science #Data Science #Monitoring #Docker #AI (Artificial Intelligence) #Scala #ML (Machine Learning) #Statistics #Security #Azure #Model Deployment #Python #Datasets #Microservices #REST (Representational State Transfer) #SageMaker #AWS SageMaker #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #Deployment #Libraries #PyTorch #NLP (Natural Language Processing) #Kubernetes #Automation #Cloud #TensorFlow #Databases
Role description
Job Title: AI/ML Engineer
Location: Remote
Duration: 6 Months
Level: Senior/Tech Lead
Reports To: Director, Data Engineering
Job Summary:
We're looking for an AI/ML Platform Engineer to help build and scale our next-generation AI platform. This role sits at the intersection of data/AI platform engineering and MLOps, with a focus on enabling traditional AI and LLM-based systems, building retrieval/search infrastructure, and integrating models with enterprise tools.
You will work closely with the Data/AI Platform team and our applied AI engineer. You'll help build, deploy, and maintain machine-learning models that drive meaningful outcomes within our behavioral-health tech and EHR platform. You will partner across product, engineering, data, and clinical teams to bring high-quality, scalable, and ethical AI into real-world use. You will create best practices, patterns, and architecture for the rest of the team to inherit.
Duties/Responsibilities:
• Design and implement MCP servers that expose internal data/services to LLMs
• Build secure, structured endpoints for model context access
• Integrate MCP services with model inference APIs
• Implement and operate a vector search engine
• Deploy models into production (cloud, on-premise, or hybrid) and integrate with upstream/downstream systems (EHR modules, APIs, microservices, dashboards)
• Monitor model performance in live settings (accuracy, drift, bias, fairness, reproducibility), and iterate on models to maintain or improve reliability and relevance
• Build and maintain machine-learning pipelines and work with the data platform team to connect AI workloads to core datasets
• Ensure security, permissions, and monitoring of AI systems
• Implement cost monitoring and usage tracking for AI workloads across internal teams
• Partner with cross-functional stakeholders (data scientists, data engineers, SDEs) to deploy these capabilities
• Stay informed about emerging AI/ML techniques, tools, and best practices (including AI ethics, bias mitigation, and interpretability), and proactively bring forward improvements or innovations
• Contribute to a culture of continuous improvement, knowledge sharing, and mentoring of junior team members
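As a point of orientation for the vector-search duty above, the core retrieval step can be sketched in a few lines of plain Python: rank stored embedding vectors by cosine similarity to a query vector. The document ids and toy vectors below are purely illustrative and not part of this role's actual stack; in production the embeddings would come from a model and live in a dedicated vector database.

```python
import math

# Toy in-memory vector index: document id -> embedding vector.
# Illustrative only; a real system would use a vector database.
index = {
    "note_1": [0.9, 0.1, 0.0],
    "note_2": [0.0, 1.0, 0.2],
    "note_3": [0.7, 0.6, 0.1],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def search(query_vec, k=2):
    """Return the top-k document ids ranked by cosine similarity."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]),
                    reverse=True)
    return ranked[:k]

print(search([1.0, 0.0, 0.0]))  # ranks note_1 first, then note_3
```

Everything else in the duties list (MCP endpoints, inference APIs, monitoring) wraps around this retrieval core.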
Required Skills:
• Proficiency in Python (or an analogous language) and strong familiarity with ML frameworks/libraries (e.g., TensorFlow, PyTorch, scikit-learn)
• Experience building APIs, services, or microservices
• Knowledge of vector databases or search systems
• Experience with LLM application patterns: RAG, embeddings, prompt orchestration, and tool calling
• Experience with basic MLOps practices: model deployment, monitoring, pipeline automation, CI/CD
• Demonstrated ability to deploy models into production or near-production environments (cloud environments such as AWS, Azure, or GCP, or containerized/microservices infrastructure); GCP experience is strongly preferred
• A collaborative mindset, dependable execution, a drive to reflect and improve, and the humility to ask questions and learn
This job description is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities that are required of the employee for this job. Duties, responsibilities, and activities may change at any time with or without notice.
Education & Experience
• Bachelor's degree (or equivalent) in Computer Science, Data Science, Statistics, Engineering, or a related field
• 5+ years of platform/infrastructure engineering experience, with demonstrable recent work on LLM-based systems
Preferred:
• Experience in healthcare, behavioral health, EHR systems, or regulated industries
• Familiarity with MLOps practices: CI/CD for models, model monitoring, drift detection, and model governance
• Experience with NLP (clinical text) or computer vision (imaging) tasks
• Familiarity with cloud-native ML services (e.g., AWS SageMaker, Azure ML, GCP AI Platform) and related infrastructure (Docker, Kubernetes)
• Awareness of AI ethics, bias/fairness issues, and model interpretability techniques
• Experience mentoring others or leading small technical initiatives
Physical Requirements
• Prolonged periods sitting at a desk and working on a computer.
• Must be able to frequently communicate with others through virtual meeting applications such as Zoom and Google Meet.
• Must be able to observe and communicate information on a company-provided laptop.
• Move up to 10 pounds on occasion.





