

UP.Labs
Dev/LLMOps Engineer - Stealth Aviation Safety Intelligence Startup (Contract)
Featured Role | Apply directly with Data Freelance Hub
This role is for a Dev/LLMOps Engineer on a short-term contract (less than a month) with an unspecified pay rate, focused on building agentic AI systems with LLMs. Requirements include 7+ years of experience deploying ML solutions, LLM integration experience, and familiarity with cloud platforms (AWS, Azure, GCP). Location: remote.
Country
United States
-
Currency
$ USD
-
Day rate
Unknown
-
Date
March 13, 2026
-
Duration
Less than a month
-
Location
Remote
-
Contract
Fixed Term
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#AWS (Amazon Web Services) #Agile #AI (Artificial Intelligence) #Data Cleansing #LangChain #Databricks #Data Engineering #Computer Science #DevOps #Deployment #GCP (Google Cloud Platform) #Azure #Lean #Monitoring #Cloud #Automation #Strategy #ML (Machine Learning) #Security
Role description
Note: This is a short-term contract position with potential for extension
Overview:
UP.Labs is a dynamic venture studio dedicated to building innovative startup companies from the ground up. Our team thrives on solving complex problems, driving technological advancements, and creating impactful digital products.
We're seeking a highly skilled Dev/LLMOps Engineer to join our growing team and contribute to our mission of launching the next wave of successful startups.
Technical Challenge:
We're looking for a Dev/LLMOps Engineer with a deep technical foundation and a passion for building agentic AI systems that integrate directly with Large Language Models (LLMs). This is an opportunity to drive the development of intelligent, autonomous agents that reason, plan, and interact with users in real time.
As an experienced Dev/LLMOps Engineer, you will design, deploy, and maintain the infrastructure that supports our Large Language Model (LLM) platforms. This role combines DevOps best practices with AI/ML infrastructure expertise, ensuring reliable model training, fine-tuning, and deployment at scale. The ideal candidate is highly skilled in cloud automation, container orchestration, and model-serving pipelines.
In this role you will:
β’ Own the strategy and execution of agentic AI systems using LLMs and autonomous decision-making components.
β’ Develop and scale prototypes into MVPs that demonstrate the real-world viability of complex AI ideas.
β’ Collaborate cross-functionally with engineers and product managers to embed LLMs and decision agents into user-facing features.
β’ Build and manage CI/CD pipelines for LLM training and deployment workflows.
β’ Provide technical guidance on DevOps and deployment strategies for AI models.
You should have:
β’ 7+ years of experience deploying and managing diverse machine learning and GenAI solutions in production, with a strong focus on architecture, end-to-end ownership and systems-level thinking.
• Proven experience with LLM integration, LLM-based agents, or related autonomous reasoning systems.
β’ Experience building and deploying end-to-end AI systems: from unstructured data cleansing to model training, evaluation, deployment, and monitoring.
• Databricks experience; familiarity with LangChain and Databricks GenAI capabilities is strongly recommended.
• Cloud Platforms: Experience with Azure, AWS, or GCP for access control, multi-tenant deployments, and CI/CD pipelines, including DevOps tools, deployment patterns, and infrastructure management.
• Ability to move fast and learn fast; strong proficiency with agile, lean product development is a must.
Preferred Expertise:
• Advanced degree (Master's or PhD) in Machine Learning, Computer Science, or a related field.
β’ Expert-level proficiency with DevOps and MLOps practices and tools for managing data, GenAI and machine learning resources and workflows.
• Fundamental and operational knowledge of designing and operating security and access controls, and the ability to scale them from prototype to enterprise.
β’ Extensive knowledge of the Data Engineering stack, specifically Databricks, is a strong plus.
β’ Experience in the aviation industry is a bonus.
Location: Remote





