

AI Engineer - 100% Remote
Featured Role | Apply direct with Data Freelance Hub
This role is for an AI Engineer on a 6-month remote contract, potentially extendable or convertible to a permanent position. Key skills include Python, AI/ML engineering, LLMs, and RESTful API integration. Experience with cloud platforms and MLOps is preferred.
Country: United States
Currency: $ USD
Day rate: 150
Date discovered: August 20, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #TypeScript #Redis #MLflow #SageMaker #API (Application Programming Interface) #Transformers #Langchain #Databases #Documentation #JavaScript #Deployment #AI (Artificial Intelligence) #ML (Machine Learning) #Python #Docker #Azure #TensorFlow #Lambda (AWS Lambda) #Model Evaluation #Monitoring #Kubernetes #AWS (Amazon Web Services) #Cloud #Libraries #Data Science #Hugging Face #Automation #GitLab #Programming #PyTorch #GitHub #ETL (Extract, Transform, Load) #Version Control #CircleCI #AWS SageMaker #Data Privacy #GCP (Google Cloud Platform) #R #PostgreSQL #Scala
Role description
Direct candidates only – NO THIRD PARTIES PLEASE. Resource 1 is seeking an AI Engineer for a remote contract with our client in the United States. The initial contract is for 6 months with a strong likelihood of extensions; given the long-term need, the position is also open to conversion to permanent. The individual will join a team responsible for the development and integration of next-generation AI solutions, working closely with Product Leaders, Researchers, and Engineers to design, develop, and deploy AI-driven systems.
This role is ideal for someone who thrives in a fast-paced, R&D-oriented environment and is passionate about delivering scalable, intelligent solutions using the latest advancements in AI.
RESPONSIBILITIES:
• Design and implement AI/ML models and pipelines using state-of-the-art tools and frameworks (e.g., PyTorch, TensorFlow, JAX).
• Optimize LLMs, vision models, or multimodal systems (using Hugging Face, LangChain, the OpenAI API, etc.).
• Build, test, and deploy AI apps and APIs in production environments (AWS, GCP, Azure, or similar); a minimal endpoint sketch follows this list.
• Integrate AI services (OpenAI, Anthropic, Cohere, or open-source models like Mistral, LLaMA).
• Collaborate with stakeholders to translate business needs into scalable AI solutions.
• Stay current with the latest research, publications, and tools in the AI ecosystem.
• Contribute to documentation, model evaluations, and ethical/secure deployment practices.
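As a rough illustration of the "build and deploy AI apps and APIs" and "integrate AI services" items (not part of the posting itself), the sketch below wraps an LLM provider call in a small REST endpoint. It assumes FastAPI and the OpenAI Python SDK v1.x; the route, model id, and environment variable are illustrative placeholders.

```python
# Minimal sketch: expose an LLM provider behind a REST endpoint.
# Assumes FastAPI + openai>=1.0; model id and route are placeholders.
import os

from fastapi import FastAPI
from openai import OpenAI
from pydantic import BaseModel

app = FastAPI()
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # key assumed to be set in the environment


class SummarizeRequest(BaseModel):
    text: str


@app.post("/summarize")
def summarize(req: SummarizeRequest) -> dict:
    # Delegate the language work to the hosted model and return its output.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model id
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": req.text},
        ],
    )
    return {"summary": response.choices[0].message.content}
```

Locally this kind of service could be run with, e.g., `uvicorn main:app --reload` (assuming the file is main.py) and containerized with Docker for deployment to AWS, GCP, or Azure.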
TECH STACK & TOOLS:
• Languages: Python (preferred), JavaScript/TypeScript (for APIs or tooling)
• Frameworks/Libraries: PyTorch, TensorFlow, Hugging Face Transformers, LangChain, Ray
• Cloud & MLOps: AWS (SageMaker, Lambda), Docker, Kubernetes, MLflow (see the tracking sketch after this list)
• AI/LLM Providers: OpenAI, Anthropic, Cohere, Google Vertex AI, Meta, Mistral
• Databases: PostgreSQL, Redis, Pinecone, Weaviate (or similar vector DBs)
• Version Control & CI/CD: GitHub, GitLab, CircleCI
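For the MLOps side of the stack, here is a minimal sketch of experiment tracking with MLflow; the experiment name, parameters, and metric values are invented for illustration and are not taken from the posting.

```python
# Minimal sketch: log one evaluation run to MLflow.
import mlflow

mlflow.set_experiment("llm-summarizer-eval")  # hypothetical experiment name

with mlflow.start_run(run_name="baseline"):
    mlflow.log_param("model", "gpt-4o-mini")
    mlflow.log_param("temperature", 0.2)
    # Metric values would come from a real evaluation harness; hard-coded here.
    mlflow.log_metric("rougeL", 0.41)
    mlflow.log_metric("latency_p95_s", 1.8)
```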
QUALIFICATIONS:
Must-Have:
• Strong experience in AI/ML engineering or applied data science
• Experience developing and deploying ML models in production
• Experience with LLMs (e.g., GPT-4, Claude, Mistral) or foundation models
• Familiarity with prompt engineering, RAG, fine-tuning, or distillation techniques (a minimal retrieval sketch follows this list)
• Python programming skills and sound software engineering practices
• RESTful API design and integration of AI services
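To make the RAG item concrete, the sketch below shows only the retrieval step: embed a small corpus, embed the query, and return the closest chunks by cosine similarity. It assumes the sentence-transformers library and uses an in-memory index in place of a managed vector DB such as Pinecone or Weaviate; the corpus and model choice are illustrative.

```python
# Minimal sketch: retrieval step of a RAG pipeline with an in-memory index.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative embedding model

corpus = [
    "Invoices are processed within five business days.",
    "Refund requests must include the original order number.",
    "Support is available Monday through Friday, 9am-5pm ET.",
]
corpus_vecs = encoder.encode(corpus, normalize_embeddings=True)


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus chunks most similar to the query."""
    query_vec = encoder.encode([query], normalize_embeddings=True)[0]
    scores = corpus_vecs @ query_vec  # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]


# The retrieved chunks would then be inserted into the LLM prompt as context.
print(retrieve("How do I ask for a refund?"))
```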
Nice-to-Have:
• Experience in MLOps or model monitoring in production environments (a minimal logging sketch follows this list)
• Contributions to open-source AI/ML projects or publications
• Familiarity with data privacy and model governance best practices
• Experience building AI-powered agents or automation pipelines
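As a rough illustration of the model-monitoring item, the sketch below times a model call and emits one structured log record per request for a downstream aggregator (e.g., CloudWatch) to consume. The wrapper, field names, and sample values are assumptions, not details from the posting.

```python
# Minimal sketch: per-call latency and token-usage logging for model monitoring.
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model_monitoring")


def log_model_call(model: str, prompt_tokens: int, completion_tokens: int, latency_s: float) -> None:
    """Emit one structured log record per model call."""
    logger.info(json.dumps({
        "model": model,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "latency_s": round(latency_s, 3),
    }))


start = time.perf_counter()
# ... call the model here (see the REST endpoint sketch earlier) ...
latency = time.perf_counter() - start
log_model_call("gpt-4o-mini", prompt_tokens=312, completion_tokens=74, latency_s=latency)
```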