

AI/ML Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior AI/ML Engineer specializing in NLP and Generative AI, based in Malvern, PA. It is a contract position requiring 4-5+ years of experience, proficiency in AWS Bedrock and LangChain, and a degree in Computer Science or a related field.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 29, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Malvern, PA
Skills detailed
#"ETL (Extract, Transform, Load)" #LangChain #PyTorch #Data Science #AutoScaling #Athena #NLP (Natural Language Processing) #DynamoDB #Computer Science #AI (Artificial Intelligence) #A/B Testing #Databases #Lambda (AWS Lambda) #AWS (Amazon Web Services) #Monitoring #OpenSearch #Classification #NLG (Natural Language Generation) #Kafka (Apache Kafka) #Transformers #Data Processing #Python #ML (Machine Learning) #Scala #Cloud #AWS Kinesis #SageMaker
Role description
Lead AI/ML Engineer – NLP & LLM
Malvern, PA (On-site)
Contract
JOB DESCRIPTION:
We are seeking a highly experienced Senior AI/ML Engineer with deep expertise in Natural Language Processing (NLP), Generative AI, and cloud-native ML systems. This role is ideal for someone who has built production-ready intent detection models and NLG systems and has strong experience with AWS Bedrock, LangChain, and LangGraph. You'll play a key role in architecting and scaling AI-first applications that leverage the latest in LLMs, orchestration, and AWS-native services.
Key Responsibilities:
• Design, develop, and deploy intent classification and intent detection models using LLMs and traditional NLP methods.
• Build and optimize Natural Language Generation (NLG) pipelines for chatbot responses, summarization, content creation, or knowledge grounding.
• Architect and implement LangChain- and LangGraph-based applications for LLM-driven workflows (e.g., autonomous agents, RAG systems).
• Develop scalable machine learning pipelines using the AWS tech stack (e.g., SageMaker, Lambda, Bedrock, Step Functions, DynamoDB, Athena).
• Integrate and fine-tune foundation models via AWS Bedrock, including Amazon Titan, Anthropic Claude, or Meta Llama.
• Collaborate closely with product managers, ML researchers, and backend engineers to translate business requirements into robust AI solutions.
• Lead experimentation efforts, conduct A/B testing, and ensure continuous evaluation of deployed ML models.
• Mentor junior ML engineers and contribute to best practices in MLOps, model governance, and responsible AI.
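As a hypothetical illustration of the first responsibility, intent detection can be framed as a zero-shot classification prompt sent to a Bedrock-hosted model. The sketch below only builds a Converse-API-style request payload locally (no AWS call is made); the intent labels, the model ID, and the `build_intent_request` helper are all assumptions for illustration, not part of this role's actual stack.

```python
# Hypothetical intent labels; a real system would derive these from product requirements.
INTENTS = ["check_balance", "transfer_funds", "reset_password", "other"]

def build_intent_request(
    utterance: str,
    model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",  # assumed model ID
) -> dict:
    """Build a Bedrock Converse-style request asking the model to pick one intent.

    This only constructs the payload; actually invoking it would require
    boto3's bedrock-runtime client and AWS credentials.
    """
    prompt = (
        "Classify the user's intent. Reply with exactly one label from: "
        + ", ".join(INTENTS)
        + f"\n\nUser: {utterance}"
    )
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 10, "temperature": 0.0},
    }

req = build_intent_request("I forgot my password")
print(req["modelId"])
```

Temperature is pinned to 0 so the classification is deterministic; a production system would also validate that the model's reply is one of the allowed labels before acting on it.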
Required Qualifications:
• 4-5+ years of experience in machine learning, with a focus on NLP and Generative AI.
• Strong experience building and deploying intent detection, text classification, sequence tagging, and entity recognition models.
• Proficiency with LangChain, LangGraph, vector databases (e.g., FAISS, Pinecone), and orchestration of LLM workflows.
• Deep knowledge of AWS Bedrock, Amazon SageMaker, Lambda, DynamoDB, Step Functions, etc.
• Experience working with open-source LLMs (Llama, Mistral, Falcon) or commercial APIs (Claude, GPT-4, etc.).
• Proficiency in Python, with a solid grasp of ML frameworks such as PyTorch, Hugging Face Transformers, and scikit-learn.
• Strong understanding of MLOps practices, including model versioning, CI/CD for ML, monitoring, and auto-scaling.
• Bachelor's or Master's in Computer Science, Data Science, or a related field.
Nice to Have:
• Experience integrating RAG (Retrieval-Augmented Generation) systems at scale.
• Familiarity with vector search using Amazon OpenSearch, Pinecone, or Weaviate.
• Experience with streaming data processing (e.g., AWS Kinesis, Kafka).
• Contributions to open-source AI/ML or NLP projects.
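To make the RAG retrieval step concrete, here is a minimal sketch of ranking documents against a query with cosine similarity. It uses toy bag-of-words vectors in place of a real embedding model and a plain list in place of a vector database such as FAISS, Pinecone, or OpenSearch; the document texts and helper names are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts. A real RAG system would
    # call an embedding model (e.g., via Bedrock) instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank all documents by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Reset your password from the account settings page",
    "Transfer funds between checking and savings accounts",
]
print(retrieve("how do I reset my password", docs))
```

The retrieved passages would then be inserted into the LLM prompt as grounding context; swapping the toy `embed` for a real model and the linear scan for an approximate-nearest-neighbor index is what "at scale" adds.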