

Data Scientist
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AI/ML Engineer focused on NLP and Generative AI, offering a hybrid 6-month contract in Malvern, PA that can be extended to a year. It requires 10+ years in IT, 4-5+ years in ML, AWS expertise, and proficiency in Python and MLOps practices.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 6, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Malvern, PA
Skills detailed
#AWS (Amazon Web Services) #ML (Machine Learning) #Scala #SageMaker #A/B Testing #Databases #NLP (Natural Language Processing) #Kafka (Apache Kafka) #Monitoring #Cloud #ETL (Extract, Transform, Load) #AWS Kinesis #Computer Science #Data Processing #AI (Artificial Intelligence) #Transformers #DynamoDB #LangChain #Athena #Python #OpenSearch #Data Science #PyTorch #AutoScaling #Lambda (AWS Lambda) #Classification #NLG (Natural Language Generation)
Role description
Job Title: Senior AI/ML Engineer - NLP, Generative AI, AWS
Location: Malvern, PA (Hybrid); 6-month contract, extendable to a year
About the Role:
We are seeking a highly experienced Senior AI/ML Engineer with deep expertise in Natural Language Processing (NLP), Generative AI, and cloud-native ML systems. This role is ideal for someone who has built production-ready intent detection models and NLG systems and who has strong, hands-on experience with AWS Bedrock, LangChain, and LangGraph. You'll play a key role in architecting and scaling AI-first applications that leverage the latest in LLMs, orchestration frameworks, and AWS-native services.
Key Responsibilities:
• Design, develop, and deploy intent classification and intent detection models using LLMs and traditional NLP methods
• Build and optimize Natural Language Generation (NLG) pipelines for chatbot responses, summarization, content creation, or knowledge grounding
• Architect and implement LangChain- and LangGraph-based applications for LLM-driven workflows (e.g., autonomous agents, RAG systems)
• Develop scalable machine learning pipelines using the AWS tech stack (e.g., SageMaker, Lambda, Bedrock, Step Functions, DynamoDB, Athena)
• Integrate and fine-tune foundation models via AWS Bedrock, including Amazon Titan, Anthropic Claude, or Meta Llama (see the invocation sketch after this list)
• Collaborate closely with product managers, ML researchers, and backend engineers to translate business requirements into robust AI solutions
• Lead experimentation efforts, conduct A/B testing, and ensure continuous evaluation of deployed ML models
• Mentor junior ML engineers and contribute to best practices in MLOps, model governance, and responsible AI
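To make the Bedrock-centric responsibilities concrete, here is a minimal sketch of a single-turn model invocation through the bedrock-runtime API in Python. The region, model ID, and prompt are illustrative placeholders rather than details from this posting, and a production service would add retries, streaming, and guardrails.

```python
import json

import boto3

# bedrock-runtime is the invocation client; the region and model ID below are
# illustrative placeholders, not values taken from this job posting.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def generate_reply(prompt: str,
                   model_id: str = "anthropic.claude-3-sonnet-20240229-v1:0") -> str:
    """Send a single-turn prompt to an Anthropic Claude model hosted on Bedrock."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{"role": "user", "content": prompt}],
    }
    response = bedrock.invoke_model(modelId=model_id, body=json.dumps(body))
    payload = json.loads(response["body"].read())
    # The messages API returns a list of content blocks; take the first text block.
    return payload["content"][0]["text"]


if __name__ == "__main__":
    print(generate_reply("Classify the intent of: 'I want to roll my 401(k) into an IRA.'"))
```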
Required Qualifications:
• 10+ years total in IT, including 4-5+ years of experience in machine learning with a focus on NLP and Generative AI
• Strong experience building and deploying intent detection, text classification, sequence tagging, and entity recognition models (see the baseline sketch after this list)
• Proficient in LangChain, LangGraph, vector databases (e.g., FAISS, Pinecone), and orchestration of LLM workflows
• Deep knowledge of AWS Bedrock, Amazon SageMaker, Lambda, DynamoDB, Step Functions, etc.
• Experience working with open-source LLMs (LLaMA, Mistral, Falcon) or commercial APIs (Claude, GPT-4, etc.)
• Proficient in Python, with a solid grasp of ML frameworks such as PyTorch, HuggingFace Transformers, and scikit-learn
• Strong understanding of MLOps practices including model versioning, CI/CD for ML, monitoring, and auto-scaling
• Bachelor's or Master's in Computer Science, Data Science, or a related field
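As a point of reference for the "traditional NLP methods" side of intent detection, here is a minimal scikit-learn baseline using TF-IDF features and logistic regression. The utterances and intent labels are invented for illustration; a real model would be trained and evaluated on labeled production data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data, invented purely for illustration.
utterances = [
    "I forgot my password",
    "reset my login credentials",
    "what is my account balance",
    "how much money do I have",
    "talk to a human agent",
    "connect me with support",
]
intents = [
    "account_recovery",
    "account_recovery",
    "balance_inquiry",
    "balance_inquiry",
    "agent_handoff",
    "agent_handoff",
]

# TF-IDF plus logistic regression is a cheap baseline to benchmark
# against LLM-based intent detection.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(utterances, intents)

# Expected to predict "account_recovery" given the overlapping vocabulary.
print(model.predict(["please reset my password"]))
```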
Nice to Have:
• Experience integrating RAG (Retrieval-Augmented Generation) systems at scale
• Familiarity with vector search using Amazon OpenSearch, Pinecone, or Weaviate (see the retrieval sketch after this list)
• Experience with streaming data processing (e.g., AWS Kinesis, Kafka)
• Contributions to open-source AI/ML or NLP projects
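To illustrate the retrieval step behind the RAG and vector-search items above, here is a self-contained sketch using FAISS (named in the required qualifications), with random vectors standing in for real embeddings. A production system would embed documents with a sentence encoder or a Bedrock embedding model and would typically use a managed store such as OpenSearch, Pinecone, or Weaviate.

```python
import faiss
import numpy as np

# Random vectors stand in for real document embeddings in this sketch.
dim = 384
rng = np.random.default_rng(0)
doc_embeddings = rng.random((1000, dim), dtype=np.float32)

# Normalize so inner-product search behaves like cosine similarity.
faiss.normalize_L2(doc_embeddings)
index = faiss.IndexFlatIP(dim)  # exact (brute-force) inner-product index
index.add(doc_embeddings)

# Embed the user query the same way, then retrieve the top-k chunks
# that would be stuffed into the LLM prompt (the "R" in RAG).
query = rng.random((1, dim), dtype=np.float32)
faiss.normalize_L2(query)
scores, ids = index.search(query, 5)

print("top-5 doc ids:", ids[0])
print("similarity scores:", scores[0])
```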