Moss + LangChain: Real-Time Retrieval for LangChain Agents
LangChain is the most popular framework for building LLM-powered applications. Moss provides a MossRetriever class that implements LangChain's BaseRetriever interface, plus a get_moss_tool() function for agentic workflows. Replace cloud vector database calls with sub-10ms local search without changing your chain architecture.
Benefits
Why Use Moss with LangChain
MossRetriever implements BaseRetriever - drop into any LangChain RAG chain or LCEL pipeline
get_moss_tool() wraps Moss as a LangChain Tool for ReAct and function-calling agents
Sub-10ms retrieval replaces 300-900ms cloud vector DB round-trips
Hybrid search via alpha parameter: blend semantic similarity with BM25 keyword matching
Async-first: ainvoke() for Jupyter notebooks and async agent loops
Integration
Quick Start
from moss_langchain import MossRetriever, get_moss_tool

# Use Moss as a LangChain retriever
retriever = MossRetriever(
    project_id="your-project-id",
    project_key="your-project-key",
    index_name="support-docs",
    top_k=3,
    alpha=0.5,  # hybrid search: semantic + keyword
)

# Async retrieval (recommended)
docs = await retriever.ainvoke("What is the return policy?")
for doc in docs:
    print(doc.page_content, doc.metadata["score"])

# Or wrap as an agent tool
tool = get_moss_tool(retriever)
# tool.name == "moss_search"
# Use with create_openai_functions_agent or create_react_agent
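For readers curious what get_moss_tool produces, its behavior can be approximated with plain Python. The SimpleTool class and StubRetriever below are illustrative stand-ins, not Moss or LangChain APIs; only the tool name "moss_search" comes from the snippet above:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SimpleTool:
    """Minimal stand-in for a LangChain Tool: a name, a description,
    and a callable the agent invokes with its search query."""
    name: str
    description: str
    func: Callable[[str], str]

def make_moss_tool(retriever) -> SimpleTool:
    """Sketch of what a get_moss_tool-style wrapper does."""
    def search(query: str) -> str:
        docs = retriever.invoke(query)       # sync path for simplicity
        return "\n\n".join(docs)             # agents consume plain text
    return SimpleTool(
        name="moss_search",
        description="Search the Moss index for passages relevant to a query.",
        func=search,
    )

# Stub standing in for MossRetriever in this sketch
class StubRetriever:
    def invoke(self, query: str) -> List[str]:
        return [f"doc about: {query}"]

tool = make_moss_tool(StubRetriever())
print(tool.name)             # moss_search
print(tool.func("returns"))  # doc about: returns
```

The key point is the shape: an agent framework sees only a named, described callable, so retrieval quality improvements on the Moss side require no agent changes.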
Get Started in 3 Steps
Install dependencies
Run pip install moss langchain langchain-openai to set up your environment.
Create a MossRetriever
Initialize MossRetriever with your project credentials and index name. The retriever auto-loads the index on first query.
Use in your chain or agent
Call retriever.ainvoke(query) in RAG chains, or use get_moss_tool() to give agents autonomous search capability.
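Step 3 can be sketched end to end with the classic "stuff retrieved docs into the prompt" RAG pattern. The template and stub documents below are illustrative assumptions; in a real chain the docs would come from `await retriever.ainvoke(question)` and the formatted prompt would be passed to your LLM:

```python
from typing import List

def format_rag_prompt(question: str, docs: List[str]) -> str:
    """Stuff retrieved passages into a numbered context block."""
    context = "\n\n".join(f"[{i + 1}] {d}" for i, d in enumerate(docs))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Stub passages standing in for retriever output in this sketch
docs = ["Returns are accepted within 30 days.", "Refunds post in 5-7 days."]
prompt = format_rag_prompt("What is the return policy?", docs)
print(prompt)
```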