LangChain Integration
Vidurai seamlessly integrates with LangChain, providing intelligent memory management for your LangChain applications.
Overview
The LangChain integration provides:
- ViduraiMemory - Drop-in replacement for ConversationBufferMemory
- Three-Layer Architecture - Working, Episodic, and Semantic memory
- Strategic Forgetting - Automatic removal of less important context
- Cost Optimization - 30-50% token reduction (varies by workload and RL agent maturity)
Installation
pip install vidurai[langchain]
This installs Vidurai with LangChain support.
Quick Start
Basic Usage
from langchain.llms import OpenAI
from vidurai.integrations.langchain import ViduraiConversationChain
# Create LLM
llm = OpenAI(temperature=0.7, openai_api_key="your-key")
# Create conversation chain with Vidurai memory
chain = ViduraiConversationChain.create(llm)
# Have a conversation
response = chain.predict(input="Hi, my name is Alice")
print(response)
response = chain.predict(input="What's my name?")
print(response) # Remembers: "Your name is Alice"
Using ViduraiMemory Directly
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from vidurai.integrations.langchain import ViduraiMemory
# Create Vidurai memory
memory = ViduraiMemory()
# Create conversation chain
llm = OpenAI(temperature=0.7)
chain = ConversationChain(
    llm=llm,
    memory=memory,
    verbose=True
)
# Use the chain
chain.predict(input="I'm building an AI application")
chain.predict(input="It uses LangChain and Vidurai")
chain.predict(input="What am I building?")
Use Cases
Customer Support Chatbot
from vidurai.integrations.langchain import ViduraiConversationChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.7, openai_api_key="your-key")
chatbot = ViduraiConversationChain.create(llm)
# Long conversation with automatic forgetting
chatbot.predict(input="Hi, my name is Sarah")
chatbot.predict(input="I have an issue with order #12345")
chatbot.predict(input="It was supposed to arrive yesterday")
chatbot.predict(input="The weather is nice today") # Low importance
chatbot.predict(input="Can you help track my order?")
# Vidurai keeps: name, order issue
# Vidurai forgets: weather small talk
Multi-User Application
from vidurai.integrations.langchain import ViduraiMemory
class ChatService:
    def __init__(self):
        self.user_memories = {}

    def get_memory(self, user_id: str):
        if user_id not in self.user_memories:
            self.user_memories[user_id] = ViduraiMemory()
        return self.user_memories[user_id]

    def chat(self, user_id: str, message: str):
        memory = self.get_memory(user_id)
        # Each user has isolated memory
        memory.save_context(
            {"input": message},
            {"output": "AI response"}
        )
service = ChatService()
service.chat("user_1", "I love Python")
service.chat("user_2", "I prefer JavaScript")
# Memories are completely isolated
Configuration
Memory Capacity
from vidurai import create_memory_system
from vidurai.integrations.langchain import ViduraiMemory
# Custom memory configuration
memory = ViduraiMemory()
memory.vidurai_memory = create_memory_system(
    working_capacity=15,        # Keep 15 recent items
    episodic_capacity=500,      # Store up to 500 items
    aggressive_forgetting=True  # More aggressive cleanup
)
Importance Scoring
Vidurai automatically assigns importance scores:
- Human messages: 0.7 (high importance)
- AI responses: 0.6 (medium-high importance)
- System messages: 0.5 (medium importance)
Less important memories are forgotten over time.
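The default scores above can be expressed as a small lookup. This is an illustrative sketch only; `score_message` is a hypothetical helper, not part of the Vidurai API:

```python
# Default importance scores as documented above.
# score_message is a hypothetical helper for illustration.

DEFAULT_IMPORTANCE = {
    "human": 0.7,   # Human messages: high importance
    "ai": 0.6,      # AI responses: medium-high importance
    "system": 0.5,  # System messages: medium importance
}

def score_message(role: str) -> float:
    """Return the default importance score for a message role."""
    return DEFAULT_IMPORTANCE.get(role, 0.5)

print(score_message("human"))  # 0.7
```

Unknown roles fall back to the medium score in this sketch; the real scoring logic may differ.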
API Reference
ViduraiMemory
Class: vidurai.integrations.langchain.ViduraiMemory
Drop-in replacement for LangChain's ConversationBufferMemory with intelligent forgetting.
Attributes:
- memory_key (str): Key for storing chat history (default: "chat_history")
- input_key (str): Key for input messages (default: "input")
- output_key (str): Key for output messages (default: "output")
- vidurai_memory: Underlying Vidurai memory system
Methods:
load_memory_variables(inputs: Dict) -> Dict
Load relevant memories for the current context.
save_context(inputs: Dict, outputs: Dict) -> None
Save conversation context to memory with automatic importance scoring.
clear() -> None
Clear all memories and reset the system.
ViduraiConversationChain
Class: vidurai.integrations.langchain.ViduraiConversationChain
Helper class for creating conversation chains with Vidurai memory.
Methods:
create(llm, verbose: bool = False) -> ConversationChain
Create a LangChain ConversationChain with Vidurai memory.
Parameters:
- llm: LangChain LLM instance
- verbose (bool): Enable verbose output
Returns:
- ConversationChain with ViduraiMemory
Performance & Cost Savings
Token Reduction
Standard ConversationBufferMemory:
- 100 messages × 500 tokens = 50,000 tokens per request
ViduraiMemory:
- 10 messages × 500 tokens = 5,000 tokens per request
This illustration shows a best-case 90% reduction; in practice, expect 30-50% token reduction (varies by workload and RL agent maturity).
Cost Savings Example
At $0.03 per 1K tokens (GPT-4):
- Standard: $1.50 per request
- Vidurai: $0.95 per request (36.6% reduction)
- Savings: $0.55 per request
For 10,000 users daily:
- Daily savings: $5,500
- Monthly savings: $165,000
Note: Actual savings vary by workload and RL agent maturity.
At scale with COST_FOCUSED profile, savings approach $16,182/day.
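The arithmetic behind the figures above can be checked directly. The token counts, the $0.03/1K price, and the 36.6% reduction are taken from the example; actual results vary:

```python
PRICE_PER_1K = 0.03          # Example GPT-4 price per 1K tokens
BASELINE_TOKENS = 100 * 500  # 100 messages x 500 tokens

baseline_cost = BASELINE_TOKENS / 1000 * PRICE_PER_1K  # $1.50 per request
vidurai_cost = baseline_cost * (1 - 0.366)             # ~$0.95 at a 36.6% reduction
savings = baseline_cost - vidurai_cost                 # ~$0.55 per request

daily = savings * 10_000   # 10,000 users daily -> ~$5,500/day
monthly = daily * 30       # ~$165,000/month

print(f"${baseline_cost:.2f} -> ${vidurai_cost:.2f}, saving ${savings:.2f}/request")
```

Note the headline numbers are rounded: the exact per-request saving is $0.549, giving $5,490/day before rounding.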
Comparison with Standard Memory
| Feature | ConversationBufferMemory | ViduraiMemory |
|---|---|---|
| Memory capacity | Unlimited (grows forever) | Intelligent limits |
| Token usage | High (all history) | Low (relevant only) |
| Forgetting | Never | Strategic |
| Cost | High for long conversations | Typically 30-50% lower |
| Response speed | Slower (more context) | Faster (less context) |
| Focus | May lose focus | Maintains relevance |
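The "intelligent limits" row in the table can be modeled with a toy capacity-bounded buffer that evicts the least important item when full. This is a simplification for intuition, not Vidurai's actual forgetting strategy:

```python
import heapq

class BoundedMemory:
    """Toy buffer: keeps at most `capacity` items, evicting the least important."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = []  # min-heap of (importance, insertion_order, text)
        self._order = 0

    def remember(self, text: str, importance: float):
        heapq.heappush(self.items, (importance, self._order, text))
        self._order += 1
        if len(self.items) > self.capacity:
            heapq.heappop(self.items)  # forget the lowest-importance item

    def recall(self):
        return [text for _, _, text in sorted(self.items)]

mem = BoundedMemory(capacity=2)
mem.remember("User's name is Sarah", importance=0.9)
mem.remember("Weather small talk", importance=0.2)
mem.remember("Order #12345 is late", importance=0.8)
print(mem.recall())  # the low-importance weather chat was evicted
```

Because the buffer is bounded, the context passed to the LLM stays small regardless of conversation length, which is the source of the token and latency savings in the table.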
Best Practices
1. Set Appropriate Capacity
# For short conversations (customer support)
memory = create_memory_system(working_capacity=10)
# For long-running agents
memory = create_memory_system(working_capacity=20)
# For archival applications
memory = create_memory_system(episodic_capacity=10000)
2. Use Importance Scoring
# High importance - never forget
memory.remember("User's email: alice@example.com", importance=0.95)
# Medium importance - keep for a while
memory.remember("User prefers email notifications", importance=0.7)
# Low importance - forget soon
memory.remember("User checked weather", importance=0.3)
3. Clear Memory When Appropriate
# Start fresh conversation
memory.clear()
# Or reset for new user session
if session_ended:
    memory.clear()
Troubleshooting
Issue: Memories not being recalled
Solution: Check importance scores. Very low importance items are forgotten quickly.
# Increase importance for critical information
memory.remember("Important data", importance=0.9)
Issue: Too much being remembered
Solution: Enable aggressive forgetting or reduce capacity.
memory = create_memory_system(
    working_capacity=5,  # Smaller capacity
    aggressive_forgetting=True
)
Issue: Context not persisting across sessions
Solution: Use episodic memory for longer-term storage.
memory = create_memory_system(
    episodic_capacity=1000  # More long-term storage
)
Next Steps
- GitHub: View source code
- PyPI: Install package
- Examples: See code examples throughout this page
Support
- Discord: Join our community
- GitHub: Report issues
- Documentation: docs.vidurai.ai
विस्मृति भी विद्या है (Forgetting too is knowledge)