# SPADE-LLM: SPADE with Large Language Models

An extension for SPADE that integrates Large Language Models into multi-agent systems.
## Features

- Multi-Provider Support: OpenAI, Ollama, LM Studio, vLLM
- Tool System: Function calling with async execution
- Memory System: Dual memory architecture for agent learning and conversation continuity
- Context Management: Multi-conversation support with automatic cleanup
- Message Routing: Conditional routing based on LLM responses
- Guardrails System: Content filtering and safety controls for input/output
- MCP Integration: Model Context Protocol server support
- Production Ready: Comprehensive error handling and logging
## Architecture

```mermaid
graph LR
    A[LLMAgent] --> C[ContextManager]
    A --> D[LLMProvider]
    A --> E[LLMTool]
    A --> G[Guardrails]
    A --> M[Memory]
    D --> F[OpenAI/Ollama/etc]
    G --> H[Input/Output Filtering]
    E --> I[Human-in-the-Loop]
    E --> J[MCP]
    E --> P[CustomTool/LangchainTool]
    J --> K[STDIO]
    J --> L[HTTP Streaming]
    M --> N[Agent-based]
    M --> O[Agent-thread]
```
## Quick Start

```python
import spade
from spade_llm import LLMAgent, LLMProvider


async def main():
    provider = LLMProvider.create_openai(
        api_key="your-api-key",
        model="gpt-4o-mini"
    )

    agent = LLMAgent(
        jid="assistant@example.com",
        password="password",
        provider=provider,
        system_prompt="You are a helpful assistant"
    )

    await agent.start()


if __name__ == "__main__":
    spade.run(main())
```
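To see how the tool system from the feature list fits into this, here is a minimal sketch of a tool: the body is a plain async function, and its parameters are described with a JSON-Schema object (the OpenAI-style function-calling format used by the supported providers). The `LLMTool` wiring shown in the comments is an assumption based on the names in this page's architecture diagram; check the API Reference for the exact signature.

```python
import asyncio
from datetime import datetime, timezone


async def get_time(fmt: str = "iso") -> str:
    """Tool body: a plain async function the LLM can invoke."""
    now = datetime.now(timezone.utc)
    return now.isoformat() if fmt == "iso" else now.strftime("%H:%M:%S")


# JSON-Schema description of the tool's parameters
# (OpenAI-style function calling).
GET_TIME_SCHEMA = {
    "type": "object",
    "properties": {
        "fmt": {"type": "string", "enum": ["iso", "clock"]},
    },
    "required": [],
}

# Hypothetical wiring -- requires spade_llm and a reachable XMPP
# server, and the exact LLMTool signature may differ:
#
#   from spade_llm import LLMAgent, LLMTool
#   tool = LLMTool(name="get_time", description="Current UTC time",
#                  parameters=GET_TIME_SCHEMA, func=get_time)
#   agent = LLMAgent(jid="assistant@example.com", password="password",
#                    provider=provider, tools=[tool],
#                    system_prompt="You are a helpful assistant")

if __name__ == "__main__":
    print(asyncio.run(get_time()))
```

The async tool body matches the "function calling with async execution" feature above: the agent can await the tool without blocking its message loop.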
## Documentation Structure

### Getting Started

- Installation - Setup and requirements
- Quick Start - Basic usage examples
### Core Guides

- Architecture - General structure of SPADE-LLM
- Providers - LLM provider configuration
- Tools System - Function calling capabilities
- Memory System - Agent learning and conversation continuity
- Context Management - Context control and message management
- Conversations - Conversation lifecycle and management
- Guardrails - Content filtering and safety controls
- Message Routing - Conditional message routing
### Reference

- API Reference - Complete API documentation
- Examples - Working code examples
## Examples

Explore the examples directory for complete working examples:

- `multi_provider_chat_example.py` - Chat with different LLM providers
- `ollama_with_tools_example.py` - Local models with tool calling
- `guardrails_example.py` - Content filtering and safety controls
- `langchain_tools_example.py` - LangChain tool integration
- `valencia_multiagent_trip_planner.py` - Multi-agent workflow