# SPADE-LLM
Build distributed XMPP-based multi-agent systems powered by Large Language Models. SPADE-LLM extends the SPADE multi-agent platform with support for multiple LLM providers, enabling distributed AI applications, intelligent chatbots, and collaborative agent systems.
## Key Features
- **Built-in XMPP Server** - No external server setup required with SPADE 4.0+. Get started instantly with zero configuration.
- **Multi-Provider Support** - OpenAI GPT, Ollama, LM Studio, vLLM, Anthropic Claude, and more. Switch providers seamlessly.
- **Advanced Tool System** - Function calling with async execution, human-in-the-loop workflows, and LangChain integration (see the first sketch below this list).
- **Dual Memory Architecture** - Agent learning and conversation continuity, backed by SQLite persistence and contextual retrieval.
- **Context Management** - Multi-conversation support with automatic cleanup and intelligent context window management.
- **Guardrails System** - Content filtering and safety controls for input and output, with customizable rules and policies (see the second sketch below this list).
- **Message Routing** - Conditional routing based on LLM responses, with flexible workflows and decision trees.
- **MCP Integration** - Model Context Protocol (MCP) server support for external tool integration and service connectivity.
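To make the tool system concrete, here is a minimal sketch of exposing a Python function to an agent. The `LLMTool` constructor arguments shown (name, description, JSON-schema parameters, and an async callable) and the `tools=` parameter are assumptions modeled on common function-calling APIs; see the Tools System guide for the authoritative signatures.

```python
from spade_llm import LLMAgent, LLMProvider, LLMTool

# Hypothetical tool: the LLM can call this to answer weather questions.
async def get_temperature(city: str) -> str:
    # A real tool would query a weather API here.
    return f"The temperature in {city} is 21°C"

# Assumption: LLMTool wraps a name, a description, a JSON-schema parameter
# spec, and the callable itself, mirroring common function-calling APIs.
weather_tool = LLMTool(
    name="get_temperature",
    description="Get the current temperature for a city",
    parameters={
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    func=get_temperature,
)

agent = LLMAgent(
    jid="assistant@localhost",
    password="password",
    provider=LLMProvider.create_openai(api_key="your-api-key", model="gpt-4o-mini"),
    system_prompt="You are a helpful assistant",
    tools=[weather_tool],  # assumption: tools are registered at construction
)
```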
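In the same spirit, here is a sketch of input filtering. The `KeywordGuardrail` class, its import path, and the `input_guardrails=` parameter are all assumptions used for illustration; consult the Guardrails guide for the real names and options.

```python
from spade_llm import LLMAgent, LLMProvider
from spade_llm.guardrails import KeywordGuardrail  # assumed import path

# Assumption: a guardrail that blocks incoming messages containing the
# listed keywords and answers with a fixed refusal instead of calling the LLM.
blocklist = KeywordGuardrail(
    name="sensitive_topics",
    blocked_keywords=["password", "credit card"],
    blocked_message="Sorry, I can't help with that.",
)

agent = LLMAgent(
    jid="assistant@localhost",
    password="password",
    provider=LLMProvider.create_openai(api_key="your-api-key", model="gpt-4o-mini"),
    input_guardrails=[blocklist],  # assumption: checked before each LLM call
)
```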
## Architecture Overview
```mermaid
graph LR
    A[LLMAgent] --> C[ContextManager]
    A --> D[LLMProvider]
    A --> E[LLMTool]
    A --> G[Guardrails]
    A --> M[Memory]
    D --> F[OpenAI/Ollama/etc]
    G --> H[Input/Output Filtering]
    E --> I[Human-in-the-Loop]
    E --> J[MCP]
    E --> P[CustomTool/LangchainTool]
    J --> K[STDIO]
    J --> L[HTTP Streaming]
    M --> N[Agent-based]
    M --> O[Agent-thread]
```
## Quick Start
```python
import spade
from spade_llm import LLMAgent, LLMProvider

async def main():
    # First, start SPADE's built-in server in another terminal:
    #   spade run
    provider = LLMProvider.create_openai(
        api_key="your-api-key",
        model="gpt-4o-mini"
    )

    agent = LLMAgent(
        jid="assistant@localhost",
        password="password",
        provider=provider,
        system_prompt="You are a helpful assistant"
    )

    await agent.start()

if __name__ == "__main__":
    spade.run(main())
```
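The assistant only acts when it receives XMPP messages. The sketch below uses plain SPADE APIs to run a second agent that sends the assistant a prompt and prints the reply; the JIDs reuse the quick-start values and assume both accounts exist on the local `spade run` server.

```python
import spade
from spade.agent import Agent
from spade.behaviour import OneShotBehaviour
from spade.message import Message

class ClientAgent(Agent):
    class AskBehaviour(OneShotBehaviour):
        async def run(self):
            # Send a prompt to the LLM agent from the quick start.
            msg = Message(to="assistant@localhost")
            msg.body = "What is SPADE?"
            await self.send(msg)

            # Wait up to 30 seconds for the LLM-generated reply.
            reply = await self.receive(timeout=30)
            if reply:
                print(f"Assistant replied: {reply.body}")
            await self.agent.stop()  # shut the client down afterwards

    async def setup(self):
        self.add_behaviour(self.AskBehaviour())

async def main():
    client = ClientAgent("client@localhost", "password")
    await client.start()
    await spade.wait_until_finished(client)

if __name__ == "__main__":
    spade.run(main())
```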
## Documentation Structure
### Getting Started
- Installation - Setup and requirements
- Quick Start - Basic usage examples
### Core Guides
- Architecture - The general structure of SPADE_LLM
- Providers - LLM provider configuration
- Tools System - Function calling capabilities
- Memory System - Agent learning and conversation continuity
- Context Management - Context control and message management
- Conversations - Conversation lifecycle and management
- Guardrails - Content filtering and safety controls
- Message Routing - Conditional message routing
### Reference
- API Reference - Complete API documentation
- Examples - Working code examples
## Examples
Explore the examples directory for complete working examples:
- `multi_provider_chat_example.py` - Chat with different LLM providers
- `ollama_with_tools_example.py` - Local models with tool calling
- `guardrails_example.py` - Content filtering and safety controls
- `langchain_tools_example.py` - LangChain tool integration
- `valencia_multiagent_trip_planner.py` - Multi-agent workflow
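As a preview of what `multi_provider_chat_example.py` covers, switching providers amounts to swapping the factory call; everything else stays the same. The `create_ollama` factory and its parameters below are assumptions modeled on the `create_openai` call from the quick start; check the Providers guide for the exact factory names.

```python
from spade_llm import LLMAgent, LLMProvider

# Cloud provider, as shown in the quick start above.
openai_provider = LLMProvider.create_openai(
    api_key="your-api-key",
    model="gpt-4o-mini"
)

# Assumption: a parallel factory for local Ollama models (parameter names
# are illustrative; localhost:11434 is Ollama's default endpoint).
ollama_provider = LLMProvider.create_ollama(
    model="llama3.1",
    base_url="http://localhost:11434/v1"
)

# Only the provider changes; the agent setup is otherwise identical.
agent = LLMAgent(
    jid="assistant@localhost",
    password="password",
    provider=ollama_provider,  # swap in openai_provider to use OpenAI
    system_prompt="You are a helpful assistant"
)
```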