# Installation

## Requirements
- Python 3.10+
- 4GB+ RAM (8GB+ for local models)
## Install
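Install the latest release from PyPI (the distribution name is assumed to match the import name): `pip install spade_llm`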
## Verify Installation
```python
import spade_llm
from spade_llm import LLMAgent, LLMProvider

print(f"SPADE_LLM version: {spade_llm.__version__}")
```
## XMPP Server Setup
### Built-in SPADE Server (Recommended)
SPADE 4.0+ includes a built-in XMPP server - no external server setup required!

The built-in server provides everything you need to run SPADE-LLM agents locally. Simply start the server in one terminal (`spade run`) and run your agents in another.
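To confirm the server is accepting connections before launching agents, a quick port check is enough. A minimal sketch using only the standard library, assuming the default client port 5222 (see the troubleshooting section below):

```python
import socket

# 5222 is the default XMPP client port; adjust if you started
# the server with a custom --client_port
with socket.create_connection(("localhost", 5222), timeout=5):
    print("SPADE server is listening on localhost:5222")
```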
### Advanced XMPP Server Configuration

For custom setups, you can specify different hosts and ports:
```bash
# Custom host and ports
spade run --host localhost --client_port 6222 --server_port 6269

# Use an IP address instead of localhost
spade run --host 127.0.0.1 --client_port 6222 --server_port 6269

# Custom ports if the defaults are in use
spade run --client_port 6223 --server_port 6270
```
### Alternative: External XMPP Servers
For production deployments, or if you prefer an external server:

- Public servers: jabber.at, jabber.org (require manual account creation)
- Local Prosody: install Prosody for local hosting
- Other servers: any XMPP-compliant server
## LLM Provider Setup
Choose one provider:
### OpenAI
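The OpenAI backend only needs an API key. A minimal sketch; the `create_openai` factory and its parameter names are assumptions here, so check the provider reference for the exact API:

```python
import os

from spade_llm import LLMProvider

# Assumed factory method; export OPENAI_API_KEY before running
provider = LLMProvider.create_openai(
    api_key=os.environ["OPENAI_API_KEY"],
    model="gpt-4o-mini",  # example model name
)
```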
### Ollama (Local)
```bash
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Download a model
ollama pull llama3.1:8b

# Start the Ollama server
ollama serve
```
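With the model pulled and the server running, point SPADE-LLM at it. A sketch under the same caveat as above; the factory name and `base_url` parameter are assumptions, while 11434 is Ollama's default port:

```python
from spade_llm import LLMProvider

# Assumed factory method and parameter names
provider = LLMProvider.create_ollama(
    model="llama3.1:8b",                   # the model pulled above
    base_url="http://localhost:11434/v1",  # Ollama's default local endpoint
)
```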
### LM Studio (Local)
- Download LM Studio
- Download a model through the GUI
- Start the local server
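LM Studio's local server speaks an OpenAI-compatible API, by default on port 1234. A sketch; the factory name is an assumption, and the model identifier must match whatever you loaded in the GUI:

```python
from spade_llm import LLMProvider

# Assumed factory method; LM Studio serves an OpenAI-compatible API
provider = LLMProvider.create_lm_studio(
    model="local-model",                  # must match the model loaded in the GUI
    base_url="http://localhost:1234/v1",  # LM Studio's default server address
)
```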
## Development Install
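For working on SPADE-LLM itself, the usual editable install applies: clone the repository and run `pip install -e .` from the project root, so local changes take effect without reinstalling.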
## Troubleshooting
**SPADE server not starting:**

```bash
# Check whether the default ports are already in use
netstat -an | grep 5222

# Try different ports if needed
spade run --client_port 6222 --server_port 6269
```
**Agent connection issues:** ensure the SPADE server is running first.

**Import errors:** make sure you are in the correct Python environment.

**SSL errors:** for development with the built-in server, disable SSL verification:
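A sketch, assuming `LLMAgent` forwards SPADE's `verify_security` flag to the underlying agent (check the agent reference to confirm):

```python
from spade_llm import LLMAgent, LLMProvider

# Assumed factory; see "LLM Provider Setup" above
provider = LLMProvider.create_ollama(model="llama3.1:8b")

agent = LLMAgent(
    jid="assistant@localhost",
    password="secret",
    provider=provider,
    verify_security=False,  # development only: never disable verification in production
)
```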
**Ollama connection:** check whether Ollama is running:
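Ollama's HTTP API answers on port 11434 by default, and its `/api/tags` endpoint lists the locally installed models, so a quick request tells you whether the server is up:

```python
import json
from urllib.request import urlopen

# A response from /api/tags means the Ollama server is running
with urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    models = json.load(resp)["models"]

print("Ollama is running; models:", [m["name"] for m in models])
```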