# Installation

## Requirements
- Python 3.10+
- 4GB+ RAM (8GB+ for local models)
## Install
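The install command itself did not survive extraction here; a minimal sketch, assuming SPADE_LLM is published on PyPI under the name `spade_llm`:

```shell
# Install SPADE_LLM from PyPI (assumed package name: spade_llm)
pip install spade_llm
```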
## Verify Installation
```python
import spade_llm
from spade_llm import LLMAgent, LLMProvider

print(f"SPADE_LLM version: {spade_llm.__version__}")
```
## LLM Provider Setup

Choose one provider:
### OpenAI
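The OpenAI setup commands were lost in extraction; a minimal sketch, assuming the provider reads the standard `OPENAI_API_KEY` environment variable:

```shell
# Make your OpenAI API key available to the agent process
# (assumption: the provider reads the standard OPENAI_API_KEY variable)
export OPENAI_API_KEY="sk-..."
```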
### Ollama (Local)
```shell
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Download a model
ollama pull llama3.1:8b

# Start the Ollama server
ollama serve
```
### LM Studio (Local)
- Download LM Studio
- Download a model through the GUI
- Start the local server
## XMPP Server

For development, use a public XMPP server such as jabber.at, or set up Prosody locally.
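A local Prosody setup can be sketched as follows (Debian/Ubuntu packages assumed; the JID `agent@localhost` is illustrative):

```shell
# Install the Prosody XMPP server (Debian/Ubuntu)
sudo apt install prosody

# Create an account for an agent (prompts for a password; JID is illustrative)
sudo prosodyctl adduser agent@localhost

# Start the server
sudo prosodyctl start
```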
## Development Install
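The development-install commands are missing from the source; a typical editable install, assuming the project is hosted in a Git repository (substitute the real repository URL):

```shell
# Clone the source (URL is a placeholder)
git clone <repository-url> spade_llm
cd spade_llm

# Install in editable mode so local changes take effect immediately
pip install -e .
```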
## Troubleshooting
**Import errors**: Ensure you're in the correct Python environment.

**SSL errors**: For development only, disable SSL verification.

**Ollama connection**: Check whether Ollama is running.
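One way to check, assuming Ollama is on its default port 11434:

```shell
# Ollama listens on port 11434 by default; /api/tags lists installed models,
# so a JSON response confirms the server is up
curl http://localhost:11434/api/tags
```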