# API Reference

Complete API documentation for SPADE_LLM components.

## Core Components
- Agent - LLMAgent and ChatAgent classes
- Behaviour - LLMBehaviour implementation
- Providers - LLM provider interfaces
- Tools - Tool system and LLMTool class
- Memory - Memory system API for agent learning and persistence
- Human Interface - Human-in-the-loop API and integration
- Guardrails - Content filtering and safety controls
- Context - Context and conversation management
- Routing - Message routing system
## Quick Reference

### Creating Agents

```python
from spade_llm import LLMAgent, LLMProvider

provider = LLMProvider.create_openai(api_key="key", model="gpt-4o-mini")
agent = LLMAgent(jid="agent@server.com", password="pass", provider=provider)
```
### Creating Tools

```python
from spade_llm import LLMTool

async def my_function(param: str) -> str:
    return f"Result: {param}"

tool = LLMTool(
    name="my_function",
    description="Description of function",
    parameters={
        "type": "object",
        "properties": {"param": {"type": "string"}},
        "required": ["param"],
    },
    func=my_function,
)
```
### Message Routing

```python
def router(msg, response, context):
    if "technical" in response.lower():
        return "tech@example.com"
    return str(msg.sender)

agent = LLMAgent(..., routing_function=router)
```
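Because the routing function receives the incoming message, the LLM's response text, and the conversation context as plain arguments, it can be exercised without starting an agent. A minimal sketch, assuming only that the message object exposes a `sender` attribute (the `SimpleNamespace` stand-in below is purely illustrative, not part of SPADE_LLM):

```python
from types import SimpleNamespace

def router(msg, response, context):
    # Route technical answers to a specialist agent,
    # otherwise reply to the original sender.
    if "technical" in response.lower():
        return "tech@example.com"
    return str(msg.sender)

# Stand-in for a SPADE message: only the .sender attribute is used here.
msg = SimpleNamespace(sender="user@example.com")

print(router(msg, "This is a TECHNICAL question", {}))  # tech@example.com
print(router(msg, "General conversation", {}))          # user@example.com
```

Testing routing logic this way keeps the decision rules separate from agent setup, which is useful as the number of routes grows.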
## Examples

See Examples for complete working code examples.
## Type Definitions

### Common Types

```python
from typing import Any, Dict, List, Union

# Message context
ContextMessage = Union[SystemMessage, UserMessage, AssistantMessage, ToolResultMessage]

# Routing result
RoutingResult = Union[str, List[str], RoutingResponse, None]

# Tool parameters
ToolParameters = Dict[str, Any]  # JSON Schema format
```
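Since `ToolParameters` is an ordinary dictionary in JSON Schema format, it can be inspected before a tool call is dispatched. A minimal sketch of a required-key check (the `missing_required` helper is illustrative only, not a full JSON Schema validator and not part of SPADE_LLM):

```python
from typing import Any, Dict, List

# A ToolParameters value in JSON Schema format, matching the
# "Creating Tools" example above.
params: Dict[str, Any] = {
    "type": "object",
    "properties": {"param": {"type": "string"}},
    "required": ["param"],
}

def missing_required(schema: Dict[str, Any], args: Dict[str, Any]) -> List[str]:
    # Report required keys absent from the call arguments.
    return [k for k in schema.get("required", []) if k not in args]

print(missing_required(params, {"param": "hello"}))  # []
print(missing_required(params, {}))                  # ['param']
```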
## Error Handling

All SPADE_LLM components use standard Python exceptions:

- `ValueError` - Invalid parameters or configuration
- `ConnectionError` - Network or provider connection issues
- `TimeoutError` - Operations that exceed timeout limits
- `RuntimeError` - General runtime errors
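Because these are standard exceptions, normal `try`/`except` blocks apply. A minimal sketch of distinguishing configuration errors from transient ones (the `call_provider` function below is a simulated stand-in, not a SPADE_LLM API):

```python
def call_provider(simulate: str) -> str:
    # Stand-in for a provider call; a real call would go over the network.
    if simulate == "timeout":
        raise TimeoutError("provider did not respond in time")
    if simulate == "bad-config":
        raise ValueError("invalid parameters or configuration")
    return "ok"

def safe_call(simulate: str) -> str:
    try:
        return call_provider(simulate)
    except ValueError as exc:
        # Misconfiguration: retrying will not help.
        return f"config error: {exc}"
    except (ConnectionError, TimeoutError) as exc:
        # Transient network/provider issues: a retry may succeed.
        return f"transient error: {exc}"
    except RuntimeError as exc:
        return f"runtime error: {exc}"

print(safe_call("ok"))       # ok
print(safe_call("timeout"))  # transient error: provider did not respond in time
```

Separating non-retryable configuration errors from transient network errors is usually the first distinction worth making in agent code.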
## Configuration

### Environment Variables

```bash
OPENAI_API_KEY=your-api-key
OLLAMA_BASE_URL=http://localhost:11434/v1
LM_STUDIO_BASE_URL=http://localhost:1234/v1
```
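These variables can be read with the standard library before constructing a provider; a minimal sketch, falling back to the documented local defaults when a variable is unset:

```python
import os

# None if not configured; required for the OpenAI provider.
openai_key = os.getenv("OPENAI_API_KEY")

# Local providers fall back to their documented default endpoints.
ollama_url = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434/v1")
lm_studio_url = os.getenv("LM_STUDIO_BASE_URL", "http://localhost:1234/v1")

print(ollama_url)
```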
### Provider Configuration

```python
# OpenAI
provider = LLMProvider.create_openai(api_key="key", model="gpt-4o-mini")

# Ollama
provider = LLMProvider.create_ollama(model="llama3.1:8b")

# LM Studio
provider = LLMProvider.create_lm_studio(model="local-model")
```
For detailed API documentation, see the individual component pages.