Behaviour API¶
API reference for SPADE_LLM behaviour classes.
LLMBehaviour¶
Core behaviour that handles the LLM interaction loop.
Constructor¶
LLMBehaviour(
    llm_provider: LLMProvider,
    reply_to: Optional[str] = None,
    routing_function: Optional[RoutingFunction] = None,
    context_manager: Optional[ContextManager] = None,
    termination_markers: Optional[List[str]] = None,
    max_interactions_per_conversation: Optional[int] = None,
    on_conversation_end: Optional[Callable[[str, str], None]] = None,
    tools: Optional[List[LLMTool]] = None
)
Parameters:

- llm_provider - LLM provider instance
- reply_to - Fixed reply destination (optional)
- routing_function - Custom routing function for outgoing responses (optional)
- context_manager - Context manager instance (optional)
- termination_markers - Strings that mark the end of a conversation (optional)
- max_interactions_per_conversation - Per-conversation interaction limit (optional)
- on_conversation_end - Callback invoked with the conversation ID and end reason (optional)
- tools - Available tools (optional)
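Routing is pluggable. A minimal sketch, assuming the routing function receives the inbound message, the LLM response text, and the context, and returns a destination JID; check the RoutingFunction type in your version for the exact signature. route_by_topic and billing@example.com are illustrative:

def route_by_topic(msg, response, context):
    """Illustrative router: send billing-related replies to a specialist agent."""
    if "invoice" in response.lower():
        return "billing@example.com"
    return str(msg.sender)  # default: reply to the original sender

behaviour = LLMBehaviour(
    llm_provider=provider,
    routing_function=route_by_topic,
)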
Methods¶
register_tool(tool: LLMTool)¶
Register a tool with the behaviour.
tool = LLMTool(name="func", description="desc", parameters={}, func=my_func)
behaviour.register_tool(tool)
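A fuller sketch with a concrete tool function; get_time and the JSON-schema parameters are illustrative, and the LLMTool import path is assumed:

from datetime import datetime, timezone

from spade_llm.tools import LLMTool  # import path assumed

def get_time() -> str:
    """Return the current UTC time as an ISO-8601 string."""
    return datetime.now(timezone.utc).isoformat()

tool = LLMTool(
    name="get_time",
    description="Get the current UTC time",
    parameters={"type": "object", "properties": {}},
    func=get_time,
)
behaviour.register_tool(tool)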
get_tools() -> List[LLMTool]¶
Get registered tools.
reset_conversation(conversation_id: str) -> bool¶
Reset conversation state.
get_conversation_state(conversation_id: str) -> Optional[Dict[str, Any]]¶
Get conversation state.
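The two accessors above can be combined to inspect and recover a failed conversation. A sketch, assuming the state dict carries a "state" key as shown under Conversation Lifecycle below; conv_id is a hypothetical conversation ID:

conv_id = "user1@example.com_session"  # hypothetical conversation ID

state = behaviour.get_conversation_state(conv_id)
if state is not None and state.get("state") == "error":
    # Clear stored state so the next message starts a fresh conversation
    behaviour.reset_conversation(conv_id)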
Processing Loop¶
The behaviour automatically:
- Receives XMPP messages
- Updates conversation context
- Calls LLM provider
- Executes requested tools
- Routes responses
Conversation States¶
class ConversationState:
    ACTIVE = "active"
    COMPLETED = "completed"
    ERROR = "error"
    TIMEOUT = "timeout"
    MAX_INTERACTIONS_REACHED = "max_interactions_reached"
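These constants are the reason values passed to the on_conversation_end callback. A minimal sketch; the ConversationState import path is assumed:

from spade_llm.behaviour import ConversationState  # import path assumed

def on_end(conversation_id: str, reason: str) -> None:
    if reason == ConversationState.MAX_INTERACTIONS_REACHED:
        print(f"{conversation_id} hit the interaction limit")
    elif reason == ConversationState.ERROR:
        print(f"{conversation_id} ended with an error")

behaviour = LLMBehaviour(
    llm_provider=provider,
    on_conversation_end=on_end,
)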
Example¶
from spade_llm.behaviour import LLMBehaviour
from spade_llm.context import ContextManager

context = ContextManager(system_prompt="You are helpful")

behaviour = LLMBehaviour(
    llm_provider=provider,
    context_manager=context,
    termination_markers=["DONE", "END"],
    max_interactions_per_conversation=10
)

# Used internally by LLMAgent
agent.add_behaviour(behaviour)
Internal Architecture¶
Message Processing Flow¶
async def run(self):
    """Main processing loop."""
    msg = await self.receive(timeout=10)
    if not msg:
        return

    # Derive the conversation ID (illustrative: thread ID, falling back to sender)
    conversation_id = str(msg.thread) if msg.thread else str(msg.sender)

    # Update context
    self.context.add_message(msg, conversation_id)

    # Process with LLM
    await self._process_message_with_llm(msg, conversation_id)
Tool Execution Loop¶
async def _process_message_with_llm(self, msg, conversation_id):
    """Process message with tool execution."""
    max_iterations = 20
    current_iteration = 0

    while current_iteration < max_iterations:
        response = await self.provider.get_llm_response(self.context, self.tools)
        tool_calls = response.get('tool_calls', [])

        if not tool_calls:
            # Final response
            break

        # Execute tools
        for tool_call in tool_calls:
            await self._execute_tool(tool_call)

        current_iteration += 1
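The per-call helper is not shown above. A rough sketch of its job; tool.execute and context.add_tool_result are assumed names that may differ from the real implementation:

async def _execute_tool(self, tool_call):
    """Illustrative: resolve the named tool, run it, and record the result."""
    tool = next(t for t in self.tools if t.name == tool_call["name"])
    result = await tool.execute(**tool_call.get("arguments", {}))
    # Feed the result back into context so the next LLM call can see it
    self.context.add_tool_result(tool_call, result)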
Conversation Lifecycle¶
def _end_conversation(self, conversation_id: str, reason: str):
    """End conversation and cleanup."""
    self._active_conversations[conversation_id]["state"] = reason

    if self.on_conversation_end:
        self.on_conversation_end(conversation_id, reason)
Advanced Usage¶
Custom Behaviour¶
class CustomLLMBehaviour(LLMBehaviour):
    async def run(self):
        """Custom processing logic."""
        # Pre-processing
        await self.custom_preprocessing()

        # Standard processing
        await super().run()

        # Post-processing
        await self.custom_postprocessing()

    async def custom_preprocessing(self):
        """Custom preprocessing."""
        pass

    async def custom_postprocessing(self):
        """Custom postprocessing."""
        pass
Direct Usage¶
# Rarely used directly - usually through LLMAgent
from spade.agent import Agent
from spade.template import Template

from spade_llm.behaviour import LLMBehaviour

class MyAgent(Agent):
    async def setup(self):
        behaviour = LLMBehaviour(llm_provider=provider)
        template = Template()
        self.add_behaviour(behaviour, template)
Error Handling¶
The behaviour handles various error conditions:
- Provider Errors: LLM service failures
- Tool Errors: Tool execution failures
- Timeout Errors: Response timeouts
- Conversation Limits: Max interaction limits
import logging

logger = logging.getLogger(__name__)

try:
    await behaviour._process_message_with_llm(msg, conv_id)
except Exception as e:
    logger.error(f"Processing error: {e}")
    # _end_conversation is synchronous (see Conversation Lifecycle above)
    behaviour._end_conversation(conv_id, ConversationState.ERROR)
Performance Considerations¶
- Tool Iteration Limit: Prevents infinite tool loops
- Conversation Cleanup: Removes completed conversations
- Message Deduplication: Prevents duplicate processing
- Context Management: Efficient memory usage
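Cleanup can also be driven from the public API. A sketch that discards stored state as soon as a conversation ends; cleanup_on_end is illustrative, and the name behaviour resolves at call time, so defining the callback first is fine:

def cleanup_on_end(conversation_id: str, reason: str) -> None:
    # Drop stored context for finished conversations to keep memory bounded
    behaviour.reset_conversation(conversation_id)

behaviour = LLMBehaviour(
    llm_provider=provider,
    max_interactions_per_conversation=10,  # bounds interactions per conversation
    on_conversation_end=cleanup_on_end,
)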
Best Practices¶
- Let LLMAgent manage the behaviour lifecycle
- Use appropriate termination markers
- Set reasonable interaction limits
- Handle conversation end events
- Monitor conversation states