Agent API¶
API reference for SPADE_LLM agent classes.
LLMAgent¶
Main agent class that extends SPADE Agent with LLM capabilities.
Constructor¶
LLMAgent(
    jid: str,
    password: str,
    provider: LLMProvider,
    reply_to: Optional[str] = None,
    routing_function: Optional[RoutingFunction] = None,
    system_prompt: Optional[str] = None,
    mcp_servers: Optional[List[MCPServerConfig]] = None,
    tools: Optional[List[LLMTool]] = None,
    termination_markers: Optional[List[str]] = None,
    max_interactions_per_conversation: Optional[int] = None,
    on_conversation_end: Optional[Callable[[str, str], None]] = None,
    verify_security: bool = False
)
Parameters:
jid - Jabber ID for the agent
password - Agent password
provider - LLM provider instance
reply_to - Optional fixed reply destination
routing_function - Custom routing function
system_prompt - System instructions for the LLM
mcp_servers - MCP server configurations
tools - List of available tools
termination_markers - Conversation end markers
max_interactions_per_conversation - Conversation length limit
on_conversation_end - Callback when a conversation ends
verify_security - Enable SSL verification
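routing_function lets responses be forwarded to different JIDs depending on their content. The exact callable signature is defined by spade_llm's RoutingFunction type; the sketch below assumes it receives the incoming message, the LLM response, and a context dict, and returns the destination JID (all names and the "billing" logic are illustrative):

```python
# Hypothetical routing function. The real RoutingFunction signature is
# defined by spade_llm; here we assume (message, response, context) -> JID.
def route_by_topic(message: str, response: str, context: dict) -> str:
    # Forward billing-related replies to a specialist agent,
    # everything else back to the original sender.
    if "invoice" in response.lower():
        return "billing@example.com"
    return context.get("sender", "user@example.com")
```

A function like this would be passed as routing_function=route_by_topic when constructing the LLMAgent.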
Methods¶
add_tool(tool: LLMTool)¶
Add a tool to the agent.
tool = LLMTool(name="function", description="desc", parameters={}, func=my_func)
agent.add_tool(tool)
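The func behind an LLMTool is an ordinary callable whose arguments should match the declared parameters schema. A minimal sketch (the function name and data are illustrative, not part of the library):

```python
def get_weather(city: str) -> str:
    # Illustrative tool body; a real tool would query a weather service.
    conditions = {"Valencia": "sunny", "London": "rainy"}
    return conditions.get(city, "unknown")
```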
get_tools() -> List[LLMTool]¶
Get all registered tools.
reset_conversation(conversation_id: str) -> bool¶
Reset conversation limits.
get_conversation_state(conversation_id: str) -> Optional[Dict[str, Any]]¶
Get conversation state information.
state = agent.get_conversation_state("user1_session")
if state:
    print(f"Interactions: {state['interaction_count']}")
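termination_markers end a conversation when one of them appears in an LLM reply. The agent performs this check internally; a minimal sketch of the idea (the library's actual matching rules may differ):

```python
def is_terminated(reply: str, markers: list[str]) -> bool:
    # Simple substring check against each configured marker.
    return any(marker in reply for marker in markers)
```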
Example¶
from spade_llm import LLMAgent, LLMProvider
provider = LLMProvider.create_openai(api_key="key", model="gpt-4o-mini")
agent = LLMAgent(
    jid="assistant@example.com",
    password="password",
    provider=provider,
    system_prompt="You are a helpful assistant",
    max_interactions_per_conversation=10
)
await agent.start()
ChatAgent¶
Interactive chat agent for human-computer communication.
Constructor¶
ChatAgent(
    jid: str,
    password: str,
    target_agent_jid: str,
    display_callback: Optional[Callable[[str, str], None]] = None,
    on_message_sent: Optional[Callable[[str, str], None]] = None,
    on_message_received: Optional[Callable[[str, str], None]] = None,
    verbose: bool = False,
    verify_security: bool = False
)
Parameters:
target_agent_jid - JID of the agent to communicate with
display_callback - Custom response display function
on_message_sent - Callback after sending a message
on_message_received - Callback after receiving a response
verbose - Enable detailed logging
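The on_message_sent and on_message_received hooks are convenient for monitoring traffic. Their argument order is assumed here to mirror display_callback (message first, peer JID second); verify against the installed version:

```python
traffic_log = []

def log_sent(message: str, recipient: str) -> None:
    # Record outgoing messages; the argument order is an assumption.
    traffic_log.append(("sent", recipient, message))

def log_received(message: str, sender: str) -> None:
    # Record incoming responses.
    traffic_log.append(("received", sender, message))
```

These would be passed as on_message_sent=log_sent and on_message_received=log_received when constructing the ChatAgent.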
Methods¶
send_message(message: str)¶
Send message to target agent.
send_message_async(message: str)¶
Send message asynchronously.
wait_for_response(timeout: float = 10.0) -> bool¶
Wait for response from target agent.
run_interactive()¶
Start interactive chat session.
Example¶
from spade_llm import ChatAgent
def display_response(message: str, sender: str):
    print(f"Response: {message}")

chat_agent = ChatAgent(
    jid="human@example.com",
    password="password",
    target_agent_jid="assistant@example.com",
    display_callback=display_response
)
await chat_agent.start()
await chat_agent.run_interactive() # Interactive chat
await chat_agent.stop()
Agent Lifecycle¶
Starting Agents¶
await agent.start()
Stopping Agents¶
await agent.stop()
Running with SPADE¶
import spade

async def main():
    agent = LLMAgent(...)
    await agent.start()
    # Agent runs until stopped
    await agent.stop()

if __name__ == "__main__":
    spade.run(main())
Error Handling¶
try:
    await agent.start()
except ConnectionError:
    print("Failed to connect to XMPP server")
except ValueError:
    print("Invalid configuration")
Best Practices¶
- Always call start() before using agents
- Use stop() for proper cleanup
- Handle connection errors gracefully
- Set appropriate conversation limits
- Use callbacks for monitoring
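"Proper cleanup" above usually means wrapping the agent's lifetime in try/finally so stop() runs even when an error occurs. A self-contained sketch of the pattern, using a stand-in class (FakeAgent replaces LLMAgent only so the snippet can run anywhere):

```python
import asyncio

class FakeAgent:
    """Stand-in for LLMAgent; only models start/stop state."""
    def __init__(self):
        self.running = False
    async def start(self):
        self.running = True
    async def stop(self):
        self.running = False

async def main():
    agent = FakeAgent()
    await agent.start()
    try:
        pass  # interact with the agent here
    finally:
        # Ensures the connection is released even if the body raises.
        await agent.stop()
    return agent

agent = asyncio.run(main())
```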