# Tools System

Enable LLM agents to execute functions and interact with external services.
## Tool Calling Flow

```mermaid
flowchart TD
    A[User sends message] --> B{Does LLM need tools?}
    B -->|No| C[Generate direct response]
    B -->|Yes| D[Decide which tool to use]
    D --> E[Determine required arguments]
    E --> F[Send tool_calls in response]
    F --> G[System executes tool]
    G --> H[Add results to context]
    H --> I[Second LLM query with results]
    I --> J[Generate final response]
    C --> K[Send response to user]
    J --> K
```
## Overview

The Tools System empowers LLM agents to extend beyond conversation by executing real functions. This enables agents to:

- 🔧 Execute Python functions with dynamic parameters
- 🌐 Access external APIs and databases
- 📁 Process files and perform calculations
- 🔗 Integrate with third-party services
## How Tool Calling Works

When an LLM agent receives a message, it can either respond directly or decide to use tools. The process involves:

1. **Decision**: The LLM analyzes whether it needs external data or functionality
2. **Tool Selection**: It chooses the appropriate tool from the available options
3. **Parameter Generation**: The LLM determines what arguments the tool needs
4. **Execution**: The system runs the tool function asynchronously
5. **Context Integration**: Results are added back to the conversation
6. **Final Response**: The LLM processes the results and provides a complete answer
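The `tool_calls` step in the flow above follows the OpenAI-style function-calling format. As an illustrative sketch (field names follow the common schema; SPADE_LLM's internal representation may differ), a response requesting a weather lookup looks roughly like this:

```python
import json

# Illustrative OpenAI-style assistant message requesting a tool call.
# The "arguments" field arrives as a JSON-encoded string.
tool_call_message = {
    "role": "assistant",
    "content": None,
    "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": {
            "name": "get_weather",
            "arguments": '{"city": "Madrid"}',
        },
    }],
}

# The system decodes the arguments before invoking the registered function
args = json.loads(tool_call_message["tool_calls"][0]["function"]["arguments"])
print(args)  # {'city': 'Madrid'}
```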
## Basic Tool Definition

```python
from spade_llm import LLMTool

async def get_weather(city: str) -> str:
    """Get weather for a city."""
    return f"Weather in {city}: 22°C, sunny"

weather_tool = LLMTool(
    name="get_weather",
    description="Get current weather for a city",
    parameters={
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"}
        },
        "required": ["city"]
    },
    func=get_weather
)
```
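Because the wrapped function is an ordinary coroutine, it can be exercised on its own before registering it with an agent (a minimal check, independent of SPADE_LLM):

```python
import asyncio

async def get_weather(city: str) -> str:
    """Get weather for a city (stubbed data)."""
    return f"Weather in {city}: 22°C, sunny"

# Run the coroutine directly, outside any agent
print(asyncio.run(get_weather("Madrid")))  # Weather in Madrid: 22°C, sunny
```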
## Using Tools with Agents

```python
from spade_llm import LLMAgent, LLMProvider

agent = LLMAgent(
    jid="assistant@example.com",
    password="password",
    provider=provider,
    tools=[weather_tool]  # Register tools
)
```

When the LLM needs weather information, it will automatically detect the need and call the tool.
## Common Tool Categories

### 🌐 API Integration

Connect to external web services for real-time data.
```python
import aiohttp

async def web_search(query: str) -> str:
    """Search the web for information."""
    async with aiohttp.ClientSession() as session:
        # Pass the query via params so aiohttp URL-encodes it safely
        async with session.get(
            "https://api.duckduckgo.com/",
            params={"q": query, "format": "json"}
        ) as response:
            data = await response.json()
            return str(data)

search_tool = LLMTool(
    name="web_search",
    description="Search the web for current information",
    parameters={
        "type": "object",
        "properties": {
            "query": {"type": "string"}
        },
        "required": ["query"]
    },
    func=web_search
)
```
### 📁 File Operations

Read, write, and process files on the system.
```python
import aiofiles

async def read_file(filepath: str) -> str:
    """Read a text file."""
    try:
        async with aiofiles.open(filepath, 'r') as f:
            content = await f.read()
            return f"File content:\n{content}"
    except Exception as e:
        return f"Error reading file: {e}"

file_tool = LLMTool(
    name="read_file",
    description="Read contents of a text file",
    parameters={
        "type": "object",
        "properties": {
            "filepath": {"type": "string"}
        },
        "required": ["filepath"]
    },
    func=read_file
)
```
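A tool like `read_file` should not accept arbitrary paths from the model. One common guard is to resolve the path and check that it stays inside a sandbox directory; a minimal sketch (the `ALLOWED_DIR` path and helper name are our own, not a SPADE_LLM API):

```python
import os

ALLOWED_DIR = "/srv/agent_files"  # hypothetical sandbox directory

def is_safe_path(filepath: str, base_dir: str = ALLOWED_DIR) -> bool:
    """Reject paths that escape the sandbox (e.g. via '..' or symlinks)."""
    base = os.path.realpath(base_dir)
    resolved = os.path.realpath(os.path.join(base, filepath))
    return resolved == base or resolved.startswith(base + os.sep)

print(is_safe_path("notes.txt"))      # True
print(is_safe_path("../etc/passwd"))  # False
```

Calling this check at the top of `read_file` and returning an error string on failure keeps the tool's error handling consistent with the example above.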
### 📊 Data Processing

Perform calculations and data analysis.
```python
import json

async def calculate_stats(numbers: list) -> str:
    """Calculate statistics for a list of numbers."""
    if not numbers:
        return "Error: No numbers provided"
    stats = {
        "count": len(numbers),
        "mean": sum(numbers) / len(numbers),
        "min": min(numbers),
        "max": max(numbers)
    }
    return json.dumps(stats, indent=2)

stats_tool = LLMTool(
    name="calculate_stats",
    description="Calculate basic statistics",
    parameters={
        "type": "object",
        "properties": {
            "numbers": {
                "type": "array",
                "items": {"type": "number"}
            }
        },
        "required": ["numbers"]
    },
    func=calculate_stats
)
```
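Since the tool returns a JSON string, the LLM (or a caller in a quick test) can parse the result back into structured data. A small standalone check of the function above:

```python
import asyncio
import json

async def calculate_stats(numbers: list) -> str:
    """Calculate statistics for a list of numbers (JSON string result)."""
    if not numbers:
        return "Error: No numbers provided"
    stats = {
        "count": len(numbers),
        "mean": sum(numbers) / len(numbers),
        "min": min(numbers),
        "max": max(numbers)
    }
    return json.dumps(stats, indent=2)

result = json.loads(asyncio.run(calculate_stats([2, 4, 6])))
print(result["mean"])  # 4.0
```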
### 🧠 Human Expert Consultation

Connect LLM agents with human experts for real-time guidance and decision support.
```python
from spade_llm.tools import HumanInTheLoopTool

# Create human expert consultation tool
human_expert = HumanInTheLoopTool(
    human_expert_jid="expert@company.com",
    timeout=300.0,  # 5 minutes
    name="ask_human_expert",
    description="""Ask a human expert for help when you need:
    - Current information not in your training data
    - Human judgment or subjective opinions
    - Company-specific policies or procedures
    - Clarification on ambiguous requests"""
)

# Use with agent
agent = LLMAgent(
    jid="assistant@company.com",
    password="password",
    provider=provider,
    tools=[human_expert],
    system_prompt="""You are an AI assistant with access to human experts.
    When you encounter questions requiring human judgment, current information,
    or company-specific knowledge, consult the human expert."""
)
```
Key Features:
- ⚡ Real-time consultation via XMPP messaging
- 🌐 Web interface for human experts to respond
- 🔄 Message correlation using XMPP thread IDs
- ⏱️ Configurable timeouts with graceful error handling
- 🔒 Template-based filtering prevents message conflicts
When the LLM uses this tool:

1. The question is sent to the human expert via XMPP
2. The expert receives a notification in the web interface
3. The human provides a response through the browser
4. The response returns to the LLM via XMPP
5. The agent continues with a human-informed answer
Example consultation flow:

```
User: "What's our company policy on remote work?"
Agent: [Uses ask_human_expert tool]
  → Human Expert: "We allow 3 days remote per week with manager approval"
Agent: "According to our HR expert, our policy allows up to 3 days
        remote work per week with manager approval."
```
**Setup Required**: Human-in-the-loop requires an XMPP server with WebSocket support and the web interface. See the working example in `examples/human_in_the_loop_example.py` for complete setup instructions.
## LangChain Integration

Seamlessly use existing LangChain tools with SPADE_LLM:
```python
from langchain_community.tools import DuckDuckGoSearchRun
from spade_llm.tools import LangChainToolAdapter

# Create LangChain tool
search_lc = DuckDuckGoSearchRun()

# Adapt for SPADE_LLM
search_tool = LangChainToolAdapter(search_lc)

# Use with agent
agent = LLMAgent(
    jid="assistant@example.com",
    password="password",
    provider=provider,
    tools=[search_tool]
)
```
## ✅ Best Practices

- **Single Purpose**: Each tool should do one thing well
- **Clear Naming**: Use descriptive tool names that explain functionality
- **Rich Descriptions**: Help the LLM understand when and how to use tools
- **Input Validation**: Always validate and sanitize inputs for security
- **Meaningful Errors**: Return clear error messages for troubleshooting
- **Async Functions**: Use async/await for non-blocking execution
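Several of these practices can be combined in a small wrapper that turns unexpected exceptions into readable error strings for the LLM. A minimal sketch (the `safe_tool` decorator is our own illustrative helper, not part of SPADE_LLM):

```python
import asyncio
import functools

def safe_tool(func):
    """Wrap an async tool so failures become clear error messages
    instead of unhandled exceptions (illustrative helper)."""
    @functools.wraps(func)
    async def wrapper(*args, **kwargs):
        try:
            return await func(*args, **kwargs)
        except Exception as e:
            # Meaningful error: name the tool and describe the failure
            return f"Error in {func.__name__}: {e}"
    return wrapper

@safe_tool
async def divide(a: float, b: float) -> str:
    return str(a / b)

print(asyncio.run(divide(1, 0)))  # Error in divide: division by zero
```

The wrapped coroutine can then be passed as the `func` of an `LLMTool` just like the unwrapped versions above.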
## Next Steps

- **MCP Integration** - Connect to external MCP servers
- **Architecture** - Understanding system design
- **Providers** - LLM provider configuration