SPADE-LLM: SPADE with Large Language Models

Extension for SPADE that integrates Large Language Models into multi-agent systems.

Features

  • Multi-Provider Support: OpenAI, Ollama, LM Studio, vLLM
  • Tool System: Function calling with async execution (see the sketch after this list)
  • Memory System: Dual memory architecture for agent learning and conversation continuity
  • Context Management: Multi-conversation support with automatic cleanup
  • Message Routing: Conditional routing based on LLM responses
  • Guardrails System: Content filtering and safety controls for input/output
  • MCP Integration: Model Context Protocol server support
  • Production Ready: Comprehensive error handling and logging
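
For example, a tool pairs a JSON-schema description of its arguments with an async Python callable. The following is a minimal sketch, assuming LLMTool is importable from spade_llm with name/description/parameters/func arguments and that LLMAgent accepts a tools list; ollama_with_tools_example.py (listed under Examples below) shows the real API:

from spade_llm import LLMAgent, LLMProvider, LLMTool

async def get_weather(city: str) -> str:
    # Stand-in implementation; a real tool would call a weather API
    return f"It is sunny in {city}"

weather_tool = LLMTool(
    name="get_weather",
    description="Get the current weather for a city",
    parameters={  # JSON schema for the arguments the LLM may pass
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    func=get_weather,  # executed asynchronously when the LLM calls the tool
)

agent = LLMAgent(
    jid="assistant@example.com",
    password="password",
    provider=LLMProvider.create_openai(api_key="your-api-key", model="gpt-4o-mini"),
    tools=[weather_tool],  # assumed keyword; see the Tool System guide
)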

Architecture

graph LR
    A[LLMAgent] --> C[ContextManager]
    A --> D[LLMProvider]
    A --> E[LLMTool]
    A --> G[Guardrails]
    A --> M[Memory]
    D --> F[OpenAI/Ollama/etc]
    G --> H[Input/Output Filtering]
    E --> I[Human-in-the-Loop]
    E --> J[MCP]
    E --> P[CustomTool/LangchainTool]
    J --> K[STDIO]
    J --> L[HTTP Streaming]
    M --> N[Agent-based]
    M --> O[Agent-thread]
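
In code, each branch of the diagram is wired into the LLMAgent constructor. The sketch below illustrates the Guardrails branch only; the KeywordGuardrail class, its constructor arguments, and the input_guardrails keyword are assumed names standing in for the filtering API described in the Guardrails guide, not confirmed signatures:

from spade_llm import LLMAgent, LLMProvider
from spade_llm.guardrails import KeywordGuardrail  # assumed import path

provider = LLMProvider.create_openai(
    api_key="your-api-key",
    model="gpt-4o-mini"
)

# Reject incoming messages containing blocked keywords before the
# model sees them (the "Input/Output Filtering" node above).
profanity_filter = KeywordGuardrail(  # assumed class and arguments
    name="profanity_filter",
    blocked_keywords=["badword"],
    blocked_message="Message rejected by the content filter.",
)

agent = LLMAgent(
    jid="assistant@example.com",
    password="password",
    provider=provider,
    input_guardrails=[profanity_filter],  # assumed keyword argument
)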

Quick Start

import spade
from spade_llm import LLMAgent, LLMProvider

async def main():
    # Configure an LLM provider; here, the OpenAI API
    provider = LLMProvider.create_openai(
        api_key="your-api-key",
        model="gpt-4o-mini"
    )

    # Create an LLM-backed SPADE agent
    agent = LLMAgent(
        jid="assistant@example.com",
        password="password",
        provider=provider,
        system_prompt="You are a helpful assistant"
    )

    # Connect to the XMPP server and start handling messages
    await agent.start()

if __name__ == "__main__":
    spade.run(main())
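
Swapping providers leaves the agent code unchanged. A minimal sketch for a local Ollama model, assuming LLMProvider exposes a create_ollama factory mirroring create_openai; multi_provider_chat_example.py below demonstrates the providers actually supported:

# Point the same agent at a local model served by Ollama;
# create_ollama and its parameter names are assumptions here.
provider = LLMProvider.create_ollama(
    model="llama3.1:8b",
    base_url="http://localhost:11434"
)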

Documentation Structure

The documentation is organized into four sections: Getting Started, Core Guides, Reference, and Examples.

Examples

Explore the examples directory for complete working examples:

  • multi_provider_chat_example.py - Chat with different LLM providers
  • ollama_with_tools_example.py - Local models with tool calling
  • guardrails_example.py - Content filtering and safety controls
  • langchain_tools_example.py - LangChain tool integration
  • valencia_multiagent_trip_planner.py - Multi-agent workflow