While LLMAdapter talks to the model, a ToolResponseAdapter decides how to surface tool outputs to the ongoing conversation and exposes a plain-text view when the host needs one. Every tool invocation returns a Response (TextResponse, JsonResponse, FileResponse, LinkResponse, etc.). The adapter translates that object into the concrete payloads that should be appended to the chat context and, optionally, into a readable string.

Core responsibilities

  • Chat context updates: Return message objects from create_messages that the LLM adapter will append to the conversation. The structure is provider-specific (for OpenAI Responses the payloads are {"type": "function_call_output", ...} objects rather than assistant-role messages); an example payload is shown after this list.
  • Plain-text representation: Provide a best-effort human-readable string via to_plain_text. Callers decide if and where to display it.
  • Attachment handling: Convert FileResponse, LinkResponse, or other rich responses into whatever metadata the LLM provider expects (e.g., base64-encoded blobs for OpenAI Responses).
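
For illustration, here is roughly what the two views of one tool result could look like with an OpenAI Responses-style adapter. The call_id and output values are hypothetical; only the "function_call_output" shape comes from the description above.

Python
# Illustrative shapes only: the item create_messages might return for OpenAI
# Responses, and the string to_plain_text might produce for the same response.
tool_output_item = {
    "type": "function_call_output",
    "call_id": "call_abc123",  # hypothetical id echoed from the tool call
    "output": '{"temperature": 21, "unit": "C"}',
}
plain_text_view = '{"temperature": 21, "unit": "C"}'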

Base interface

Python
from abc import ABC, abstractmethod
from typing import Any

# AgentChatContext, RoomClient, and Response are imported from the framework.


class ToolResponseAdapter(ABC):
    @abstractmethod
    async def to_plain_text(self, *, room: RoomClient, response: Response) -> str:
        """Render the tool response as a single human-readable string."""
        ...

    @abstractmethod
    async def create_messages(
        self,
        *,
        context: AgentChatContext,
        tool_call: Any,
        room: RoomClient,
        response: Response,
    ) -> list:
        """Build the provider-specific payloads to append to the chat context."""
        ...
  • to_plain_text – convert the response into a single string (used by loggers or providers that expect plain text).
  • create_messages – return the list of messages to append to the chat context. A minimal implementation is sketched below.
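
As a minimal sketch, the adapter below reuses to_plain_text as the serialization for the chat payload and emits an OpenAI Responses-style function_call_output item. The tool_call.id attribute is an assumption about the framework's tool-call object, and str() stands in for real per-type serialization.

Python
class PlainTextToolResponseAdapter(ToolResponseAdapter):
    async def to_plain_text(self, *, room: RoomClient, response: Response) -> str:
        # A real adapter would branch on TextResponse, JsonResponse,
        # FileResponse, etc.; str() is a placeholder for this sketch.
        return str(response)

    async def create_messages(
        self,
        *,
        context: AgentChatContext,
        tool_call: Any,
        room: RoomClient,
        response: Response,
    ) -> list:
        text = await self.to_plain_text(room=room, response=response)
        # OpenAI Responses expects a function_call_output item rather than
        # an assistant-role message.
        return [
            {
                "type": "function_call_output",
                "call_id": tool_call.id,  # assumed attribute name
                "output": text,
            }
        ]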

When to customize

  • Provider expectations: Different APIs have their own schema for tool results. Implement a custom adapter when you need to emit non-default roles, event types, or headers.
  • UI formatting: If you want to display richer summaries (markdown tables, shortened JSON, etc.), override to_plain_text or wrap the adapter so the host displays exactly what users need; a wrapper sketch follows this list.
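
For example, a host that wants shorter UI summaries could wrap whatever adapter it already uses, truncating the plain-text view while delegating message creation unchanged. This sketch relies only on the base interface above:

Python
class TruncatingAdapter(ToolResponseAdapter):
    """Wraps another adapter and shortens its plain-text view for display."""

    def __init__(self, inner: ToolResponseAdapter, max_chars: int = 500):
        self.inner = inner
        self.max_chars = max_chars

    async def to_plain_text(self, *, room: RoomClient, response: Response) -> str:
        text = await self.inner.to_plain_text(room=room, response=response)
        return text if len(text) <= self.max_chars else text[: self.max_chars] + "…"

    async def create_messages(
        self,
        *,
        context: AgentChatContext,
        tool_call: Any,
        room: RoomClient,
        response: Response,
    ) -> list:
        # The model still sees the full tool output; only the human-facing
        # summary is shortened.
        return await self.inner.create_messages(
            context=context, tool_call=tool_call, room=room, response=response
        )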

Usage in the chat loop

During each tool call the LLM adapter:
  1. Executes the requested tool via the toolkit.
  2. Passes the tool’s Response object to the configured ToolResponseAdapter.
  3. Appends the returned messages to the chat context so the LLM can see the outcome.
  4. Optionally uses to_plain_text if the hosting application wants to log or display a summary. Nothing is automatically pushed to the UI unless the caller does so. (A sketch of the full round trip follows.)
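
A rough sketch of those four steps as one round trip. Names like toolkit.execute, context.messages, and the logging call are hypothetical stand-ins, not confirmed framework API:

Python
# Hypothetical wiring of a single tool call; only the adapter calls are
# taken from the interface documented above.
async def handle_tool_call(context, tool_call, room, toolkit, adapter):
    response = await toolkit.execute(tool_call)        # 1. run the tool
    messages = await adapter.create_messages(          # 2. translate the result
        context=context, tool_call=tool_call, room=room, response=response
    )
    context.messages.extend(messages)                  # 3. append (attribute assumed)
    summary = await adapter.to_plain_text(room=room, response=response)
    print(summary)                                     # 4. optional: log or display
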
If no adapter is provided, agents default to the OpenAI-focused implementation described next.