
Overview

ChatBot is the standard agent for building text-based conversational experiences in MeshAgent. The ChatBot class builds on SingleRoomAgent. It joins a MeshAgent Room, communicates with participants via text, streams responses from your chosen LLM, and persists conversation history so users can pick up where they left off. The ChatBot automatically handles conversation flow, thread storage, auto-greeting, and tool integration, making it the fastest path to building interactive agents like customer support bots, research assistants, and more. ChatBot supports both static toolkits (always available) and dynamic toolkits (built per message), so you can add or remove tools on demand as a conversation evolves.

Two ways to build a ChatBot

  1. CLI: Run production-ready agents with a single command. Configure tools, rules, and behavior using command-line flags. Ideal for most use cases.
  2. SDK: Extend the base ChatBot class with custom code when you need deeper integrations or specialized behaviors. Best for full control or more complex logic.
Both approaches deploy the same way and can operate together in the same Rooms. We recommend starting with the CLI (it’s fastest and covers most scenarios), then moving to the SDK when you need further customization.

In this guide you will learn

  • When to use ChatBot
  • How to run and deploy a ChatBot with the MeshAgent CLI
  • How to build and deploy a ChatBot with the MeshAgent SDK
  • How the ChatBot works, including constructor parameters, lifecycle, conversation flow, important behaviors, hooks, and methods

When to use ChatBot

Use the ChatBot when you need an agent that:
  • Responds conversationally to participants via text messages and is seamlessly connected to the Room
  • Maintains conversation history across sessions (threads, documents, or Room state)
  • Can use tools to get work done (either built-in tools or custom tools you define)
  • Supports multiple participants in the same conversation (1:1 or group chats)
  • Evolves during a conversation and supports dynamic toolkits, UI actions, or other agent-to-room interactions.
  • Lets you focus on prompts, tools, and business logic for your agent instead of plumbing for storage, messaging, and scale
Don’t use ChatBot if:
  • Your agent needs voice/speech interaction; use VoiceBot instead
  • Your agent runs background tasks that don’t require conversing with users; use Worker or TaskRunner instead
  • Your agent processes email; use MailBot instead

Run and deploy a ChatBot with the CLI

Step 1: Run a ChatBot from the CLI

Let’s run a ChatBot from the CLI with the web search tool, access to room storage so it can read and write files in the Room, a custom rule, and room-rules which can be modified per conversation turn.
# Authenticate to MeshAgent if not already signed in
meshagent setup

# Call a chatbot into your room
meshagent chatbot join --room quickstart --agent-name chatbot --web-search --require-storage --room-rules "agents/chatbot/rules.txt" --rule "You are a helpful assistant"
When you add the --room-rules "agents/chatbot/rules.txt" flag and supply a file path for the rules, the file will be created if it does not already exist. The path is relative to the Room’s storage.
Tip: Use the --help flag to see all available tools and options: meshagent chatbot join --help. The CLI ChatBot has built-in tools you can turn on and off for things like image generation, web search, local shell, storage, and MCP.

Step 2: Interact with the agent in MeshAgent Studio

  1. Go to MeshAgent Studio and log in
  2. Enter your room quickstart
  3. Select the agent chatbot and begin chatting!
If you’ve turned on any tools or added the --room-rules flag to your agent, you can toggle tools on and off and modify the agent’s rules.txt file to refine the agent’s behavior. Changes to the tools and rules.txt are applied per message.

Step 3: Package and deploy the agent

Once your agent works locally, you’ll need to package and deploy it as a project or room service to make it always available. You can do this using the CLI, by creating a YAML file, or from MeshAgent Studio. Both options below deploy the same ChatBot; choose based on your workflow:
  • Option 1 (meshagent chatbot deploy): One command that deploys immediately (fastest/easiest approach)
  • Option 2 (meshagent chatbot spec + meshagent service create): Generates a YAML file you can review or further customize before deploying
Option 1: Deploy directly
Use the CLI to automatically deploy the ChatBot to your room.
meshagent chatbot deploy --service-name chatbot --room quickstart --agent-name chatbot --web-search --require-storage --room-rules "agents/chatbot/rules.txt" --rule "You are a helpful assistant"
Option 2: Generate a YAML spec
Create a meshagent.yaml file that defines how your service should run, then deploy the agent to your room. The service spec can be generated from the CLI by running:
meshagent chatbot spec --service-name chatbot --agent-name chatbot --web-search --require-storage --room-rules "agents/chatbot/rules.txt" --rule "You are a helpful assistant"
Next, copy the output into a meshagent.yaml file:
kind: Service # switch to service Template if installing from link for Powerboards
version: v1
metadata:
  name: chatbot
  description: "A simple chatbot that responds to users"
  annotations:
    meshagent.service.id: "meshagent.chatbot"
agents:
  - name: chatbot
    description: "A conversational agent with tools"
    annotations:
      meshagent.agent.type: "ChatBot"
ports:
- num: "*"                    # automatically assign an available MESHAGENT_PORT for the agent to run on 
  type: http
  liveness: "/"               # ensure the service is alive before connecting to the room
  endpoints:
  - path: /agent              # service path to call and run the agent on
    meshagent:
      identity: chatbot       # name of the agent as it shows up in the Room
container:
  image: "us-central1-docker.pkg.dev/meshagent-public/images/cli:{SERVER_VERSION}-esgz"
  command: "/usr/bin/meshagent chatbot service --agent-name=chatbot --image-generation=gpt-image-1 --require-storage --web-search --room-rules='agents/chatbot/rules.txt' --rule='You are a helpful assistant'"
  storage:
    room:
      - path: /data             # mount room storage path for the agent to write files to
        read_only: false        # allow write access 
Then, deploy it to your Room.
# Deploy as a room service (specific room only)
meshagent service create --file meshagent.yaml --room quickstart
The ChatBot is now deployed to the quickstart room and will always be available inside the room to chat with. You can interact with the agent directly from the Studio or from Powerboards. With Powerboards you can easily share your agents with others from an AI-native application. MeshAgent Studio, Powerboards, and the Flutter SDK include built-in chat UI components (such as ChatThreadLoader) that understand ChatBot threads, so any conversation started in the Studio or Powerboards can be displayed properly or continued inside your own application without extra integration work.

Build and deploy a ChatBot with the SDK

Step 1: Create a ChatBot agent

This example shows a ChatBot with a custom rule to guide the agent’s behavior and access to built-in MeshAgent Tools for web search, document authoring, and storage. For an agent this simple the CLI ChatBot would be sufficient; the Python SDK code here demonstrates how to get similar functionality to the CLI.
To run the ChatBot we’ll use the MeshAgent ServiceHost. The ServiceHost is a lightweight HTTP server that lets you register one or more tools or agents, each on its own path (e.g., /agent). The host automatically exposes each path as a webhook. When a room makes a call to that path, ServiceHost handles the handshake, connects the agent to the room, and forwards requests and responses between your code and the MeshAgent infrastructure.
import asyncio
from meshagent.api import RequiredSchema
from meshagent.agents.chat import ChatBot
from meshagent.openai import OpenAIResponsesAdapter
from meshagent.openai.tools.responses_adapter import WebSearchToolkitBuilder
from meshagent.api.services import ServiceHost
from meshagent.tools.storage import StorageToolkit
from meshagent.tools.document_tools import (
    DocumentAuthoringToolkit,
    DocumentTypeAuthoringToolkit,
)
from meshagent.agents.schemas.document import document_schema

from meshagent.otel import otel_config

service = ServiceHost()  # port defaults to an available port if not assigned

otel_config(
    service_name="my-service"
)  # automatically enables telemetry data collection for your agents and tools


@service.path(path="/agent", identity="mesh-tools-chatbot")
class SimpleChatbot(ChatBot):
    def __init__(self):
        super().__init__(
            title="mesh-tools-chatbot",
            description="a simple chatbot",
            rules=[
                "Always respond to the user and include a fun fact at the end of your response.",
                "use the ask_user tool to pick the name of a document, pick a document name if the tool is not available.",
                "the document names MUST have the extension .document, automatically add the extension if it is not provided",
                "you MUST always write content to a document",
                "first open a document, then use tools to write the document content before closing the document",
                "before closing the document, ask the user if they would like any additional modifications to be made to the document, and if so, make them. continue to ask the user until they are happy with the contents. you are not finished until the user is happy.",
                "blob URLs MUST not be added to documents, they must be saved as files first",
            ],
            llm_adapter=OpenAIResponsesAdapter(),
            toolkits=[
                StorageToolkit(),
                DocumentAuthoringToolkit(),
                DocumentTypeAuthoringToolkit(
                    schema=document_schema, document_type="document"
                ),
            ],
        )
    
    def get_toolkit_builders(self):
        builders = super().get_toolkit_builders()
        builders.append(WebSearchToolkitBuilder())  # add web search
        return builders

asyncio.run(service.run())

Step 2: Call the agent into a room

Run the ChatBot locally and connect it to a Room:
meshagent setup # authenticate to MeshAgent
meshagent service run "main.py" --room=quickstart
This command will start the chatbot on an available port at the path /agent. If you are running multiple agents or tools, you can use the same ServiceHost and set a different path for each one. The service run command automatically detects the different agent paths and identities (this is the recommended way to test your agents and tools). Alternatively, you can run the service locally by setting the port parameter on ServiceHost and running python main.py in one terminal tab; from another tab, call individual agents into the room using the meshagent call agent command. For example:
meshagent setup # authenticate to meshagent
meshagent call agent --url=http://localhost:8081/agent --room=quickstart --participant-name=chatbot
In either case, the agent joins the room as participant chatbot. Once the agent joins the room, you can converse with it in MeshAgent Studio.

Step 3: Interact with the agent in MeshAgent Studio

  1. Go to MeshAgent Studio and log in
  2. Enter your room quickstart
  3. Select the agent chatbot and begin chatting!
If you want to modify and restart the agent, press Ctrl+C in the terminal to stop it, then re-run the meshagent service run command.
Note: Building an agent will likely take multiple rounds of iterating on the system prompt and tools before it’s ready for deployment.

Step 4: Package and deploy the agent

To deploy your SDK ChatBot permanently, you’ll package your code with a meshagent.yaml file that defines the service configuration and a container image that MeshAgent can run. For full details on the service spec and deployment flow, see Packaging Services and Deploying Services. MeshAgent supports two deployment patterns for containers:
  1. Runtime image + code mount (recommended): Use a pre-built MeshAgent runtime image (like python-sdk-slim) that contains Python and all MeshAgent dependencies. Mount your lightweight code-only image on top. This keeps your code image tiny (a few kilobytes), eliminates dependency installation time, and allows your service to start quickly.
  2. Single Image: Bundle your code and all dependencies into one image. This is good when you need to install additional libraries, but can result in larger images and slower pulls. If you build your own images we recommend optimizing them with eStargz.
This example demonstrates approach #1 with a code-only image. The default YAML points at the public python-docs-examples image so you can still run the documentation examples without building your own images. If you want to build and push your own code image, follow the steps below and update the image mount section of the meshagent.yaml file.

Prepare your project structure

This example organizes the agent code and configuration in the same folder, making each agent self-contained:
your-project/
├── Dockerfile                    # Shared by all samples
├── mesh_tools_chatbot/
│   ├── mesh_tools_chatbot.py
│   └── meshagent.yaml            # Config specific to this sample
└── another_sample/               # Other samples follow same pattern
    ├── another_sample.py
    └── meshagent.yaml
Note: If you’re building a single agent, you only need the mesh_tools_chatbot/ folder. The structure shown supports multiple samples sharing one Dockerfile.
Step 4a: Build a Docker container

Create a scratch Dockerfile and copy the files you want to run. This creates a minimal image containing only your code files.
FROM scratch

COPY . /
Build and push the image with docker buildx:
docker buildx build . \
  -t "<REGISTRY>/<NAMESPACE>/<IMAGE_NAME>:<TAG>" \
  --platform linux/amd64 \
  --push
Note: Building from the project root copies your entire project structure into the image. For a single agent, this is fine - your image will just contain one folder. For multi-agent projects, all agents will be in one image, but each can deploy independently using its own meshagent.yaml.
Step 4b: Package the agent

Define the service configuration in a meshagent.yaml file. This service will have a container section that references:
  • Runtime image: The MeshAgent Python SDK image with all dependencies
  • Code mount: Your code-only image mounted at /src
  • Command path: Points to your sample’s specific location
kind: Service
version: v1
metadata:
  name: mesh-tools-chatbot
  description: "A chatbot with tools that can interact with users"
  annotations:
    meshagent.service.id: "meshtools.chatbot"
agents:
  - name: mesh-tools-chatbot
    description: "A conversational agent with web search, storage, and document authoring tools"
    annotations:
      meshagent.agent.type: "ChatBot"
ports:
- num: "*"
  endpoints:
  - path: /agent
    meshagent:
      identity: mesh-tools-chatbot
container:
  image: "us-central1-docker.pkg.dev/meshagent-public/images/python-sdk:{SERVER_VERSION}-esgz"
  command: python /src/mesh_tools_chatbot/mesh_tools_chatbot.py
  storage:
    images:
      # Replace this image tag with your own code-only image if you build one.
      - image: "us-central1-docker.pkg.dev/meshagent-public/images/python-docs-examples:{SERVER_VERSION}"
        path: /src
        read_only: true
How the paths work:
  • Your code image contains /mesh_tools_chatbot/mesh_tools_chatbot.py
  • It’s mounted at /src in the runtime container
  • The command runs python /src/mesh_tools_chatbot/mesh_tools_chatbot.py
Note: The default YAML in the docs uses us-central1-docker.pkg.dev/meshagent-public/images/python-docs-examples so you can test this example immediately without building your own image first. Replace this with your actual image tag when deploying your own code.
Step 4c: Deploy the agent

Next, from the CLI, run the following command in the directory that contains your meshagent.yaml file:
meshagent service create --file "meshagent.yaml" --room=quickstart
The ChatBot is now deployed to the quickstart room and will always be available inside the room to chat with. You can interact with the agent directly from the Studio or from Powerboards.

How ChatBot Works

Constructor Parameters

ChatBot accepts everything that SingleRoomAgent does (title, description, requires, labels). The name constructor argument is deprecated; agent identity comes from its participant token.
  • name (str | None): Deprecated. Agent identity comes from the participant token; if provided, it is only used to default title.
  • title (str | None): Human-friendly name shown in UX. If omitted and you set name, it defaults to that value.
  • description (str | None): Optional short description.
  • requires (list[Requirement] | None): Dependencies such as RequiredSchema or RequiredToolkit. Automatically ensures the "thread" schema exists.
  • llm_adapter (LLMAdapter): Required. Adapter that talks to your model provider (for example OpenAIResponsesAdapter). It supplies chat contexts and translates responses into MeshAgent events.
  • tool_adapter (ToolResponseAdapter | None): Optional adapter for translating tool call outputs.
  • toolkits (list[Toolkit] | None): Extra toolkits that are always available to the ChatBot beyond what requires installs. Defaults to [].
  • rules (list[str] | None): List of system or behavior rules appended to each chat context.
  • auto_greet_message (str | None): Optional greeting message sent automatically by the agent when a new thread starts.
  • empty_state_title (str | None): Title text shown in the Studio chat pane before the first user message. Defaults to "How can I help you?".
  • labels (list[str] | None): Optional tags for discovery and filtering.
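As a quick reference, here is a minimal sketch that exercises several of these parameters. It reuses the adapter and toolkit imports from the SDK example above; the title, rules, and labels shown are illustrative placeholders, not values the SDK requires.
from meshagent.agents.chat import ChatBot
from meshagent.openai import OpenAIResponsesAdapter
from meshagent.tools.storage import StorageToolkit

# Illustrative values only; swap in your own adapter, toolkits, and rules.
support_bot = ChatBot(
    title="support-bot",
    description="Answers questions about the product",
    llm_adapter=OpenAIResponsesAdapter(),  # required: talks to your model provider
    toolkits=[StorageToolkit()],           # always-on toolkits beyond what `requires` installs
    rules=["Be concise.", "Ask a clarifying question when a request is ambiguous."],
    auto_greet_message="Hi! How can I help you today?",
    empty_state_title="Ask me anything",
    labels=["support"],
)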

Lifecycle Overview

ChatBot inherits the lifecycle from SingleRoomAgent but extends it to manage ongoing conversations and message threads.
  • await start(room: RoomClient): Connects to the room, installs requirements, and registers message handlers. Spawns per-thread tasks to handle participant messages, LLM responses, and tool invocations.
  • await stop(): Cancels active chat threads and clears cached state before disconnecting.
  • room property: Accesses the active RoomClient just like in SingleRoomAgent.
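In most deployments ServiceHost drives these calls for you, but the sequence can be sketched by hand. The snippet below is a minimal sketch that assumes you already hold a connected RoomClient and reuses the SimpleChatbot class from the SDK example above.
# Sketch only: ServiceHost normally performs these steps when a room calls your agent's path.
async def run_bot(room):  # `room` is assumed to be an already-connected RoomClient
    bot = SimpleChatbot()

    await bot.start(room=room)   # connect, install requirements, register message handlers
    assert bot.room is room      # the active RoomClient is exposed via the `room` property

    # ... the bot now handles participant messages until stopped ...

    await bot.stop()             # cancel active chat threads and clear cached state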

Conversation Flow

When a participant sends a message:
  1. The ChatBot receives the message via room.messaging.on("message").
  2. It identifies or spawns a chat thread corresponding to that conversation path.
  3. A chat context (AgentChatContext) is initialized using the llm_adapter.
  4. The message history is synchronized with the room’s "thread" document.
  5. ChatBot resolves this turn’s toolkits: it collects ToolkitBuilders from get_toolkit_builders(...), merges per-turn ToolkitConfigs (from MeshAgent Studio/CLI/App UI), calls make_tools(...) to create any dynamic toolkits, and combines the result with any static toolkits.
  6. The LLM is invoked with the prepared chat context and the resolved toolkits; tool calls are routed back into those toolkits, tokens are streamed, and the results are appended to the thread document.
All of this happens automatically once your bot is started.

Key Behaviors and Hooks

  • Thread management: Every conversation lives inside a MeshDocument thread. The ChatBot opens the thread as soon as a user sends a message and keeps it in sync with the document store.
  • Context building: init_chat_context() asks the LLM adapter for a fresh context and appends any rules you provided. Override this if you need to preload the context with extra data (see the sketch after this list).
  • Static tool resolution: get_thread_toolkits() gathers toolkits declared in requires, and any always-on toolkits passed through toolkits.
  • Dynamic tool providers: get_toolkit_builders() lists the ToolkitBuilder factories the ChatBot is willing to spin up on demand (MCP, storage uploads, local shell, image generation, etc.). The chat UI calls this hook first to learn which tool toggles to display, and later sends back matching configs so the ChatBot can hydrate those toolkits with make_tools().
  • Message-specific toolkits: When the user sends a message with a tools payload, make_tools() combines the selected configs with the provider list to hydrate just the toolkits needed for that turn before the LLM is invoked. (For example, in MeshAgent Studio, when the user selects which tools to add to an agent, these tools are added to the tools payload so the agent can use them.)
  • Thread helpers: Use open_thread() and close_thread() to manage ThreadAdapter instances, and call ThreadAdapter.append_messages(...) to replay stored messages into an existing chat context (for example, when resuming a conversation).
  • LLM customization: Override prepare_llm_context() if you need to mutate the context right before calling the LLM (for example to provide additional background context for the agent outside of the system prompt).
  • Participant utilities: get_online_participants() returns the participants included in the thread so you can tailor responses or access their attributes.
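Several of these hooks are designed to be combined in a subclass. The sketch below extends the SimpleChatbot class from the SDK example above. Note that context.append_rules(...) is a hypothetical helper used only to show where you would preload extra data; substitute whatever your AgentChatContext version actually exposes.
from meshagent.openai.tools.responses_adapter import WebSearchToolkitBuilder

class ProjectChatbot(SimpleChatbot):
    def get_toolkit_builders(self):
        # Advertise an extra dynamic toolkit that the chat UI can toggle per message.
        builders = super().get_toolkit_builders()
        builders.append(WebSearchToolkitBuilder())
        return builders

    async def init_chat_context(self):
        # Start from the default context (fresh adapter context plus the rules
        # passed to the constructor), then preload it with extra data.
        context = await super().init_chat_context()
        # Hypothetical helper; replace with your AgentChatContext's actual API.
        context.append_rules(["Today's focus project is 'quickstart'."])
        return context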

Key Methods

  • async def init_chat_context(): Creates a new chat context using the configured llm_adapter and applies rules.
  • async def get_thread_toolkits(thread_context, participant): Resolves all always-on toolkits for a participant, including built-in reasoning tools.
  • def get_toolkit_builders(): Returns the ToolkitBuilder list the UI can offer for message-level tool selection.
  • async def open_thread(path) / async def close_thread(path): Opens or closes a thread adapter that persists the thread document.
  • ThreadAdapter.append_messages(thread, chat_context): Loads historical messages into the current chat context from the thread document.
  • async def prepare_llm_context(thread_context): Hook for modifying the chat context before invoking the LLM. Override as needed.

Built-in Toolkits and Behaviors

ChatBot automatically provides and manages several toolkits and utilities:
  • Reasoning Toolkit: Allows structured “chain-of-thought” reasoning within the chat context.
  • Built-in Tool Support (OpenAI-style): May include web_search, local_shell, or image_gen, depending on the LLM adapter.
  • UI Toolkit Integration: Supports sending messages, showing toasts, or interacting with user interfaces via the “ui” toolkit.
These can be extended or replaced with your own toolkit factories.

Multi-user threads and reply logic

ChatBot is multi-user aware out of the box. Conversation state is stored in a thread document (a room-synced XML-like structure) with two key sections:
  • <members>: participant names for this thread. The client UI (e.g., MeshAgent Studio, Powerboards, or your app) writes this list; the ChatBot only reads it.
  • <messages>: conversation history, attachments, and tool output. The ChatBot appends messages here as it responds.
When a message arrives:
  1. The UI opens a thread path (via room.sync.open(path, create=true)) and writes the <members> list for that conversation.
  2. ChatBot opens the same document and syncs changes.
  3. ChatBot cross-references <members> with the current online roster from room.messaging.get_participants() to see who is actually present.
  4. If more than one human is in the thread, ChatBot runs the should_reply function to determine whether the assistant is the intended next speaker. If not, it stays quiet. With a single human, the ChatBot always replies.
Private vs group chats:
  • If the thread has one human in <members>, the chatbot treats it like a 1:1 and replies.
  • If the thread has more than one human in <members>, it automatically switches to multiplayer mode and uses should_reply to decide whether to speak.
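If the default group-chat behavior isn’t what you want, the reply decision is the natural place to hook in. The following is a hypothetical sketch: this guide only names should_reply, so the exact hook name, signature, and return type may differ in your SDK version; check the ChatBot source before relying on it.
class AlwaysOnChatbot(SimpleChatbot):
    # Hypothetical override: assumes should_reply receives the thread context and
    # returns True when the assistant should answer the latest message.
    async def should_reply(self, thread_context) -> bool:
        # Answer every message, even when several humans share the thread
        # (e.g., a meeting-notes assistant that responds whenever addressed or not).
        return True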

Next Steps

ChatBot builds directly on SingleRoomAgent, inheriting its lifecycle management, requirement installation, and toolkit resolution. While SingleRoomAgent focuses on room connection and environment setup, ChatBot handles conversation orchestration, managing threads, participants, messages, and reasoning. To continue learning about MeshAgent agents, check out:
  • Adapters: Understand how LLM adapters and tool response adapters plug into the chat loop.
  • VoiceBot: Adds streaming audio input/output for speech interactions.
  • Worker and TaskRunner: Perform background or task-based actions.
  • MailBot: Interact with an agent via email.
To learn more about deploying agents with MeshAgent, see Packaging Services and Deploying Services.