A TaskRunner is the bridge that makes any existing AI Agent built in your framework of choice work seamlessly with MeshAgent. It is a base agent class designed with one core principle: bring your own logic. It builds on SingleRoomAgent to let you wrap any existing agent or functionality and run it inside a MeshAgent room. Think of TaskRunner as a thin execution adapter: it handles room connection, registration, validation, toolkit resolution, and request routing so you can focus on your logic. Whether that logic comes from Pydantic AI, LangChain, CrewAI, or plain Python, TaskRunner enables you to use existing code in MeshAgent easily. A TaskRunner defines three core things:
  • Input Schema: JSON schema describing what arguments your agent accepts
  • Output Schema (optional): JSON schema describing what your agent returns
  • ask() Method: Your custom logic that processes inputs and returns results
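For instance, a TaskRunner that greets a user by name might declare a schema pair like this (illustrative values, not taken from the SDK):

```python
# Illustrative input/output schema pair for a greeting task.
input_schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}},
    "required": ["name"],
}
output_schema = {
    "type": "object",
    "properties": {"result": {"type": "string"}},
    "required": ["result"],
}

# ask() would receive arguments matching input_schema and return a dict
# matching output_schema, e.g. {"name": "Ada"} -> {"result": "Hello, Ada!"}
print(input_schema["required"], output_schema["required"])
```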
Once you call the agent into a room, your TaskRunner becomes an invokable participant. You can discover and run it directly from MeshAgent Studio via the “Run Task…” room menu option, from the CLI with meshagent agents ask, or programmatically via room.agents.ask(...) in your code.

Why TaskRunner Exists

Many developers already have working agents written in their framework of choice. TaskRunner eliminates the need to rewrite them for MeshAgent — just wrap your logic in a TaskRunner subclass, define schemas for inputs and outputs, and your agent becomes callable, shareable, and manageable within a MeshAgent room. This design lets you integrate any AI framework, or even simple business logic, into a unified, multi-agent environment that supports shared toolkits, structured communication, and rich UI integrations.

When to Use It

  • You already have agent logic and want to run it in a room without rewriting it.
  • You need a typed boundary for requests/results (JSON Schema).
  • You want to expose agents as tools that other agents (like ChatBots) can use.
  • You’re building higher-level specializations (e.g., LLMTaskRunner, DynamicLLMTaskRunner).
If you need a conversational, message-based assistant, use ChatBot. For real-time speech, see VoiceBot.

Constructor Parameters

TaskRunner accepts everything from SingleRoomAgent (name, title, description, requires, labels) plus task-specific parameters.
  • supports_tools (bool | None): Whether callers can pass ad-hoc toolkits at runtime (default False).
  • input_schema (dict): Required. JSON Schema for request arguments; pass None to default to a “no-arguments” schema.
  • output_schema (dict | None): Optional JSON Schema for responses; if set, responses are validated.
  • toolkits (list[Toolkit] | None): Local toolkits always available to this TaskRunner (in addition to any requires).
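To illustrate the “no-arguments” default, a schema like the following would accept only an empty arguments object (the exact shape of the SDK's default is an assumption; the checker below is a toy, not a full JSON Schema validator):

```python
# A hypothetical "no-arguments" JSON Schema: an object with no allowed properties.
no_args_schema = {
    "type": "object",
    "properties": {},
    "additionalProperties": False,
    "required": [],
}

def conforms(args: dict, schema: dict) -> bool:
    """Tiny illustrative check (not a real JSON Schema validator)."""
    if schema.get("type") == "object" and not isinstance(args, dict):
        return False
    if not schema.get("additionalProperties", True):
        # Reject any property not declared in the schema.
        if set(args) - set(schema.get("properties", {})):
            return False
    return all(key in args for key in schema.get("required", []))

print(conforms({}, no_args_schema))        # True: empty args satisfy the schema
print(conforms({"x": 1}, no_args_schema))  # False: unexpected property
```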

Lifecycle Overview

TaskRunner inherits lifecycle hooks from SingleRoomAgent and adds task registration and routing.
  • await start(room: RoomClient): Registers the agent for agent.ask requests, installs requirements, and enables message routing.
  • await stop(): Unregisters the agent if the protocol is open, then disconnects cleanly.
  • room property: Access the active RoomClient as usual.

TaskRunner Flow

When a task is invoked (from Studio or code):
  1. The room delivers an agent.ask message to your TaskRunner.
  2. _ask(…) creates an AgentCallContext:
    • Builds/initializes a chat context via init_chat_context().
    • Resolves required/local toolkits (and any toolkits supplied by the caller when supports_tools=True).
    • Identifies the caller and (optionally) the on_behalf_of participant.
  3. Arguments are validated against input_schema.
  4. Your ask(context, arguments) method runs your logic and returns a dict.
  5. If output_schema is set, the response is validated before being returned to the caller.
  6. The result (or error) is sent back to the room; Studio and SDKs render/store it appropriately.
All the registration, routing, and validation plumbing is handled for you.
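The validate → run → validate steps above can be sketched in plain Python (a simplified stand-in for the real _ask pipeline; handle_request, run_logic, and the minimal required-keys check are all hypothetical names, not MeshAgent APIs):

```python
def validate(value, schema):
    # Minimal required-keys check standing in for full JSON Schema validation.
    for key in (schema or {}).get("required", []):
        if key not in value:
            raise ValueError(f"missing required field: {key}")

input_schema = {"type": "object", "required": ["name"]}
output_schema = {"type": "object", "required": ["result"]}

def run_logic(arguments):
    # Stand-in for your ask() implementation.
    return {"result": f"Hello, {arguments['name']}!"}

def handle_request(arguments):
    validate(arguments, input_schema)   # step 3: input validation
    response = run_logic(arguments)     # step 4: your ask() logic
    validate(response, output_schema)   # step 5: output validation
    return response                     # step 6: result returned to the room

print(handle_request({"name": "Mesh"}))  # {'result': 'Hello, Mesh!'}
```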

Key Behaviors and Hooks

  • Schema validation: validate_arguments() and validate_response() enforce input_schema and output_schema respectively.
  • Request handler: start() wires _ask(...) to the room’s protocol. You implement the public ask(...) with your business logic.
  • Context & toolkits: _ask(…) assembles an AgentCallContext with chat, caller, on_behalf_of, and a merged set of toolkits: local (toolkits), required (requires), and optional caller-supplied sets when supports_tools=True.
  • Discovery & invocation: _register() exposes your agent’s name, title, schemas, and capabilities to the room, so Studio and room.agents.ask can find and invoke it.
  • Agents as tools: With RunTaskTool and the “agents” toolkit factory, other agents (e.g., ChatBot) can call your TaskRunner as a tool.
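Conceptually, an agent exposed as a tool is just a named callable paired with its input schema. A simplified pure-Python sketch of that idea (make_tool and greeting_ask are illustrative names, not the RunTaskTool API):

```python
# Simplified sketch: packaging a task runner's ask() as a "tool" record
# that another agent could discover by name and invoke.

def make_tool(name, input_schema, ask):
    return {"name": name, "input_schema": input_schema, "call": ask}

def greeting_ask(arguments):
    return {"result": f"Hello, {arguments['name']}!"}

tool = make_tool("hello-taskrunner", {"required": ["name"]}, greeting_ask)

# A caller (e.g., a chatbot planning a tool call) invokes it by name:
print(tool["call"]({"name": "room"}))  # {'result': 'Hello, room!'}
```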

Key Methods

  • async def ask(context, arguments) -> dict: Implement your task logic; return a JSON-serializable dict.
  • async def validate_arguments(arguments): Validates inputs against input_schema.
  • async def validate_response(response): Validates outputs against output_schema when provided.
  • async def start(room) / async def stop(): Registers/unregisters the runner and its protocol handlers.
  • def to_json(): Serializes metadata (schemas, flags, labels) used during registration.

Minimal Example

This minimal example shows the structure of a TaskRunner. There’s no LLM, tool integration, or external logic yet — it simply demonstrates how the base class comes together. See the TaskRunner examples for more insights into plugging existing agents from other frameworks into a TaskRunner, using an LLM-based TaskRunner, and more.
import asyncio
from meshagent.api.services import ServiceHost
from meshagent.agents import TaskRunner, AgentCallContext

service = ServiceHost()


@service.path(path="/task", identity="hello-taskrunner")
class HelloTask(TaskRunner):
    def __init__(self):
        super().__init__(
            name="hello-taskrunner",
            title="Hello Task",
            description="Returns a friendly greeting.",
            input_schema=None,  # defaults to no-args schema
            output_schema={
                "type": "object",
                "properties": {"result": {"type": "string"}},
                "required": ["result"],
            },
        )

    async def ask(self, *, context: AgentCallContext, arguments: dict) -> dict:
        return {"result": "Hello from TaskRunner!"}


asyncio.run(service.run())

Next, from the CLI, run the following commands to call the TaskRunner into the room and invoke it with the ask method.
meshagent setup # authenticate to meshagent
meshagent service run "main.py" --room=task # run the task runner locally and call it into the room 
meshagent agents ask --room=task --agent="hello-taskrunner" --input={} # call the ask method on the agent
This example is intentionally minimal. In real projects, you’ll typically subclass TaskRunner to integrate an LLM, framework-based agent, or other custom business logic. See TaskRunner examples for more.

Next Steps

Explore examples of TaskRunners in action, and learn about the other MeshAgent agents:
  • ChatBot for conversation-based agents
  • VoiceBot for voice-based agents
  • Worker for background, queue-based agents
Learn more about deploying agents with MeshAgent