Overview
A TaskRunner is a type of agent that accepts structured input, performs work, and can return structured output. TaskRunners can be invoked directly (e.g., via the CLI) or joined to a room, where they register as a tool that other agents and humans can invoke.
A TaskRunner defines:
- Input schema: JSON Schema describing the arguments your tool accepts
- Output schema (optional): JSON Schema describing what your tool returns
- `ask()` method: the code that performs the task and returns the result
The `TaskRunner` class is intentionally minimal, allowing you to bring your own logic. You extend `TaskRunner` by implementing the `ask()` method to wrap custom code. For example, you can use it to run agents from agent frameworks (e.g., Pydantic AI, LangChain, CrewAI), make API calls, run custom workflows, and so on.
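The pattern can be sketched in plain Python. This is a framework-agnostic illustration of the shape a TaskRunner implements (schemas plus an `ask()` method); it is not the MeshAgent SDK's exact base class:

```python
import asyncio

# Illustrative sketch of the TaskRunner pattern: declare input/output
# schemas and implement ask(). Names mirror this guide, not the real SDK.
class EchoTaskRunner:
    input_schema = {
        "type": "object",
        "properties": {"prompt": {"type": "string"}},
        "required": ["prompt"],
    }
    output_schema = {
        "type": "object",
        "properties": {"result": {"type": "string"}},
        "required": ["result"],
    }

    async def ask(self, context, arguments: dict) -> dict:
        # Bring your own logic here: call another framework's agent,
        # make an API call, or run a custom workflow.
        return {"result": f"echo: {arguments['prompt']}"}

result = asyncio.run(EchoTaskRunner().ask(None, {"prompt": "hello"}))
print(result)  # {'result': 'echo: hello'}
```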
The LLMTaskRunner is the default TaskRunner implementation that uses an LLM (and optional tools) to produce a response. The meshagent task-runner CLI uses an LLMTaskRunner by default.
This guide focuses on LLMTaskRunner. See using agents from other frameworks for an example of extending the base TaskRunner class.
Two ways to build an LLMTaskRunner
- CLI: Run production-ready agents with a single command. Configure tools, rules, and behavior using command-line flags. Ideal for most use cases.
- SDK: Extend the base `LLMTaskRunner` class with custom code when you need deeper integrations or specialized behaviors. Best for full control or more complex logic.
In this guide you will learn
- When to use TaskRunner (specifically `LLMTaskRunner`)
- How to run and join TaskRunners with the MeshAgent CLI
- How to invoke joined TaskRunners using the CLI, SDK, or MeshAgent Studio
- How to build and deploy an `LLMTaskRunner` with the MeshAgent SDK
- How TaskRunner works, including lifecycle, task flow, and core hooks
When to use TaskRunner
Use a TaskRunner when:
- You need an agent that exposes a callable tool with structured input and output
- The agent accepts JSON Schema input, and optionally enforces an output schema
- The agent can be invoked by other agents or humans (for example, a `ChatBot` calling a TaskRunner tool)
- Inputs and outputs are well-defined
- You want to expose an agent as a callable tool for agents or humans
- You want to run an agent in the background, not have a conversation with it
- You want a clean context window for each invocation (you can persist conversation state across runs by enabling thread selection and providing a `path` for the runner to load and update the thread document)
Working with TaskRunners
TaskRunners can be invoked in three ways:
- One-off execution: `meshagent task-runner run` executes a task and exits (no toolkit registration)
- Joined to a room: `meshagent task-runner join` connects to a room and registers a toolkit you can invoke from the CLI, Studio, SDK, or from other agents
- SDK implementation: Extend `LLMTaskRunner` or implement `TaskRunner` for custom logic
When joined to a room, a TaskRunner registers a tool named `run_<agent_name>_task`. Once registered, you can invoke it from:
- MeshAgent Studio (Room menu → Toolkits…)
- CLI (`meshagent room agents invoke-tool` to invoke a TaskRunner you've joined to the room using `meshagent task-runner join`)
- SDK (`room.agents.invoke_tool(...)`)
Run TaskRunners from the CLI
`task-runner run` vs `task-runner join`

- `task-runner run` executes a single task and exits. It does not register a toolkit in the room. This is ideal for scripts, tests, or startup automation.
- `task-runner join` connects to a room and registers a toolkit (`run_<agent_name>_task`) so you can invoke it from MeshAgent Studio or from another agent. It stays connected to the room until you stop the process.
Step 1: Run a TaskRunner once
This runs a local `LLMTaskRunner` with optional tools/rules and prints the result.
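For example, a one-off run might look like the following sketch. The room name and JSON payload are placeholders, and flag names other than `--input -` (described below) should be verified with `meshagent task-runner run --help`:

```shell
# Hypothetical one-off execution: runs the task, prints the result, exits.
# No toolkit is registered in the room.
echo '{"prompt": "Summarize the latest report"}' | \
  meshagent task-runner run --room quickstart --input -
```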
Tip: `--input -` reads JSON from stdin, which is handy for piping data from other commands or scripts.

Thread persistence: Add `--allow-thread-selection` and include a `path` in your input JSON to read/write a thread document between runs.
Step 2: Join a TaskRunner to a room
This starts a local `LLMTaskRunner` that registers as a toolkit in the room.
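A join command could look like this sketch. The `--room-rules` flag is described below; the other flag names are assumptions to check against `meshagent task-runner join --help`:

```shell
# Hypothetical: join the quickstart room and register the toolkit.
# The process stays connected until you stop it (Ctrl+C).
meshagent task-runner join \
  --room quickstart \
  --agent-name mytaskrunner \
  --room-rules "agents/mytaskrunner/rules.txt"
```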
If you pass the `--room-rules "agents/mytaskrunner/rules.txt"` flag and supply a file path for the rules, the file will be created if it does not already exist. This path is relative to room storage.
Tip: Use `meshagent task-runner join --help` to see all available tools and options.
Step 3: Invoke a joined TaskRunner
You can invoke a joined TaskRunner from MeshAgent Studio or from the CLI. To use Studio:
- Go to MeshAgent Studio and log in
- Enter your room `quickstart`
- Open the Toolkits… menu, select `mytaskrunner`, and submit a prompt
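From the CLI, invoking the registered tool might look like the following sketch. The subcommand `meshagent room agents invoke-tool` is named in this guide, but the flag names and values here are illustrative assumptions:

```shell
# Hypothetical invocation of the tool a joined TaskRunner registers.
meshagent room agents invoke-tool \
  --room quickstart \
  --agent mytaskrunner \
  --tool run_mytaskrunner_task \
  --arguments '{"prompt": "Summarize the latest updates"}'
```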
Step 4: Package and deploy the agent
Once your agent works locally, you'll need to package and deploy it as a project or room service to make it always available. You can do this using the CLI, by creating a YAML file, or from MeshAgent Studio. Both options below deploy the same TaskRunner; choose based on your workflow:
- Option 1 (`meshagent task-runner deploy`): one command that deploys immediately (the fastest and easiest approach)
- Option 2 (`meshagent task-runner spec` + `meshagent service create`): generates a YAML file you can review or further customize before deploying
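In sketch form, the two options look like this. The command names come from this guide; the flag names are assumptions to verify with each command's `--help`:

```shell
# Option 1 (hypothetical flags): deploy the task runner in one step.
meshagent task-runner deploy --room quickstart --agent-name mytaskrunner

# Option 2: generate a spec you can review, then create the service from it.
meshagent task-runner spec --agent-name mytaskrunner > meshagent.yaml
meshagent service create --file meshagent.yaml
```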
Build and deploy a TaskRunner with the SDK
Step 1: Create a TaskRunner
This example shows how to create your own `LLMTaskRunner` instances using `ServiceHost`. It also supports the web search, storage, and local shell tools.
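A sketch of such a runner follows. The import path and the exact constructor parameters are assumptions based on the names in this guide (`name`, `title`, `description`, `input_schema`, `rules`); consult the MeshAgent SDK reference for the verified API and for how `ServiceHost` serves the runner:

```python
# Hedged sketch -- import path and parameters are assumptions, not the
# verified MeshAgent API. See the SDK docs for exact signatures.
from meshagent.agents import LLMTaskRunner  # assumed import path

runner = LLMTaskRunner(
    name="mytaskrunner",
    title="My Task Runner",
    description="Runs one-off LLM tasks",
    input_schema={
        "type": "object",
        "properties": {"prompt": {"type": "string"}},
        "required": ["prompt"],
    },
    rules=["Be concise."],
)
# A ServiceHost would then serve this runner so rooms can invoke it;
# see the SDK docs for the exact hosting setup.
```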
Step 2: Call the agent into a room
Run the agent locally and connect it to a room.

Step 3: Invoke the TaskRunner

From a different tab in your terminal, invoke the `LLMTaskRunner` from the CLI or Python:
Step 4: Package and deploy the agent
To deploy your SDK TaskRunner permanently, you'll package your code with a `meshagent.yaml` file that defines the service configuration and a container image that MeshAgent can run.
For full details on the service spec and deployment flow, see Packaging Services and Deploying Services.
MeshAgent supports two deployment patterns for containers:
- Runtime image + code mount (recommended): Use a pre-built MeshAgent runtime image (like `python-sdk-slim`) that contains Python and all MeshAgent dependencies. Mount your lightweight code-only image on top. This keeps your code image tiny (~KB), eliminates dependency installation time, and allows your service to start quickly.
- Single image: Bundle your code and all dependencies into one image. This is good when you need to install additional libraries, but can result in larger images and slower pulls. If you build your own images, we recommend optimizing them with eStargz.
MeshAgent provides a pre-built `python-docs-examples` code image so you can run the documentation sample without building your own image. If you want to build and push your own code image, follow the steps below and update the `storage.images` entry in `meshagent.yaml`.
Prepare your project structure
This example organizes the agent code and configuration in the same folder, making each agent self-contained:
Note: If you’re building a single agent, you only need the taskrunner/ folder. The structure shown supports multiple samples sharing one Dockerfile.
Step 4a: Build a Docker container
If you want a code-only image, create a scratch Dockerfile and copy the files you want to run. This creates a minimal image that pairs with the runtime image + code mount pattern.
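A code-only Dockerfile in this pattern can be as small as the following sketch (the `taskrunner/` path matches the project structure above):

```dockerfile
# Code-only image: no Python, no dependencies -- the runtime image
# (e.g. python-sdk-slim) supplies those when the code is mounted.
FROM scratch
COPY taskrunner/ /taskrunner/
```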
Build and push the image with `docker buildx`:
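For example (the image tag is a placeholder for your own registry):

```shell
# Build for the target platform and push to your registry in one step.
docker buildx build \
  --platform linux/amd64 \
  -t REGISTRY/PROJECT/mytaskrunner-code:latest \
  --push .
```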
Note: Building from the project root copies your entire project structure into the image. For a single agent, this is fine - your image will just contain one folder. For multi-agent projects, all agents will be in one image, but each can deploy independently using its own meshagent.yaml.
Step 4b: Package the agent
Define the service configuration in a meshagent.yaml file.
- Your code image contains `taskrunner/llm_taskrunners_service.py`
- It's mounted at `/src` in the runtime container
- The command runs `python /src/taskrunner/llm_taskrunners_service.py`
Note: The default YAML in the docs uses us-central1-docker.pkg.dev/meshagent-public/images/python-docs-examples so you can test this example immediately without building your own image first. Replace this with your actual image tag when deploying your own code.
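In outline, the file ties those pieces together. Treat this as a shape sketch only: apart from the `storage.images` entry and the command named in this guide, the key names are illustrative, and the authoritative schema is in Packaging Services:

```yaml
# Shape sketch only -- see Packaging Services for the real schema.
command: python /src/taskrunner/llm_taskrunners_service.py
storage:
  images:
    # Docs example image; replace with your own code image tag when deploying.
    - us-central1-docker.pkg.dev/meshagent-public/images/python-docs-examples
```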
Step 4c: Deploy the agent
Next, from the CLI in the directory where your `meshagent.yaml` file is located, create the service.

Once created, the agent is available in your `quickstart` room! Now the agent will always be available inside the room for us to run tasks.
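The command is `meshagent service create`, as noted earlier; the `--file` flag here is an assumption to verify with `meshagent service create --help`:

```shell
meshagent service create --file meshagent.yaml
```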
How TaskRunner Works
TaskRunner vs LLMTaskRunner
- `TaskRunner` is the base class. You implement `ask(...)` and define input/output schemas. This allows you to run agents from other frameworks, or any other custom logic, inside the `ask()` method.
- `LLMTaskRunner` is a concrete implementation that calls an LLM adapter, resolves toolkits, and returns the model output. This is useful for creating a tool that uses an LLM to accomplish a task.
- The CLI `task-runner` command builds an `LLMTaskRunner` for you.
Constructor Parameters
TaskRunner accepts everything from `SingleRoomAgent` (`name`, `title`, `description`, `requires`, `labels`) plus task-specific configuration.
| Parameter | Type | Description |
|---|---|---|
| `supports_tools` | `bool \| None` | Whether callers can pass ad-hoc toolkits at runtime (default `False`). |
| `input_schema` | `dict` | Required. JSON Schema for request arguments. If `None`, defaults to a no-args schema. |
| `output_schema` | `dict \| None` | Optional JSON Schema for responses; if set, responses are validated. |
| `toolkits` | `list[Toolkit] \| None` | Local toolkits always available to this TaskRunner (in addition to any `requires`). |
`LLMTaskRunner` adds LLM-specific parameters like `llm_adapter`, `tool_adapter`, `rules`, `client_rules`, and `input_prompt`.
Lifecycle Overview
TaskRunner inherits lifecycle hooks from SingleRoomAgent and adds toolkit registration and routing.
- `await start(room: RoomClient)`: Registers the TaskRunner toolkit and tool, installs requirements, and enables routing.
- `await run(room: RoomClient, arguments: dict, ...)`: Executes a single task without registering a toolkit (used by `meshagent task-runner run`).
- `await stop()`: Unregisters the toolkit if the protocol is open, then disconnects cleanly.
- `room` property: Access the active `RoomClient` as usual.
Task Flow
When a task is invoked (from Studio or code):
- The room delivers a tool invocation for `run_<agent_name>_task`.
- Arguments are validated against `input_schema`.
- Your `ask(context, arguments)` method runs your logic and returns a dict.
- If `output_schema` is set, the response is validated before returning to the caller.
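The flow above can be sketched in plain Python. The validation here is a toy required-keys check standing in for full JSON Schema validation, and the names mirror this guide rather than the SDK's internals:

```python
import asyncio

INPUT_SCHEMA = {"type": "object", "required": ["prompt"]}
OUTPUT_SCHEMA = {"type": "object", "required": ["result"]}

def validate(schema: dict, value: dict) -> None:
    # Toy stand-in for JSON Schema validation: check required keys only.
    missing = [k for k in schema.get("required", []) if k not in value]
    if missing:
        raise ValueError(f"missing keys: {missing}")

async def ask(context, arguments: dict) -> dict:
    # Your task logic goes here.
    return {"result": arguments["prompt"].upper()}

async def handle_invocation(arguments: dict) -> dict:
    validate(INPUT_SCHEMA, arguments)      # validate against input_schema
    response = await ask(None, arguments)  # run ask() and get a dict back
    validate(OUTPUT_SCHEMA, response)      # validate against output_schema
    return response

print(asyncio.run(handle_invocation({"prompt": "hello"})))  # {'result': 'HELLO'}
```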
Key Behaviors and Hooks
- Schema validation: `validate_arguments()` and `validate_response()` enforce `input_schema` and `output_schema`.
- Tool support: When `supports_tools=True`, callers can include tool configs per request. `LLMTaskRunner` uses these to build toolkits for the LLM.
- Thread persistence (optional): When `LLMTaskRunner` is configured with `input_path=True` (CLI: `--allow-thread-selection`), the input schema accepts a `path`. If provided, the runner opens that thread document, rehydrates the chat context from prior messages, appends the new prompt, and streams the assistant output back into the thread. If `path` is omitted, each run starts with a clean context.
- Discovery: Registration exposes your TaskRunner's schema and capabilities to Studio, CLI, and other agents so they are discoverable and invokable.
Key Methods
| Method | Description |
|---|---|
| `async def ask(context, arguments) -> dict` | Implement your task logic and return a JSON-serializable dict. |
| `async def validate_arguments(arguments)` | Validates inputs against `input_schema`. |
| `async def validate_response(response)` | Validates outputs against `output_schema` when provided. |
| `async def start(room)` / `async def stop()` | Registers/unregisters the runner and protocol handlers. |
| `def to_json()` | Serializes metadata (schemas, flags, labels) used during registration. |
Next Steps
- Use an existing agent with a TaskRunner
- ChatBot for conversation-based agents
- VoiceBot for voice-based agents
- Worker for background queue processing