Overview

A TaskRunner is a type of agent that accepts structured input, performs work, and can return structured output. TaskRunners can be invoked directly (e.g., via CLI) or joined to a room where they register as a tool that other agents and humans can invoke. A TaskRunner defines:
  • Input schema: JSON Schema describing the arguments your tool accepts
  • Output schema (optional): JSON Schema describing what your tool returns
  • ask() method: the code that performs the task and returns the result
The base TaskRunner class is intentionally minimal so you can bring your own logic: extend it by implementing the ask() method to wrap custom code, such as running agents from other frameworks (e.g., Pydantic AI, LangChain, CrewAI), making API calls, or executing custom workflows. The LLMTaskRunner is the default TaskRunner implementation; it uses an LLM (and optional tools) to produce a response, and the meshagent task-runner CLI uses an LLMTaskRunner by default. This guide focuses on LLMTaskRunner. See using agents from other frameworks for an example of extending the base TaskRunner class.

Two ways to build an LLMTaskRunner

  1. CLI: Run production-ready agents with a single command. Configure tools, rules, and behavior using command-line flags. Ideal for most use cases.
  2. SDK: Extend the base LLMTaskRunner class with custom code when you need deeper integrations or specialized behaviors. Best for full control or more complex logic.

In this guide you will learn

  • When to use TaskRunner (specifically LLMTaskRunner)
  • How to run and join TaskRunners with the MeshAgent CLI
  • How to invoke joined TaskRunners using the CLI, SDK, or MeshAgent Studio
  • How to build and deploy an LLMTaskRunner with the MeshAgent SDK
  • How TaskRunner works, including lifecycle, task flow, and core hooks

When to use TaskRunner

Use a TaskRunner when you need an agent that:
  • Exposes a callable tool with structured input and output
  • Accepts input validated against a JSON Schema, and optionally enforces an output schema
  • Can be invoked by other agents or humans (for example a ChatBot calling a TaskRunner tool)
TaskRunners excel when:
  • Inputs and outputs are well-defined
  • You want to expose an agent as a callable tool for agents or humans
  • You want to run an agent in the background, not have a conversation with it
  • You want a clean context window for each invocation (you can persist conversation state across runs by enabling thread selection and providing a path for the runner to load and update the thread document)
If you need a conversational agent, use ChatBot. For real-time speech, use VoiceBot. For background queue work, use Worker, and for email, use MailBot.

Working with TaskRunners

TaskRunners can be invoked in three ways:
  1. One-off execution: meshagent task-runner run executes a task and exits (no toolkit registration)
  2. Joined to a room: meshagent task-runner join connects to a room and registers a toolkit you can invoke from the CLI, Studio, SDK, or from other agents.
  3. SDK implementation: Extend LLMTaskRunner or implement TaskRunner for custom logic.
When a TaskRunner joins a room, it registers a toolkit with a single tool named run_<agent_name>_task. Once registered, you can invoke it from:
  • MeshAgent Studio (Room menu → Toolkits…)
  • CLI (meshagent room agents invoke-tool to invoke a task runner you’ve joined to the room using meshagent task-runner join)
  • SDK (room.agents.invoke_tool(...))

Run TaskRunners from the CLI

task-runner run vs task-runner join

  • task-runner run executes a single task and exits. It does not register a toolkit in the room. This is ideal for scripts, tests, or startup automation.
  • task-runner join connects to a room and registers a toolkit (run_<agent_name>_task) so you can invoke it from MeshAgent Studio or from another agent. It stays connected to the room until you stop the process.

Step 1: Run a TaskRunner once

This runs a local LLMTaskRunner with optional tools/rules and prints the result.
# Authenticate to MeshAgent if not already signed in
meshagent setup

# Execute a one-off task (no toolkit registration)
meshagent task-runner run \
  --room quickstart \
  --agent-name mytaskrunner \
  --web-search \
  --storage \
  --input '{"prompt":"Write a product description for a bluetooth speaker","tools":[],"model":null}'
Tip: --input - reads JSON from stdin, which is handy for piping data from other commands or scripts.
Thread persistence: Add --allow-thread-selection and include a path in your input JSON to read/write a thread document between runs.
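For example, a script can build the payload and pipe it to the runner over stdin (this sketch assumes the meshagent CLI is installed and you are already authenticated):

```shell
# Build the input JSON in a variable (or take it from another command's output)
payload='{"prompt":"Write a product description for a bluetooth speaker","tools":[],"model":null}'

# Pipe it to the task runner via stdin using --input -
echo "$payload" | meshagent task-runner run \
  --room quickstart \
  --agent-name mytaskrunner \
  --web-search \
  --input -
```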

Step 2: Join a TaskRunner to a room

This starts a local LLMTaskRunner that registers as a toolkit in the room.
# Authenticate to MeshAgent if not already signed in
meshagent setup

# Call a task runner into your room
meshagent task-runner join --room quickstart --agent-name mytaskrunner --web-search --storage --room-rules "agents/mytaskrunner/rules.txt" --rule "You are a helpful assistant"
When you add the --room-rules "agents/mytaskrunner/rules.txt" flag and supply a file path for the rules, the file will be created if it does not already exist. This file is relative to room storage.
Tip: Use meshagent task-runner join --help to see all available tools and options.

Step 3: Invoke a joined TaskRunner

You can invoke a joined TaskRunner from MeshAgent Studio or from the CLI.
  1. Go to MeshAgent Studio and log in
  2. Enter your room quickstart
  3. Open the Toolkits… menu, select mytaskrunner, and submit a prompt
You can also invoke the task runner from the CLI:
meshagent room agents invoke-tool --room quickstart --toolkit mytaskrunner --tool run_mytaskrunner_task --arguments '{"prompt":"Draft a short product description for a bluetooth speaker", "tools":[], "model":null}'

Step 4: Package and deploy the agent

Once your agent works locally, you'll need to package and deploy it as a project or room service to make it always available. You can do this using the CLI, by creating a YAML file, or from MeshAgent Studio. Both options below deploy the same TaskRunner; choose based on your workflow:
  • Option 1 (meshagent task-runner deploy): One command that deploys immediately (fastest/easiest approach)
  • Option 2 (meshagent task-runner spec + meshagent service create): Generates a YAML file you can review or further customize before deploying
Option 1: Deploy directly
Use the CLI to automatically deploy the TaskRunner to your room.
meshagent task-runner deploy --service-name mytaskrunner --room quickstart --agent-name mytaskrunner --web-search --require-storage --room-rules "agents/mytaskrunner/rules.txt" --rule "You are a helpful assistant"
Option 2: Generate a YAML spec
Create a meshagent.yaml file that defines how your service should run, then deploy the agent to your room. The service spec can be dynamically generated from the CLI by running:
meshagent task-runner spec --service-name mytaskrunner --agent-name mytaskrunner --web-search --require-storage --room-rules "agents/mytaskrunner/rules.txt" --rule "You are a helpful assistant"
Next, copy the output to a meshagent.yaml file:
version: v1
kind: Service
metadata:
  name: mytaskrunner
  annotations:
    meshagent.service.id: mytaskrunner
agents:
- name: mytaskrunner
  annotations:
    meshagent.agent.type: TaskRunner
ports:
- num: '*'
  type: http
  endpoints:
  - path: /agent
    meshagent:
      identity: mytaskrunner
container:
  image: us-central1-docker.pkg.dev/meshagent-public/images/cli:{SERVER_VERSION}-esgz
  command: meshagent task-runner service --agent-name mytaskrunner --web-search --require-storage
    --room-rules agents/mytaskrunner/rules.txt --rule 'You are a helpful assistant'
Then deploy it to your room:
meshagent service create --file meshagent.yaml --room quickstart

Build and deploy a TaskRunner with the SDK

Step 1: Create a TaskRunner

This example shows how to create your own LLMTaskRunner instance using ServiceHost. The runner also supports the web search, storage, and local shell tools.
import asyncio
from meshagent.agents.llmrunner import LLMTaskRunner
from meshagent.otel import otel_config
from meshagent.api.services import ServiceHost
from meshagent.openai import OpenAIResponsesAdapter
from meshagent.openai.tools.responses_adapter import WebSearchToolkitBuilder, LocalShellToolkitBuilder
from meshagent.tools.storage import StorageToolkitBuilder

otel_config(service_name="llm-taskrunners")
service = ServiceHost()


@service.path(path="/llmtaskrunner", identity="llmtaskrunner")
class LLMRunner(LLMTaskRunner):
    def __init__(self):
        super().__init__(
            title="LLM Task Runner",
            description="Returns {result: string} unless overridden.",
            llm_adapter=OpenAIResponsesAdapter(),
            toolkits=None,
            supports_tools=True,
            input_prompt=True,
            output_schema={
                "type": "object",
                "required": ["result"],
                "additionalProperties": False,
                "properties": {"result": {"type": "string"}},
            },
        )

    def get_toolkit_builders(self):
        return [
            WebSearchToolkitBuilder(),
            StorageToolkitBuilder(),
            LocalShellToolkitBuilder(),
        ]

asyncio.run(service.run())

Step 2: Call the agent into a room

Run the agent locally and connect it to a room:
meshagent setup # authenticate to MeshAgent
meshagent service run "main.py" --room=quickstart

Step 3: Invoke the TaskRunner

From a different terminal tab, invoke the LLMTaskRunner from the CLI or Python:
meshagent room agents invoke-tool \
  --room quickstart \
  --toolkit llmtaskrunner \
  --tool run_llmtaskrunner_task \
  --arguments '{"prompt":"Write a poem about ai agents", "tools":[],"model":null}'
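The equivalent Python call is sketched below. It assumes you already have a connected RoomClient named room (connection setup is covered in the SDK docs), and that invoke_tool takes keyword arguments mirroring the CLI flags; verify the exact parameter names against the SDK reference:

```python
# Sketch: invoke a joined task runner from the Python SDK.
# Assumes `room` is an already-connected MeshAgent RoomClient; the keyword
# names for invoke_tool mirror the CLI flags and are an assumption.
async def run_poem_task(room):
    response = await room.agents.invoke_tool(
        toolkit="llmtaskrunner",
        tool="run_llmtaskrunner_task",
        arguments={
            "prompt": "Write a poem about ai agents",
            "tools": [],
            "model": None,
        },
    )
    return response
```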

Step 4: Package and deploy the agent

To deploy your SDK TaskRunner permanently, you’ll package your code with a meshagent.yaml file that defines the service configuration and a container image that MeshAgent can run. For full details on the service spec and deployment flow, see Packaging Services and Deploying Services. MeshAgent supports two deployment patterns for containers:
  1. Runtime image + code mount (recommended): Use a pre-built MeshAgent runtime image (like python-sdk-slim) that contains Python and all MeshAgent dependencies. Mount your lightweight code-only image on top. This keeps your code image tiny (~KB), eliminates dependency installation time, and allows your service to start quickly.
  2. Single Image: Bundle your code and all dependencies into one image. This is good when you need to install additional libraries, but can result in larger images and slower pulls. If you build your own images we recommend optimizing them with eStargz.
This example uses the runtime image + code mount pattern with the public python-docs-examples code image so you can run the documentation sample without building your own image. If you want to build and push your own code image, follow the steps below and update the storage.images entry in meshagent.yaml.
Prepare your project structure
This example organizes the agent code and configuration in the same folder, making each agent self-contained:
your-project/
├── Dockerfile                    # Shared by all samples
├── taskrunner/
│   ├── llm_taskrunners_service.py
│   └── meshagent.yaml           # Config specific to this sample
└── another_sample/              # Other samples follow same pattern
    ├── another_sample.py
    └── meshagent.yaml
Note: If you’re building a single agent, you only need the taskrunner/ folder. The structure shown supports multiple samples sharing one Dockerfile.
Step 4a: Build a Docker container
If you want a code-only image, create a scratch Dockerfile and copy the files you want to run. This creates a minimal image that pairs with the runtime image + code mount pattern.
FROM scratch

COPY . /
Build and push the image with docker buildx:
docker buildx build . \
  -t "<REGISTRY>/<NAMESPACE>/<IMAGE_NAME>:<TAG>" \
  --platform linux/amd64 \
  --push
Note: Building from the project root copies your entire project structure into the image. For a single agent, this is fine - your image will just contain one folder. For multi-agent projects, all agents will be in one image, but each can deploy independently using its own meshagent.yaml.
Step 4b: Package the agent
Define the service configuration in a meshagent.yaml file.
kind: Service
version: v1
metadata:
  name: llm-taskrunner
  description: "LLM TaskRunner"
  annotations:
    meshagent.service.id: "llm-taskrunner"
agents:
  - name: llmtaskrunner
    description: "LLM TaskRunner"
    annotations:
      meshagent.agent.type: "TaskRunner"
container:
  image: "us-central1-docker.pkg.dev/meshagent-public/images/python-sdk:{SERVER_VERSION}-esgz"
  command: python /src/taskrunner/llm_taskrunners_service.py
  storage:
    images:
      # Replace this image tag with your own code-only image if you build one.
      - image: "us-central1-docker.pkg.dev/meshagent-public/images/python-docs-examples:{SERVER_VERSION}"
        path: /src
        read_only: true

How the paths work:
  • Your code image contains taskrunner/llm_taskrunners_service.py
  • It’s mounted at /src in the runtime container
  • The command runs python /src/taskrunner/llm_taskrunners_service.py
Note: The default YAML in the docs uses us-central1-docker.pkg.dev/meshagent-public/images/python-docs-examples so you can test this example immediately without building your own image first. Replace this with your actual image tag when deploying your own code.
Step 4c: Deploy the agent
From the CLI, in the directory containing your meshagent.yaml file, run:
meshagent service create --file "meshagent.yaml" --room=quickstart
The TaskRunner is now deployed to the quickstart room! The agent will always be available inside the room to run tasks.

How TaskRunner Works

TaskRunner vs LLMTaskRunner

  • TaskRunner is the base class. You implement ask(...) and define input/output schemas. This allows you to run agents from other frameworks or any other custom logic inside the ask() method.
  • LLMTaskRunner is a concrete implementation that calls an LLM adapter, resolves toolkits, and returns the model output. This is useful for creating a tool that uses an LLM to accomplish a task.
  • The CLI task-runner command builds an LLMTaskRunner for you.

Constructor Parameters

TaskRunner accepts everything from SingleRoomAgent (name, title, description, requires, labels) plus task-specific configuration.
  • supports_tools (bool | None): Whether callers can pass ad-hoc toolkits at runtime (default False).
  • input_schema (dict): JSON Schema for request arguments; if None, defaults to a no-args schema.
  • output_schema (dict | None): Optional JSON Schema for responses; if set, responses are validated.
  • toolkits (list[Toolkit] | None): Local toolkits always available to this TaskRunner (in addition to any requires).
LLMTaskRunner adds LLM-specific parameters like llm_adapter, tool_adapter, rules, client_rules, and input_prompt.

Lifecycle Overview

TaskRunner inherits lifecycle hooks from SingleRoomAgent and adds toolkit registration and routing.
  • await start(room: RoomClient): Registers the TaskRunner toolkit and tool, installs requirements, and enables routing.
  • await run(room: RoomClient, arguments: dict, ...): Executes a single task without registering a toolkit (used by meshagent task-runner run).
  • await stop(): Unregisters the toolkit if the protocol is open, then disconnects cleanly.
  • room property: Access the active RoomClient as usual.

Task Flow

When a task is invoked (from Studio or code):
  1. The room delivers a tool invocation for run_<agent_name>_task.
  2. Arguments are validated against input_schema.
  3. Your ask(context, arguments) method runs your logic and returns a dict.
  4. If output_schema is set, the response is validated before returning to the caller.
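The steps above boil down to validate, ask, validate. They can be sketched without the framework; here a hand-rolled required-key check stands in for full JSON Schema validation:

```python
# Framework-free sketch of the TaskRunner task flow:
# validate input -> run ask() -> validate output.
import asyncio

INPUT_SCHEMA = {"required": ["prompt"]}
OUTPUT_SCHEMA = {"required": ["result"]}


def validate(payload: dict, schema: dict) -> None:
    # Stand-in for full JSON Schema validation: check required keys only.
    missing = [k for k in schema["required"] if k not in payload]
    if missing:
        raise ValueError(f"missing required keys: {missing}")


async def ask(context, arguments: dict) -> dict:
    # Your task logic; here a toy transformation.
    return {"result": arguments["prompt"].upper()}


async def handle_invocation(arguments: dict) -> dict:
    validate(arguments, INPUT_SCHEMA)      # step 2: validate arguments
    response = await ask(None, arguments)  # step 3: run your logic
    validate(response, OUTPUT_SCHEMA)      # step 4: validate the response
    return response


print(asyncio.run(handle_invocation({"prompt": "hello"})))  # {'result': 'HELLO'}
```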

Key Behaviors and Hooks

  • Schema validation: validate_arguments() and validate_response() enforce input_schema and output_schema.
  • Tool support: When supports_tools=True, callers can include tool configs per request. LLMTaskRunner uses these to build toolkits for the LLM.
  • Thread persistence (optional): When LLMTaskRunner is configured with input_path=True (CLI: --allow-thread-selection), the input schema accepts a path. If provided, the runner opens that thread document, rehydrates the chat context from prior messages, appends the new prompt, and streams the assistant output back into the thread. If path is omitted, each run starts with a clean context.
  • Discovery: Registration exposes your TaskRunner’s schema and capabilities to Studio, CLI, and other agents so they are discoverable and invokable.
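For instance, with thread persistence enabled, the invocation arguments would include a path alongside the prompt (the file name here is purely illustrative):

```python
import json

# Illustrative arguments for a thread-persisting run; "path" points at a
# thread document in room storage that the runner loads and updates.
arguments = {
    "prompt": "Continue drafting the product description",
    "tools": [],
    "model": None,
    "path": "threads/product-copy.thread",
}

# Serialize for use with --input or invoke-tool --arguments
print(json.dumps(arguments))
```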

Key Methods

  • async def ask(context, arguments) -> dict: Implement your task logic and return a JSON-serializable dict.
  • async def validate_arguments(arguments): Validates inputs against input_schema.
  • async def validate_response(response): Validates outputs against output_schema when provided.
  • async def start(room) / async def stop(): Registers/unregisters the runner and protocol handlers.
  • def to_json(): Serializes metadata (schemas, flags, labels) used during registration.

Next Steps