TaskRunner is a base class that lets you bring your own logic, whether that means calling another framework, running business rules, or integrating an LLM. Most often, you'll extend TaskRunner with an LLM to build agents that can reason, use tools, and handle complex tasks.

LLM TaskRunners are designed for focused execution: they start with a clean context each time, complete the task, and return a result. This makes them ideal when you have a defined workflow or process and simply want the agent to do the work, for example generating content, processing documents, running scheduled jobs, or handling batch operations.

This differs from chat-based agents (like ChatBot), which maintain conversation history and support iterative, collaborative exchanges. Both can perform complex work and use the same toolkits, but LLM TaskRunners excel when:
  • You have well-defined inputs and outputs
  • The task doesn’t require multi-turn clarification or refinement with the user
  • You need a fresh context window with each execution
  • You’re running background jobs, batch tasks, or callable tools for other agents

Building an LLM TaskRunner

We’ll walk through extending the base TaskRunner with LLM capabilities. This example shows two implementations:
  • LLMTaskRunner: Defines a fixed output schema at initialization. Best when you always want the same response format. By default, accepts a text prompt and returns a string response.
  • DynamicLLMTaskRunner: Accepts an output_schema parameter at runtime, letting you change the response structure for each request.
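For illustration, the request payloads for the two runners differ only in whether a schema travels with the prompt. These shapes match the default schemas defined in the implementation below:

# LLMTaskRunner: fixed output schema; by default the input is just a prompt.
fixed_request = {"prompt": "Write a haiku about the ocean"}

# DynamicLLMTaskRunner: the same prompt, plus a JSON Schema chosen per request.
dynamic_request = {
    "prompt": "Write a haiku about the ocean",
    "output_schema": {
        "type": "object",
        "additionalProperties": False,
        "required": ["haiku"],
        "properties": {"haiku": {"type": "string"}},
    },
}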

What You’re Adding to the Base TaskRunner

The base TaskRunner is intentionally simple: it handles the plumbing (schemas, validation, room integration) while you provide the execution logic. This example shows you how to extend it with LLM capabilities, creating task-oriented agents that can reason through problems and return structured results. We’ll expand on the base TaskRunner by adding:
  1. Chat Context: Initialize conversation state using the init_chat_context method.
  2. LLM Execution Loop: Implement an ask() function that calls .next() on the LLMAdapter, allowing the model to reason and use tools until the task completes.
  3. Schema Validation: Validate the LLM output against the declared schema to ensure consistency.
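Step 3 uses the jsonschema package; as a standalone sketch of what that validation call does:

from jsonschema import validate, ValidationError

schema = {
    "type": "object",
    "required": ["result"],
    "properties": {"result": {"type": "string"}},
}

# A conforming instance passes silently.
validate(instance={"result": "ok"}, schema=schema)

try:
    # A non-conforming instance (wrong type for "result") raises.
    validate(instance={"result": 42}, schema=schema)
except ValidationError as exc:
    print(f"schema violation: {exc.message}")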

Constructor Parameters

  • llm_adapter: an LLM adapter used to integrate with an LLM. We recommend using the OpenAIResponsesAdapter from meshagent-openai.
  • supports_tools: whether the agent should support passing a custom set of tools at runtime (optional).
  • tool_adapter: a custom tool adapter that transforms tool responses into context messages (optional).
  • toolkits: specifies local toolkits for the agent. While it’s generally recommended to register toolkits with the room so any agent or user can use them, sometimes you need each agent to have its own instance of a toolkit, for instance with synchronized document authoring.
  • requires: a list of requirements for the agent. Use RequiredSchema and RequiredToolkit to give the agent access to schemas and toolkits that have been registered with the room.
  • input_prompt: whether the TaskRunner should accept a prompt as input. If true, the input takes the form { "prompt": "text" }.
  • input_schema: a JSON schema describing what arguments your agent accepts. If not provided and input_prompt is True, defaults to a prompt schema accepting text.
  • output_schema: for LLMTaskRunner only, a JSON schema that responses must conform to (by default, an object with a single required string property named result). In DynamicLLMTaskRunner the output_schema is set dynamically at runtime.
  • rules: a set of rules that the task runner should follow while executing. Rules guide the behavior of the agent with system or developer prompts (optional).
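To make the defaults concrete, here is what the payloads look like when input_prompt=True and no schemas are supplied (these match the defaults in the class code below):

# Default input when input_prompt=True and no input_schema is given:
arguments = {"prompt": "Summarize the quarterly report"}

# Default output_schema: an object with a single required string field.
default_output_schema = {
    "type": "object",
    "additionalProperties": False,
    "required": ["result"],
    "properties": {"result": {"type": "string"}},
}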

Class Definitions

Here’s how to implement both LLMTaskRunner and DynamicLLMTaskRunner. These classes handle the LLM execution loop, schema validation, and toolkit integration:
import json
import logging
from typing import Optional

from jsonschema import validate, ValidationError
from meshagent.api.schema_util import prompt_schema, merge
from meshagent.api import Requirement
from meshagent.tools import Toolkit
from meshagent.agents import TaskRunner
from meshagent.agents.agent import AgentCallContext
from meshagent.agents.adapter import LLMAdapter, ToolResponseAdapter
from meshagent.otel import otel_config

otel_config(service_name="llm-taskrunner")
log = logging.getLogger("llm-taskrunner")


class LLMTaskRunner(TaskRunner):
    """
    A Task Runner that uses an LLM execution loop until the task is complete.
    """

    def __init__(
        self,
        *,
        name: str,
        llm_adapter: LLMAdapter,
        title: Optional[str] = None,
        description: Optional[str] = None,
        tool_adapter: Optional[ToolResponseAdapter] = None,
        toolkits: Optional[list[Toolkit]] = None,
        requires: Optional[list[Requirement]] = None,
        supports_tools: bool = True,
        input_prompt: bool = True,
        input_schema: Optional[dict] = None,
        output_schema: dict | None = None,
        rules: Optional[list[str]] = None,
        labels: Optional[list[str]] = None,
    ):
        if input_schema is None:
            if input_prompt:
                input_schema = prompt_schema(
                    description="use a prompt to generate content"
                )
            else:
                input_schema = {
                    "type": "object",
                    "additionalProperties": False,
                    "required": [],
                    "properties": {},
                }

        if output_schema is None:
            output_schema = {
                "type": "object",
                "additionalProperties": False,
                "required": ["result"],
                "properties": {"result": {"type": "string"}},
            }
        elif not isinstance(output_schema, dict):
            raise TypeError("output_schema must be a dict or None")

        static_toolkits = list(toolkits or [])

        super().__init__(
            name=name,
            title=title,
            description=description,
            input_schema=input_schema,
            output_schema=output_schema,
            requires=requires,
            supports_tools=supports_tools,
            labels=labels,
            toolkits=static_toolkits,
        )

        self._extra_rules = rules or []
        self._llm_adapter = llm_adapter
        self._tool_adapter = tool_adapter
        self.toolkits = static_toolkits

    async def init_chat_context(self):
        chat = self._llm_adapter.create_chat_context()
        if self._extra_rules:
            chat.append_rules(self._extra_rules)
        return chat

    async def ask(self, context: AgentCallContext, arguments: dict):
        prompt = arguments.get("prompt")
        if prompt is None:
            raise ValueError("`prompt` is required")

        context.chat.append_user_message(prompt)

        combined_toolkits: list[Toolkit] = [*self.toolkits, *context.toolkits]

        log.info(f"Running agent with prompt: {prompt}")
        log.info(f"Running agent with self.toolkits: {self.toolkits}")
        log.info(f"Running agent with context.toolkits: {context.toolkits}")

        resp = await self._llm_adapter.next(
            context=context.chat,
            room=context.room,
            toolkits=combined_toolkits,
            tool_adapter=self._tool_adapter,
            output_schema=self.output_schema,
        )

        # Validate the LLM output against the declared schema
        try:
            validate(instance=resp, schema=self.output_schema)
        except ValidationError as exc:
            log.error(f"LLM output failed schema validation: {exc}")
            raise RuntimeError("LLM output failed schema validation") from exc

        return resp


class DynamicLLMTaskRunner(LLMTaskRunner):
    """
    Same capabilities as LLMTaskRunner, but the caller supplies an arbitrary JSON-schema (`output_schema`) at runtime
    """

    def __init__(
        self,
        *,
        name: str,
        llm_adapter: LLMAdapter,
        supports_tools: bool = True,
        title: Optional[str] = None,
        description: Optional[str] = None,
        tool_adapter: Optional[ToolResponseAdapter] = None,
        toolkits: Optional[list[Toolkit]] = None,
        rules: Optional[list[str]] = None,
    ):
        input_schema = merge(
            schema=prompt_schema(description="use a prompt to generate content"),
            additional_properties={"output_schema": {"type": "object"}},
        )
        super().__init__(
            name=name,
            llm_adapter=llm_adapter,
            supports_tools=supports_tools,
            title=title,
            description=description,
            tool_adapter=tool_adapter,
            toolkits=toolkits,
            rules=rules,
            input_prompt=True,
            input_schema=input_schema,
            output_schema={"type": "object"},
        )

    async def ask(self, context: AgentCallContext, arguments: dict):
        prompt = arguments.get("prompt")
        if prompt is None:
            raise ValueError("`prompt` is required")

        # Parse and pass JSON output schema provided at runtime
        output_schema_raw = arguments.get("output_schema")
        if output_schema_raw is None:
            raise ValueError("`output_schema` is required for DynamicLLMTaskRunner")

        # Convert JSON string → dict if needed
        if isinstance(output_schema_raw, str):
            try:
                output_schema_raw = json.loads(output_schema_raw)
            except json.JSONDecodeError as exc:
                raise ValueError("`output_schema` must be valid JSON") from exc

        context.chat.append_user_message(prompt)

        combined_toolkits: list[Toolkit] = [*self.toolkits, *context.toolkits]

        log.info(f"Running agent with prompt: {prompt}")
        log.info(f"Running agent with self.toolkits: {self.toolkits}")
        log.info(f"Running agent with context.toolkits: {context.toolkits}")

        resp = await self._llm_adapter.next(
            context=context.chat,
            room=context.room,
            toolkits=combined_toolkits,
            tool_adapter=self._tool_adapter,
            output_schema=output_schema_raw,
        )

        try:
            validate(instance=resp, schema=output_schema_raw)
        except ValidationError as exc:
            log.error(f"LLM output failed caller schema validation: {exc}")
            raise RuntimeError("LLM output failed caller schema validation") from exc

        return resp
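
Before wiring these classes into a service, note the arguments shape that DynamicLLMTaskRunner.ask expects. The output_schema may be passed as a dict or as a JSON string (strings are parsed with json.loads); the schema below is only an illustration:

arguments = {
    "prompt": "Extract the key facts from the meeting notes",
    "output_schema": {
        "type": "object",
        "additionalProperties": False,
        "required": ["facts"],
        "properties": {"facts": {"type": "array", "items": {"type": "string"}}},
    },
}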

Service Implementation

Now that we’ve defined our classes, let’s wrap each of them with a MeshAgent ServiceHost so we can call them into the room as agents. This example uses the OpenAIResponsesAdapter with both the LLMTaskRunner (returns a fixed string response) and DynamicLLMTaskRunner (returns a schema defined at runtime).
import asyncio

# either import from other file or add this code to your existing file that defines the LLMTaskRunners
from llm_taskrunners import LLMTaskRunner, DynamicLLMTaskRunner
from meshagent.otel import otel_config
from meshagent.api.services import ServiceHost
from meshagent.openai import OpenAIResponsesAdapter

otel_config(service_name="llm-taskrunners")
service = ServiceHost()


@service.path(path="/llmtaskrunner", identity="llmtaskrunner")
class LLMRunner(LLMTaskRunner):
    def __init__(self):
        super().__init__(
            name="llmtaskrunner",
            title="LLM Task Runner",
            description="Returns {result: string} unless overridden.",
            llm_adapter=OpenAIResponsesAdapter(),
            toolkits=None,
            supports_tools=True,
            input_prompt=True,
            output_schema={
                "type": "object",
                "required": ["result"],
                "additionalProperties": False,
                "properties": {"result": {"type": "string"}},
            },
        )


@service.path(path="/dynamicllmtaskrunner", identity="dynamicllmtaskrunner")
class DynamicLLMRunner(DynamicLLMTaskRunner):
    def __init__(self):
        super().__init__(
            name="dynamicllmtaskrunner",
            title="Dynamic LLM TaskRunner",
            description="Prompt + caller‑supplied JSON Schema → structured output.",
            llm_adapter=OpenAIResponsesAdapter(),
        )


asyncio.run(service.run())

Running the Custom TaskRunners

From the terminal, use the MeshAgent CLI to start the service and call both agents into the room:
meshagent setup  # authenticate if not already connected
meshagent service run "main.py" --room=myroom

Next you can invoke the agents using the MeshAgent CLI, from code, or in MeshAgent Studio.

Option 1: Invoking from the CLI

You can use the MeshAgent CLI directly to connect to the room and invoke either TaskRunner.
meshagent agents ask \
--room myroom \
--agent llmtaskrunner \
--input '{"prompt":"Write a poem about ai agents"}'
Option 2: Invoking from Code

You can use the MeshAgent Python SDK directly to connect to the room and invoke either TaskRunner. Paste the following code into a file called invoke_llm_taskrunner.py and edit it as applicable. By default it invokes both the LLMTaskRunner and the DynamicLLMTaskRunner, using a default output schema defined in the file.

Be sure you have already exported your MESHAGENT_API_KEY. You can create and activate one by running meshagent api-key create <KEY_NAME> activate, which prints the value of the key once for you to copy. Save it and export it: export MESHAGENT_API_KEY=xxxxx
import os
import asyncio
import logging
from typing import Dict, Any
from meshagent.api import (
    ApiScope,
    ParticipantGrant,
    ParticipantToken,
    RoomClient,
    WebSocketClientProtocol,
)
from meshagent.api.helpers import websocket_room_url
from meshagent.otel import otel_config

otel_config(service_name="llm-taskrunner-demo")
log = logging.getLogger("llm-taskrunner-demo")

# ---- Simple configuration knobs (edit these) ----
ROOM_NAME = "myroom"
PROMPT_TEXT = "Create a product listing for a bluetooth speaker"
RUN_LLMTASKRUNNER = True
RUN_DYNAMIC = True
# -------------------------------------------------

API_KEY = os.getenv("MESHAGENT_API_KEY")
if not API_KEY:
    raise RuntimeError("Set MESHAGENT_API_KEY before running this script.")


def default_product_schema() -> dict:
    return {
        "type": "object",
        "additionalProperties": False,
        "required": ["title", "price", "features", "description"],
        "properties": {
            "title": {"type": "string"},
            "price": {"type": "number"},
            "features": {"type": "array", "items": {"type": "string"}},
            "description": {"type": "string"},
        },
    }


async def ask_agent(*, room_name: str, agent_name: str, arguments: Dict[str, Any]):
    token = ParticipantToken(
        name="sample-participant",
        grants=[
            ParticipantGrant(name="room", scope=room_name),
            ParticipantGrant(name="role", scope="agent"),
            ParticipantGrant(name="api", scope=ApiScope.agent_default()),
        ],
    ).to_jwt(api_key=API_KEY)

    protocol = WebSocketClientProtocol(
        url=websocket_room_url(room_name=room_name), token=token
    )

    async with RoomClient(protocol=protocol) as room:
        log.info("Connected to room: %s", room.room_name)
        resp = await room.agents.ask(agent=agent_name, arguments=arguments)
        log.info("Response from %s: %s", agent_name, resp)
        return resp


async def main():
    # 1) Fixed-schema runner
    if RUN_LLMTASKRUNNER:
        await ask_agent(
            room_name=ROOM_NAME,
            agent_name="llmtaskrunner",
            arguments={"prompt": PROMPT_TEXT},
        )

    # 2) Dynamic-schema runner
    if RUN_DYNAMIC:
        schema = default_product_schema()
        await ask_agent(
            room_name=ROOM_NAME,
            agent_name="dynamicllmtaskrunner",
            arguments={"prompt": PROMPT_TEXT, "output_schema": schema},
        )


if __name__ == "__main__":
    asyncio.run(main())

Now run the file:
python invoke_llm_taskrunner.py
Option 3: Invoking from the Studio

  1. Go to studio.meshagent.com
  2. Enter the room, myroom
  3. Open the menu and select “Run Task”
  4. Select LLM Task Runner from the agent dropdown. (Optional) Click “add tools” to give your new LLMTaskRunner additional tools
  5. Enter a prompt
  6. Review the results when the task completes

Next Steps

  • Dive Deeper into TaskRunners
  • Learn how to deploy agents with MeshAgent