TaskRunner is a base class that lets you bring your own logic — whether that means calling another framework, running business rules, or integrating an LLM. Most often, you’ll extend TaskRunner with an LLM to build agents that can reason, use tools, and handle complex tasks.
LLM TaskRunners are designed for focused execution — they start with a clean context each time, complete the task, and return a result. This makes them ideal when you have a defined workflow or process and simply want the agent to do the work. For example, generating content, processing documents, running scheduled jobs, or handling batch operations.
This differs from chat-based agents (like ChatBot), which maintain conversation history and support iterative, collaborative exchanges. Both can perform complex work and use the same toolkits, but LLM TaskRunners excel when:
- You have well-defined inputs and outputs
- The task doesn’t require multi-turn clarification or refinement with the user
- You need a fresh context window with each execution
- You’re running background jobs, batch tasks, or callable tools for other agents
Building an LLM TaskRunner
We’ll walk through extending the base TaskRunner with LLM capabilities. This example shows two implementations:
- LLMTaskRunner: Defines a fixed output schema at initialization. Best when you always want the same response format. By default, it accepts a text prompt and returns a string response.
- DynamicLLMTaskRunner: Accepts an output_schema parameter at runtime, letting you change the response structure for each request.
What You’re Adding to the Base TaskRunner
The base TaskRunner is intentionally simple: it handles the plumbing (schemas, validation, room integration) while you provide the execution logic. This example shows how to extend it with LLM capabilities, creating task-oriented agents that can reason through problems and return structured results.
We’ll expand on the base TaskRunner by adding:
- Chat Context: Initialize conversation state using the init_chat_context method.
- LLM Execution Loop: Implement an ask() function that calls .next() on the LLMAdapter, allowing the model to reason and use tools until the task completes.
- Schema Validation: Validate the LLM output against the declared schema to ensure consistency.
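The execution loop described above can be sketched in a few lines. This is a minimal, self-contained illustration, not the real SDK: the stub adapter, the ask() helper's signature, and the message shapes are all assumptions standing in for MeshAgent's actual LLMAdapter interface.

```python
import json

class StubAdapter:
    """Stands in for an LLMAdapter: each .next() call either requests a tool or finishes."""
    def __init__(self, steps):
        self._steps = iter(steps)

    def next(self, context):
        return next(self._steps)

def ask(adapter, toolkits, prompt, output_schema):
    # Fresh context for every task: the runner starts clean each time
    context = [{"role": "user", "content": prompt}]
    while True:
        step = adapter.next(context)
        if step["type"] == "tool_call":
            # Run the requested tool and feed its result back into the context
            tool = toolkits[step["name"]]
            context.append({"role": "tool", "name": step["name"],
                            "content": tool(**step["arguments"])})
        else:
            # Final answer: validate it against the declared output schema
            result = json.loads(step["content"])
            for field in output_schema["properties"]:
                if field not in result:
                    raise ValueError(f"missing required field: {field}")
            return result

# One tool call, then a schema-conforming final answer
adapter = StubAdapter([
    {"type": "tool_call", "name": "lookup", "arguments": {"query": "AAPL"}},
    {"type": "final", "content": json.dumps({"result": "AAPL is trading higher"})},
])
answer = ask(adapter, {"lookup": lambda query: "price: 210"}, "How is AAPL doing?",
             {"properties": {"result": {"type": "string"}}})
print(answer["result"])
```

The key idea is the loop: the model keeps reasoning and calling tools until it emits a final answer, which is then checked against the schema before being returned.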
Constructor Parameters
- llm_adapter: an LLM adapter used to integrate with an LLM. We recommend the OpenAIResponsesAdapter from meshagent-openai.
- supports_tools: whether the agent should support passing a custom set of tools at runtime (optional).
- tool_adapter: a custom tool adapter used to transform tool responses into context messages (optional).
- toolkits: specifies local toolkits for the agent. While it’s generally recommended to register toolkits with the room so any agent or user can use them, sometimes each agent needs its own instance of a toolkit, for instance with synchronized document authoring.
- requires: a list of requirements for the agent. Use RequiredSchema and RequiredToolkit to give this agent access to schemas and toolkits that have been registered with the room.
- input_prompt: whether the TaskRunner should accept a prompt as input. If true, the input should be in the format { "prompt": "text" }.
- input_schema: a JSON schema describing what arguments your agent accepts. If not provided and input_prompt is true, defaults to a prompt schema accepting text.
- output_schema: for LLMTaskRunner only, a JSON schema that responses must conform to (by default { "result": { "type": "string" } }). In DynamicLLMTaskRunner the output_schema is set dynamically at runtime.
- rules: a set of rules the task runner should follow while executing. Rules guide the behavior of the agent with system or developer prompts (optional).
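The default input and output shapes described above can be written out as plain JSON schemas. This is a sketch based on the defaults listed; the exact schema details (such as required fields) are assumptions.

```python
# Default input when input_prompt is true: a single text prompt
default_input_schema = {
    "type": "object",
    "properties": {"prompt": {"type": "string"}},
    "required": ["prompt"],  # assumption: prompt is required
}

# Default output for LLMTaskRunner: a single string result
default_output_schema = {
    "type": "object",
    "properties": {"result": {"type": "string"}},
    "required": ["result"],  # assumption: result is required
}

# A request matching the default input shape
example_input = {"prompt": "Summarize the quarterly report"}
print(sorted(default_input_schema["properties"]))
```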
Class Definitions
Here’s how to implement both LLMTaskRunner and DynamicLLMTaskRunner. These classes handle the LLM execution loop, schema validation, and toolkit integration:
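A self-contained skeleton of the two classes is shown below. The real base TaskRunner, chat-context, and adapter APIs live in the MeshAgent SDK; the stub base class, the single-step run(), and the EchoAdapter here are simplified assumptions meant only to show the shape of the implementation.

```python
import json

class TaskRunner:
    """Stub standing in for the MeshAgent base class (schemas, validation, room plumbing)."""
    def __init__(self, name, input_schema=None, output_schema=None):
        self.name = name
        self.input_schema = input_schema
        self.output_schema = output_schema

class LLMTaskRunner(TaskRunner):
    """Fixed output schema declared at initialization."""
    def __init__(self, name, llm_adapter, output_schema=None, rules=None):
        output_schema = output_schema or {
            "type": "object", "properties": {"result": {"type": "string"}}}
        super().__init__(
            name,
            input_schema={"type": "object", "properties": {"prompt": {"type": "string"}}},
            output_schema=output_schema)
        self.llm_adapter = llm_adapter
        self.rules = rules or []

    def run(self, arguments):
        # Fresh chat context on every execution (init_chat_context in the real SDK)
        context = [{"role": "system", "content": "\n".join(self.rules)},
                   {"role": "user", "content": arguments["prompt"]}]
        raw = self.llm_adapter.next(context)  # tool-use loop omitted for brevity
        result = json.loads(raw)
        # Validate the LLM output against the declared schema
        for field in self.output_schema["properties"]:
            if field not in result:
                raise ValueError(f"missing field: {field}")
        return result

class DynamicLLMTaskRunner(LLMTaskRunner):
    """Output schema supplied per request instead of at init."""
    def run(self, arguments):
        self.output_schema = arguments["output_schema"]
        return super().run(arguments)

class EchoAdapter:
    """Toy adapter: returns an uppercased echo as a schema-shaped JSON string."""
    def next(self, context):
        return json.dumps({"result": context[-1]["content"].upper()})

runner = LLMTaskRunner("summarizer", EchoAdapter())
print(runner.run({"prompt": "hello"}))
```

The only difference between the two classes is when the output schema is fixed: at construction time for LLMTaskRunner, or per request for DynamicLLMTaskRunner.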
Service Implementation
Now that we’ve defined our classes, let’s wrap each of them with a MeshAgent ServiceHost so we can call them into the room as agents.
This example uses the OpenAIResponsesAdapter with both the LLMTaskRunner (returns a fixed string response) and DynamicLLMTaskRunner (returns a schema defined at runtime).
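The service wiring might look like the following sketch. The route-per-agent pattern and the stub ServiceHost below are assumptions made so the snippet is self-contained; the real MeshAgent ServiceHost and OpenAIResponsesAdapter have their own signatures.

```python
class ServiceHost:
    """Stub for a service host: maps paths to agent factories."""
    def __init__(self):
        self.routes = {}

    def path(self, route):
        def register(factory):
            self.routes[route] = factory
            return factory
        return register

    def registered_paths(self):
        return sorted(self.routes)

service = ServiceHost()

@service.path("/llm-taskrunner")
def make_llm_taskrunner():
    # In the real service this would construct an LLMTaskRunner with an
    # OpenAIResponsesAdapter and hand it to the room.
    return {"agent": "LLMTaskRunner", "output": "fixed string schema"}

@service.path("/dynamic-llm-taskrunner")
def make_dynamic_taskrunner():
    # DynamicLLMTaskRunner: output schema supplied by the caller at runtime
    return {"agent": "DynamicLLMTaskRunner", "output": "runtime schema"}

print(service.registered_paths())
```

Each path gives the room a distinct agent to call, so both runners can be invoked independently from the same service.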
Running the Custom TaskRunners
From the terminal, use the MeshAgent CLI to start the service and call both agents into the room:
Then run invoke_llm_taskrunner.py, editing it as applicable. By default this will invoke both the LLMTaskRunner and DynamicLLMTaskRunner using a default output schema defined in the file.
Be sure you have already exported your MESHAGENT_API_KEY. You can create and activate one by running meshagent api-key create <KEY_NAME> activate, which prints the value of the key once for you to copy. Save it and export it: export MESHAGENT_API_KEY=xxxxx
- Go to studio.meshagent.com
- Enter the room, myroom
- Click menu -> “Run Task”
- Select LLM Task Runner from the agent dropdown. (Optional) Click “add tools” and add tools to your new LLMTaskRunner
- Enter a prompt
- Results appear when the task completes
Next Steps
Dive Deeper into TaskRunners
- TaskRunner Overview: Review what TaskRunners are and when to use them
- Use an existing agent with a MeshAgent TaskRunner: Learn how to take an existing PydanticAI Agent, use it as a TaskRunner, then use the TaskRunner as a tool for a MeshAgent
- ChatBot and Prebuilt MeshAgent TaskRunners: Learn how to use prebuilt TaskRunners that come out of the box with every MeshAgent room
- Services & Containers: Understand different options for running, deploying, and managing agents with MeshAgent
- Secrets & Registries: Learn how to store credentials securely for deployment