## Overview

A `Worker` is a specialized queue-based agent that processes messages sent to a MeshAgent room queue. Other agents or applications can push tasks to the queue, and the Worker handles them in the background. This is helpful for long-running or asynchronous jobs that shouldn't block an interactive chat agent.
## Two ways to build a Worker

- CLI: Run production-ready workers with a single command. Configure the queue, tools, and rules using flags. Ideal for most use cases.
- SDK: Extend the base `Worker` class with custom code when you need deeper integrations or specialized behaviors.
## In this guide you will learn

- When to use `Worker`
- How to run and deploy a `Worker` with the MeshAgent CLI
- How to build and deploy a `Worker` with the MeshAgent SDK
- How the `Worker` works, including constructor parameters, lifecycle, processing flow, and hooks
## When to use Worker
- Process background jobs pushed by apps or other agents
- Run long or repetitive jobs off the main chat thread
- Execute non-interactive tasks where no follow-up questions are desired
- Batch operations that process multiple items sequentially
- Schedule or trigger work externally and have the agent pick it up
## Run and deploy a Worker with the CLI
### Step 1: Run a Worker from the CLI
Let’s run a Worker that listens on a queue and can write files to room storage.
You can pass the `--room-rules "agents/worker/rules.txt"` flag and supply a file path for the rules; the file will be created if it does not already exist, and the path is relative to room storage. The `--queue` flag is required.
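For example, a sketch of the join command (only the `--queue` and `--room-rules` flags are confirmed above; the `--room` value and queue name are our own choices, so verify the full flag set with `meshagent worker join --help`):

```bash
meshagent worker join \
  --room=quickstart \
  --queue=tasks \
  --room-rules="agents/worker/rules.txt"
```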
Tip: Use `meshagent worker join --help` to see all available tools and options.
### Step 2: Send work to the queue
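However it is sent, a queue message is simply a JSON payload that the worker later hands to the LLM. A minimal sketch of what the poem-writing job from this guide might look like (the field names `task` and `path` are hypothetical; your worker's rules decide how the payload is interpreted):

```python
import json

# Hypothetical job payload: ask the worker to write a poem into room storage.
# The schema is up to you; by default the Worker serializes the whole message
# into the chat context as JSON.
message = {
    "task": "Write a short poem about AI",
    "path": "poems.txt",  # where the result should be saved in room storage
}

encoded = json.dumps(message)
print(encoded)
```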
### Step 3: Package and deploy the worker
Once your worker runs locally, package it as a service so it is always available in the room. Both options below deploy the exact same worker; choose based on your workflow:

- Option 1 (`meshagent worker deploy`): one command that deploys immediately (the fastest and easiest approach).
- Option 2 (`meshagent worker spec` + `meshagent service create`): generates a YAML file you can review or further customize before deploying.
With Option 2, we first generate a meshagent.yaml file that defines how our service should run, then deploy the agent to our room. The service spec can be dynamically generated from the CLI with `meshagent worker spec`; you can then create the service from the resulting meshagent.yaml file with `meshagent service create`.
The `--room` flag is optional. Without it, the worker is deployed at the project level and appears in all rooms in your project.
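Sketches of both options follow (the subcommands are named above, but the exact flags here are assumptions; check each command's `--help` before relying on them):

```bash
# Option 1: deploy in one step
meshagent worker deploy --room=quickstart --queue=tasks

# Option 2: generate a spec, review or customize it, then create the service
meshagent worker spec --queue=tasks > meshagent.yaml
meshagent service create --room=quickstart --file=meshagent.yaml
```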
## Build and deploy a Worker with the SDK
The SDK approach produces the same deployed worker as the CLI examples above. The SDK gives you further control to write custom Python code for specialized processing logic, integrations, or behaviors. For most use cases the CLI is sufficient.

### Step 1: Create a Worker Agent
The sample below shows a Worker that listens on a queue and writes files using the `StorageToolkit`. After starting the service you can push a message to the queue to trigger the worker.
First, create a Python file, `main.py`, and define our `StorageWorker`:
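A minimal sketch of what main.py might contain. The import paths below are assumptions (check the MeshAgent SDK reference for the real module layout), and the queue name "tasks" and the toolkit name are our own illustrative choices:

```python
# Sketch only: import paths are assumptions, not verified against the SDK.
from meshagent.api import RequiredToolkit            # assumed location
from meshagent.agents import Worker                  # assumed location
from meshagent.openai import OpenAIResponsesAdapter  # from the meshagent-openai package

class StorageWorker(Worker):
    """Listens on the "tasks" queue and writes results to room storage."""

    def __init__(self):
        super().__init__(
            queue="tasks",  # required: the room queue to consume
            llm_adapter=OpenAIResponsesAdapter(),
            # Ask the room to install the storage toolkit before processing
            # (the toolkit name here is illustrative).
            requires=[RequiredToolkit(name="storage")],
            rules=["You write requested files to room storage."],
        )
```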
### Step 2: Running the Worker
From your terminal, inside an activated virtual environment, start the service:
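For example, to run the service in the quickstart room:

```bash
meshagent service run "main.py" --room=quickstart
```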
### Step 3: Sending Work to the Queue
Now that our agent is running, let's send it some work!

Option 1: Using the MeshAgent CLI

Use the MeshAgent CLI to directly connect to the room, mint a token, and send a message to the queue.
Option 2: Using the Python SDK

Create a python file, push_queue.py, and define our function to push a message to the queue.
Make sure the worker is still running (`meshagent service run "main.py" --room=quickstart`). From a different tab in your terminal you can run the python file we created to send a task to the queue.
### Checking results in the Studio
From studio.meshagent.com open the room quickstart and you'll see our poem about AI in the poems.txt file!
### Step 4: Package and deploy the agent
To deploy your SDK Worker permanently, you'll package your code with a meshagent.yaml file that defines the service configuration, plus a container image that MeshAgent can run.
For full details on the service spec and deployment flow, see Packaging Services and Deploying Services.
MeshAgent supports two deployment patterns for containers:
- Runtime image + code mount (recommended): Use a pre-built MeshAgent runtime image (like python-sdk-slim) that contains Python and all MeshAgent dependencies, and mount your lightweight code-only image on top. This keeps your code image tiny (~KB), eliminates dependency installation time, and allows your service to start quickly.
- Single image: Bundle your code and all dependencies into one image. This is good when you need to install additional libraries, but can result in larger images and slower pulls. If you build your own images we recommend optimizing them with eStargz.
The default meshagent.yaml in this guide references the python-docs-examples code image so you can run the documentation sample without building your own image.
If you want to build and push your own code image, follow the steps below and update the storage.images entry in meshagent.yaml.
#### Prepare your project structure
This example organizes the agent code and configuration in the same folder, making each agent self-contained:
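A sketch of that layout, assuming the file names used elsewhere in this guide:

```
project-root/
├── Dockerfile            # shared by all samples in the project
└── queueworker/
    ├── queueworker.py    # the Worker agent code
    └── meshagent.yaml    # service spec for this agent
```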
Note: If you’re building a single agent, you only need the queueworker/ folder. The structure shown supports multiple samples sharing one Dockerfile.
#### Step 4a: Build a Docker container
If you want a code-only image, create a scratch Dockerfile and copy the files you want to run. This creates a minimal image that pairs with the runtime image + code mount pattern.
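A minimal sketch of such a Dockerfile (the paths assume the project structure above):

```dockerfile
# Code-only image: no OS, no Python, just the files to mount into the runtime image.
FROM scratch
COPY queueworker/ /queueworker/
```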
Build and push the image with `docker buildx`:
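For example (replace the tag with your own registry path; `--platform` and `--push` are standard buildx flags):

```bash
docker buildx build \
  --platform linux/amd64 \
  -t us-central1-docker.pkg.dev/<your-project>/<your-repo>/queueworker:latest \
  --push .
```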
Note: Building from the project root copies your entire project structure into the image. For a single agent, this is fine - your image will just contain one folder. For multi-agent projects, all agents will be in one image, but each can deploy independently using its own meshagent.yaml.
#### Step 4b: Package the agent
Define the service configuration in a meshagent.yaml file.
- Your code image contains `queueworker/queueworker.py`
- It's mounted at `/src` in the runtime container
- The command runs `python /src/queueworker/queueworker.py`
Note: The default YAML in the docs uses us-central1-docker.pkg.dev/meshagent-public/images/python-docs-examples so you can test this example immediately without building your own image first. Replace this with your actual image tag when deploying your own code.
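Putting those pieces together, the meshagent.yaml might look roughly like this. This is a sketch only: the image paths and command come from the notes above, but the field names are guesses, so consult the Packaging Services documentation for the actual schema:

```yaml
# Illustrative sketch; field names other than storage.images are assumptions.
name: queueworker
image: python-sdk-slim            # MeshAgent runtime image (Python + SDK deps)
command: "python /src/queueworker/queueworker.py"
storage:
  images:
    # Code image mounted at /src; replace with your own tag when deploying your code.
    - us-central1-docker.pkg.dev/meshagent-public/images/python-docs-examples
```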
#### Step 4c: Deploy the agent
Next, from the CLI in the directory containing your meshagent.yaml file, run:
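The command is presumably of this shape (the flag names are assumptions; run `meshagent service create --help` to confirm):

```bash
meshagent service create --room=quickstart --file=meshagent.yaml
```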
Your Worker is now deployed to the quickstart room! The agent will always be available inside the room for us to send queue-based tasks. We can use the same CLI commands or SDK code from above to send messages to the queue.
## How Worker Works
### Constructor Arguments
A `Worker` accepts everything from SingleRoomAgent (title, description, requires, labels). The name constructor argument is deprecated; agent identity comes from its participant token.
| Argument | Type | Description |
|---|---|---|
| queue | str | Required. Name of the room queue to listen for messages on. |
| llm_adapter | LLMAdapter | The adapter to use so the agent works with an LLM. We recommend the OpenAIResponsesAdapter from meshagent-openai. |
| tool_adapter | ToolResponseAdapter \| None | Optional adapter to translate tool outputs into LLM/context messages. We recommend the OpenAIResponsesToolResponseAdapter from meshagent-openai. |
| toolkits | list[Toolkit] \| None | Local toolkits always available to this worker (in addition to any requires). |
| rules | list[str] \| None | Optional system prompt/rules that guide the agent's behavior. |
| requires | list[Requirement] \| None | Schemas/toolkits to install in the room before processing. Use RequiredSchema and RequiredToolkit to reference schemas and toolkits that have been registered with the room. |
### Lifecycle Overview
- `await start(room)`: Connects to the room, installs requirements, and starts the background run() loop that consumes messages from the configured queue.
- `await stop()`: Signals the loop to stop, awaits the main task, then disconnects cleanly.
- `room` property: Access the active RoomClient for queues, storage, messaging, and tools.
### Processing Flow
1. Wait for work (long-polling). The worker listens for jobs using `message = await room.queues.receive(name=self._queue, create=True, wait=True)`. With wait=True, the call long-polls the queue: instead of returning immediately when the queue is empty, it waits asynchronously for a short window until a message arrives (or the wait times out). This is more efficient than tight polling and ensures the worker can pick up jobs immediately when they're published.
2. Build context and tools. Create a fresh chat context (`init_chat_context()`), apply any rules, and resolve toolkits from requires plus local toolkits.
3. Represent the job in context. By default, `append_message_context(...)` serializes the message as a user message (JSON). Override this to customize how the payload is injected.
4. Process the job. `process_message(...)` runs the task (the default implementation calls your `llm_adapter.next()` with the prepared context and toolkits).
5. Handle errors and keep running. Errors are logged. On receive failures, the loop backs off exponentially and retries; otherwise it immediately waits for the next message.
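The long-polling step can be illustrated with the standard library alone. This sketch mimics the wait-until-a-message-arrives behavior of receive(..., wait=True) using an asyncio.Queue; it is an analogy, not MeshAgent code:

```python
import asyncio

async def long_poll(queue: asyncio.Queue, timeout: float):
    """Return the next message, or None if nothing arrives within `timeout`.

    Like receive(..., wait=True), this does not return immediately on an
    empty queue; it waits asynchronously until a message arrives or the
    wait window elapses.
    """
    try:
        return await asyncio.wait_for(queue.get(), timeout)
    except asyncio.TimeoutError:
        return None

async def main():
    queue = asyncio.Queue()
    # Publish a message shortly after the consumer starts waiting.
    asyncio.get_running_loop().call_later(0.05, queue.put_nowait, {"task": "demo"})
    return await long_poll(queue, timeout=1.0)

result = asyncio.run(main())
print(result)  # {'task': 'demo'}
```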
### Key Behaviors and Hooks
- Queue consumption: `run()` long-polls the queue (`receive(..., wait=True)`), creating it if needed.
- Context building: `append_message_context(...)` inserts job data into the chat context (default: dump JSON as a user message).
- Job execution: `process_message(...)` is the main hook: it prepares context and tools, and (by default) calls the LLM adapter's `.next()` function to work through the task.
- Resilience: Errors are logged; `run()` applies an exponential backoff between receive retries.
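The backoff behavior can be sketched in plain Python. The base delay, multiplier, and cap below are illustrative values, not the Worker's actual defaults:

```python
def backoff_delays(attempts: int, base: float = 1.0, factor: float = 2.0, cap: float = 30.0):
    """Delay before each retry: base * factor**attempt, clamped to cap.

    Exponential backoff spaces out retries after repeated receive failures
    so a broken queue connection is not hammered in a tight loop.
    """
    return [min(base * factor ** attempt, cap) for attempt in range(attempts)]

print(backoff_delays(6))  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```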
### Key Methods
| Method | Description |
|---|---|
| async def run(room) | Main loop: receives queue messages and processes them. |
| async def process_message(chat_context, room, message, toolkits) | Core job handler; override for custom behavior. |
| async def append_message_context(room, message, chat_context) | How a job is represented in the chat context (override to change). |
| async def start(room) / async def stop() | Start/stop the worker and manage the background task. |
## Next Steps
Continue learning about Agents and explore a variety of other base agents, including:

- ChatBot for conversational text-based agents
- VoiceBot for conversational speech/voice-based agents
- TaskRunner for agents that run in the background with defined inputs and outputs
- MailBot for email-based agents