Overview
This page is for engineers who want to understand how the meshagent process command maps to the implementation.
At a high level, the runtime works like this:
- channels convert outside inputs into process messages
- a supervisor routes those messages by thread_id
- one LLMAgentProcess handles each active thread
- a thread adapter writes runtime events back into the persisted thread model
What meshagent process actually builds
At the CLI level, meshagent process is a command group. In the current implementation, that command group reuses the chatbot command implementation and switches it into the process runtime mode.
When you run meshagent process, the CLI builds:
- one SingleRoomAgent
- zero or one ChatChannel
- zero or more MailChannel instances
- zero or more QueueChannel instances
- zero or more ToolkitChannel instances
- one AgentSupervisor
- one LLMAgentProcess per active thread
- one AgentProcessThreadAdapter per active thread
process is not a single SDK class called “ProcessAgent”. It is a runtime assembly of channels plus per-thread execution.
Runtime flow
In practice:
- a channel receives input from chat, mail, a queue, or a toolkit invocation
- the channel emits process messages, including a thread_id
- the supervisor finds or creates the process for that thread
- the thread-scoped LLMAgentProcess runs the turn
- the thread adapter records outputs, lifecycle state, tool activity, and status into the thread model
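The flow above can be traced end to end with a minimal, self-contained sketch. Every name here is a hypothetical stand-in for the SDK types, chosen only to make the data flow concrete.

```python
# thread_id -> transcript (stand-in for per-thread LLMAgentProcess state)
processes: dict[str, list[str]] = {}
# persisted thread view (stand-in for what the thread adapter writes)
thread_model: dict[str, list[str]] = {}

def channel_emit(source: str, text: str, thread_id: str) -> dict:
    # Steps 1-2: a channel converts outside input into a process message.
    return {"thread_id": thread_id, "source": source, "text": text}

def supervisor_route(message: dict) -> None:
    # Step 3: find or create the process for this thread.
    thread = processes.setdefault(message["thread_id"], [])
    # Step 4: the thread-scoped process runs the turn (faked here).
    reply = f"reply to {message['text']!r}"
    thread.append(reply)
    # Step 5: the thread adapter records the output into the thread model.
    thread_model.setdefault(message["thread_id"], []).append(reply)

supervisor_route(channel_emit("chat", "hello", thread_id="t1"))
supervisor_route(channel_emit("mail", "status?", thread_id="t2"))
```

After the two calls, the model holds two isolated threads keyed by thread_id, each with one recorded reply.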
Channel responsibilities
A channel has one job:
- receive input from some source
- convert it into process messages
- emit those messages into the supervisor
The built-in channels are ChatChannel, MailChannel, QueueChannel, and ToolkitChannel.
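The receive, convert, emit responsibilities can be expressed as a small interface. This is a sketch under assumed names, not the SDK's actual base class.

```python
from abc import ABC, abstractmethod
from typing import Callable

class BaseChannel(ABC):
    """Hypothetical channel shape: receive input, convert it, emit it."""
    def __init__(self, emit: Callable[[dict], None]):
        self._emit = emit  # hands process messages to the supervisor

    @abstractmethod
    def to_message(self, raw: object) -> dict:
        """Convert source-specific input into a process message with a thread_id."""

    def receive(self, raw: object) -> None:
        self._emit(self.to_message(raw))

class FakeMailChannel(BaseChannel):
    def to_message(self, raw):
        # One mailbox conversation maps to one thread (illustrative mapping).
        return {"thread_id": raw["conversation"], "text": raw["body"]}

outbox: list[dict] = []
channel = FakeMailChannel(outbox.append)
channel.receive({"conversation": "c-42", "body": "hi"})
```

The only channel-specific work is `to_message`: everything downstream of the emit call is source-agnostic.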
What AgentSupervisor does
AgentSupervisor is the router for the runtime.
Its main responsibilities are to:
- hold the active channels for the runtime
- accept messages emitted by those channels
- route those messages to the correct thread process
- create a new process when a thread is seen for the first time
- start and stop managed processes as the runtime lifecycle changes
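The routing and create-on-first-use responsibilities can be sketched in a few lines. These classes are hypothetical stand-ins, not the real AgentSupervisor or LLMAgentProcess.

```python
class FakeProcess:
    """Stand-in for LLMAgentProcess: owns one thread's execution state."""
    def __init__(self, thread_id: str):
        self.thread_id = thread_id
        self.handled: list[dict] = []

    def handle(self, message: dict) -> None:
        self.handled.append(message)

class FakeSupervisor:
    """Sketch of the router: one process per thread_id, created on first use."""
    def __init__(self):
        self._processes: dict[str, FakeProcess] = {}

    def route(self, message: dict) -> FakeProcess:
        tid = message["thread_id"]
        if tid not in self._processes:       # first time this thread is seen
            self._processes[tid] = FakeProcess(tid)
        proc = self._processes[tid]
        proc.handle(message)
        return proc

sup = FakeSupervisor()
a = sup.route({"thread_id": "t1", "text": "first"})
b = sup.route({"thread_id": "t1", "text": "second"})
c = sup.route({"thread_id": "t2", "text": "other"})
```

Two messages for t1 land in the same process object; t2 gets its own.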
Channel roles
The built-in channels each adapt a different source of input:
- ChatChannel: bridges room messages and thread-oriented chat interfaces
- MailChannel: turns inbound mail into turns and sends the resulting reply back through the mailbox flow
- QueueChannel: listens on a room queue and turns queue payloads into agent turns
- ToolkitChannel: exposes the agent as a callable toolkit so other participants can send a prompt and receive a reply
What LLMAgentProcess does
LLMAgentProcess is the thread-level execution loop for LLM-backed turns.
Each instance handles one active thread at a time and is responsible for:
- receiving routed messages for that thread
- running the agent logic
- managing turn lifecycle events
- coordinating tool calls and approvals
- emitting outputs back to the channels and thread adapter
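A turn loop with those responsibilities can be sketched as follows. This is a simplified, synchronous illustration under hypothetical names, not the real LLMAgentProcess API.

```python
def run_turn(messages, call_model, call_tool, on_event):
    """Hypothetical per-thread turn loop: lifecycle events, model call, tool calls."""
    on_event({"type": "turn.started"})
    try:
        result = call_model(messages)
        while "tool_call" in result:            # the model asked for a tool
            name, args = result["tool_call"]
            on_event({"type": "tool.started", "name": name})
            messages = messages + [{"role": "tool", "content": call_tool(name, args)}]
            on_event({"type": "tool.completed", "name": name})
            result = call_model(messages)       # resume with the tool result
        on_event({"type": "turn.completed"})
        return result["text"]
    except Exception as exc:
        on_event({"type": "turn.failed", "error": str(exc)})
        raise

events: list[dict] = []
calls = {"n": 0}

def fake_model(messages):
    calls["n"] += 1
    if calls["n"] == 1:
        return {"tool_call": ("search", {"q": "docs"})}  # first: request a tool
    return {"text": "done"}                              # then: final answer

answer = run_turn([], fake_model, lambda name, args: "tool-result", events.append)
```

The lifecycle events emitted here are what the thread adapter and channels consume downstream.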
Per-thread routing and queueing
The supervisor routes by thread_id and creates a thread process on first use.
This matters for two reasons:
- messages for the same thread go to the same LLMAgentProcess
- unrelated threads do not share one live execution state
Streaming and live turn state
The process runtime does not just wait for a final answer. It emits incremental state as a turn progresses. That includes:
- text deltas
- file output updates
- reasoning summary updates
- tool call progress and logs
- turn lifecycle events such as started, completed, failed, or interrupted
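A consumer of that incremental state accumulates deltas until the lifecycle says the turn is done. The event shapes below are invented for illustration; the runtime's actual event schema will differ.

```python
# Hypothetical stream of incremental turn events, one per event kind above.
stream = [
    {"type": "turn.started"},
    {"type": "text.delta", "delta": "Hel"},
    {"type": "text.delta", "delta": "lo"},
    {"type": "reasoning.summary", "text": "greeting the user"},
    {"type": "tool.log", "line": "lookup ok"},
    {"type": "turn.completed"},
]

text, done = "", False
for event in stream:
    if event["type"] == "text.delta":
        text += event["delta"]           # render partial output as it arrives
    elif event["type"] == "turn.completed":
        done = True                      # lifecycle event closes the turn
```

Because state arrives incrementally, a chat surface can show partial text and tool progress long before the final answer exists.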
Approvals, steering, and interrupts
Process also handles live control flows around an active turn:
- tool calls can pause and wait for approval
- turns can be steered with additional input
- turns can be interrupted and resumed or cancelled
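The approval flow amounts to a tool call that suspends until a decision arrives from outside the turn. A minimal asyncio sketch, with hypothetical names throughout:

```python
import asyncio

async def gated_tool_call(name, run_tool, wait_for_approval):
    """Hypothetical approval gate: the turn pauses until a decision arrives."""
    approved = await wait_for_approval(name)
    if not approved:
        return {"status": "rejected", "name": name}
    return {"status": "ok", "name": name, "result": run_tool()}

async def main():
    decision = asyncio.get_running_loop().create_future()
    task = asyncio.ensure_future(
        gated_tool_call("delete_file", lambda: "deleted", lambda _name: decision)
    )
    await asyncio.sleep(0)      # the tool call is now suspended on the approval
    decision.set_result(True)   # a human (or policy) approves from outside the turn
    return await task

outcome = asyncio.run(main())
```

Steering and interrupts follow the same pattern: an external signal resolves into the suspended turn rather than the turn polling for it.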
What AgentProcessThreadAdapter does
AgentProcessThreadAdapter is the bridge between the process runtime and the persisted thread representation.
It is responsible for turning process events into thread updates such as:
- messages and content items
- tool call state
- status updates
- turn lifecycle data
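Folding process events into those thread updates can be sketched as a single reducer. The event and thread shapes are assumptions for illustration, not the adapter's real schema.

```python
def apply_event(thread: dict, event: dict) -> None:
    """Hypothetical adapter step: fold one process event into the thread model."""
    kind = event["type"]
    if kind == "message":
        thread["messages"].append({"role": event["role"], "content": event["content"]})
    elif kind == "tool":
        thread["tool_calls"][event["id"]] = event["state"]
    elif kind == "status":
        thread["status"] = event["status"]

thread = {"messages": [], "tool_calls": {}, "status": "idle"}
for ev in [
    {"type": "status", "status": "running"},
    {"type": "tool", "id": "call_1", "state": "completed"},
    {"type": "message", "role": "assistant", "content": "done"},
    {"type": "status", "status": "completed"},
]:
    apply_event(thread, ev)
```

The runtime stays free of persistence concerns; only the adapter knows how turns map onto the stored thread representation.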
Context isolation
The important rule is that context isolation is by thread_id, not by process.
Adapter-provided model behavior
Some behavior comes from the configured LLM adapter rather than from the process runtime itself. For example, streaming model events and reasoning summaries are surfaced through the process runtime, but capabilities such as automatic context compaction depend on whether the selected adapter supports them.

Where to go next
- Multi-channel Agents Overview: when to choose process and what problem it solves
- Using Multi-channel Agents: the practical CLI workflow
- Threads Overview: the persistence model process agents build on