The LLMAdapter is the base class that standardizes how MeshAgent communicates with any language model provider (OpenAI, Anthropic, self-hosted, etc.). You pick or implement an adapter for your provider and use it with any agent that accepts an llm_adapter parameter.
Why adapters exist
Different LLM providers have different:
- Request and response formats
 
- Tool-calling protocols
 
- Streaming APIs
 
- System role conventions (e.g., some models use “system”, others use “developer”)
 
- Termination signals
 
The adapter abstracts these differences so your agent logic remains provider-agnostic.
Adapter Responsibilities
The LLMAdapter defines the methods used to run a full conversation turn. For a concrete reference, see the OpenAI Responses Adapter.
- Input: Receive the current chat context and the toolkits selected for this turn.
 
- Call the model: Use the provider’s API/SDK to send the messages; stream tokens/events back.
 
- Handle tools: When the model requests a tool, execute it via the toolkit and capture the result (using a ToolResponseAdapter if provided).
- Update context: Add tool results back into the chat context so the model can use them in the same turn or a later one (depending on the agent).
 
- Output: Return the model’s final response for the turn.
 
Adapters append results to the in-memory chat context for the turn. Whether that context is persisted across turns is agent-specific (e.g., ChatBot saves to the thread document and is used for multi-turn conversations; a TaskRunner does not and is used for single-turn tasks).
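To make the flow concrete, here is a minimal sketch of one turn. It is not the MeshAgent API: the helper functions and method names (provider_call, execute_tool, context.append_tool_result, tool_response_adapter.convert) are stand-ins chosen for illustration; see the OpenAI Responses Adapter for the real signatures.

```python
# Illustrative only: every helper and method name below is a stand-in,
# not the actual MeshAgent API.
async def run_turn(provider_call, execute_tool, context, toolkits,
                   tool_response_adapter=None):
    while True:
        # Input + call the model: send the current messages plus tool
        # definitions derived from this turn's toolkits.
        response = await provider_call(messages=context.messages, toolkits=toolkits)

        # Output: no tool calls means the model is done with this turn.
        if not response.tool_calls:
            return response

        # Handle tools: execute each requested tool via its toolkit.
        for call in response.tool_calls:
            result = await execute_tool(toolkits, call)
            if tool_response_adapter is not None:
                # Convert the raw tool result into something the model
                # can consume (method name assumed).
                result = await tool_response_adapter.convert(result)

            # Update context: append the tool result so the model can
            # read it on the next loop iteration of the same turn.
            context.append_tool_result(call, result)
```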
What the LLMAdapter base class defines
default_model(): Return the model name (or identifier) the adapter should use when the caller does not override it. 
create_chat_context(): Creates a fresh AgentChatContext. Implementations can use this to set provider- and model-specific roles (e.g., some OpenAI models use a “developer” prompt instead of a “system” prompt). 
tool_providers(model: str): Can be used to advertise ToolkitBuilder instances this adapter can supply for a given model. This is one way to bundle built-in provider tools. Alternatively, agents can manage the builders themselves (see the ChatBot implementation for an example). 
check_for_termination(context, room): Lets you inspect the chat context to decide whether the conversation should continue, for example to check end-of-turn events based on provider semantics. 
next(...): The core method: given the chat context, room, toolkits for this turn, and an optional ToolResponseAdapter, call the underlying LLM, stream events, execute tool calls, inject tool results into the context, and return the final output. See OpenAI Responses Adapter for a detailed implementation. 
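Taken together, a subclass touches a surface shaped roughly like the skeleton below. The import path, exact signatures, and which methods are async are assumptions paraphrased from the descriptions above, so check the base class (or the OpenAI Responses Adapter) before copying it.

```python
from meshagent.agents import LLMAdapter  # assumed import path; check the SDK

class MyProviderAdapter(LLMAdapter):
    def default_model(self) -> str:
        # Model used when the caller does not specify one.
        return "my-provider-large-latest"

    def create_chat_context(self):
        # Return a fresh AgentChatContext, e.g. choosing a "developer"
        # vs. "system" role for the opening prompt.
        ...

    def tool_providers(self, model: str):
        # Optionally advertise ToolkitBuilder instances (built-in
        # provider tools) that make sense for this model.
        return []

    async def check_for_termination(self, context, room) -> bool:
        # Decide from the chat context whether the conversation is done.
        ...

    async def next(self, *, context, room, toolkits, tool_adapter=None):
        # Call the provider, stream events, run tool calls, update the
        # context, and return the final output for the turn.
        ...
```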
How agents use adapters (conversation turn flow)
- The agent resolves toolkits for this turn
 
- The agent calls llm_adapter.next(...) with the messages and toolkits
- The adapter streams events, executes tool calls, and returns the final result
 
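From the agent's side this is a single call; the snippet below shows the general shape, with argument names that are illustrative rather than the exact keyword arguments next() accepts.

```python
# Inside an agent's turn handler (names are illustrative).
toolkits = await self.resolve_toolkits(room)        # 1. pick this turn's toolkits
response = await self.llm_adapter.next(             # 2. hand the turn to the adapter
    context=chat_context,
    room=room,
    toolkits=toolkits,
    tool_adapter=self.tool_response_adapter,
)
# 3. by the time next() returns, events have been streamed and tool
#    calls executed; `response` is the model's final output for the turn.
```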
Implementing your own LLM Adapter
- Subclass LLMAdapter. Provide defaults for the model name and optionally a custom create_chat_context(). 
- Implement next(). Use your provider’s SDK or API to send the chat context, include tool definitions derived from the supplied toolkits, stream results, execute any tool calls, and append tool outputs back to the chat context. 
- Expose native tool builders (optional). You can include built-in toolkits with your adapter by overriding tool_providers() to return ToolkitBuilders so UIs can offer them as toggles. 
- Be mindful of cancellation and telemetry. Make long-running operations async, apply sensible timeouts/retries, and emit tracing data if you integrate with OpenTelemetry (the base adapters do); one way to do this is sketched below.
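For the last point, one possible pattern is to wrap the provider call in a timeout and a tracing span, continuing the skeleton above. The provider client and its generate() call are placeholders; only asyncio.wait_for and the OpenTelemetry tracer calls are real APIs.

```python
import asyncio

from opentelemetry import trace  # optional; only if you use OpenTelemetry

tracer = trace.get_tracer("my-provider-adapter")

class MyProviderAdapter(LLMAdapter):
    async def next(self, *, context, room, toolkits, tool_adapter=None):
        with tracer.start_as_current_span("llm.turn"):
            # Bound the provider call so a hung request cannot stall the turn.
            response = await asyncio.wait_for(
                self._client.generate(messages=context.messages),  # placeholder SDK call
                timeout=60,
            )
        # ...stream events, execute tool calls, update the context...
        return response
```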