OpenAIResponsesAdapter is an LLMAdapter implementation. It enables MeshAgent agents to use the OpenAI Responses API, handling streaming, tool calls, and model-specific settings.
Key features
- Model defaults: Reads the model name from the constructor (`model=`) or the `OPENAI_MODEL` environment variable. Override per message by passing `model` in the chat payload; `ChatBot` forwards it to `next()`.
- Session context defaults: Creates `AgentSessionContext(system_role=None)` so system/developer prompts are driven by the caller or wrapper agent.
- Tool bundling: Converts the supplied toolkits into OpenAI tool definitions (both standard JSON function tools and OpenAI-native tools like `computer_use_preview`, `web_search_preview`, `image_generation`).
- Streaming support: Consumes the streaming response API, emitting events such as reasoning summaries, partial content, and tool call updates.
- Parallel tool calls: Optionally enables OpenAI’s `parallel_tool_calls` setting (disabled automatically for models that do not support it).
- Structured output: If `output_schema` is provided to `next()`, requests JSON schema output and validates the result.
- Automatic compaction: Uses OpenAI Responses auto-compaction (`context_management`) by default.
Constructor parameters
- `model` – default model name; can be overridden per message.
- `parallel_tool_calls` – request parallel tool execution when supported.
- `client` – reuse an existing `AsyncOpenAI` client; otherwise the adapter creates one via `meshagent.openai.proxy.get_client`.
- `response_options` – extra parameters passed to `responses.create`.
- `reasoning_effort` – populates the Responses API `reasoning` options.
- `provider` – label emitted in telemetry and logs.
- `log_requests` – when true, logs HTTP requests for debugging.
- `max_output_tokens` – cap output tokens per response; also used when deciding whether to compact the context.
- `context_management` – controls compaction behavior:
  - `auto` attaches `context_management` to each request and lets Responses handle compaction.
  - `standalone` uses a manual `responses.compact` preflight in the adapter.
  - `none` disables both auto and manual compaction.
- `compaction_threshold` – threshold used for compaction (`compact_threshold` in Responses `context_management` entries and the manual preflight trigger in `standalone` mode).
Tool provider integration
The adapter includes several builders and tools for OpenAI native tools. Agents can use them directly, or override them with agent-specific wrappers that add persistence (for example, the ChatBot’s thread-aware image generation builder that saves partial/final images to room storage and updates the thread document).

- Image generation – `ImageGenerationConfig`, `ImageGenerationToolkitBuilder`, `ImageGenerationTool`
- Local shell – `LocalShellConfig`, `LocalShellToolkitBuilder`, `LocalShellTool`
- MCP – `MCPConfig`, `MCPToolkitBuilder`, `MCPTool`
- Web search preview – `WebSearchConfig`, `WebSearchToolkitBuilder`, `WebSearchTool`
- File search – `FileSearchTool`
- Code interpreter – `CodeInterpreterTool`
- Reasoning – `ReasoningTool`
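To make the bundling concrete, here is a sketch of the mixed tool list the adapter might hand to the Responses API. The `get_weather` function tool is a made-up example; the real builders listed above produce the actual definitions.

```python
def bundle_tools() -> list[dict]:
    # A standard JSON function tool in the Responses API's flat format
    # (name and parameters at the top level of the entry)...
    function_tool = {
        "type": "function",
        "name": "get_weather",  # hypothetical example tool
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }
    # ...alongside OpenAI-native tool entries, which are selected by type.
    native_tools = [
        {"type": "web_search_preview"},
        {"type": "image_generation"},
    ]
    return [function_tool, *native_tools]
```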
Handling a turn
When `next()` is called, it:
- Bundles tools - Collects the tools from your toolkits and packages them for OpenAI’s API
- Calls the model - Sends messages and tools to OpenAI’s API
- Handles responses - Processes text, tool calls, or structured output
- Executes tools - When the model requests tools, executes them and formats results
- Loops - Continues calling the model with tool results until it produces a final answer
- Returns a result - Gives you the final output (text or structured data)
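The loop described in these steps can be sketched as a simplified simulation; `call_model` and `execute_tool` are stand-ins for the OpenAI Responses call and the toolkit dispatch, not the adapter's real interfaces.

```python
def run_turn(call_model, execute_tool, messages: list) -> str:
    # Keep calling the model; when it requests tools, execute them and
    # feed the results back until the model produces a final answer.
    while True:
        response = call_model(messages)
        tool_calls = response.get("tool_calls", [])
        if not tool_calls:
            return response["output_text"]  # final answer ends the loop
        for call in tool_calls:
            result = execute_tool(call["name"], call["arguments"])
            messages.append(
                {"role": "tool", "name": call["name"], "content": result}
            )
```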
Context compaction
`OpenAIResponsesAdapter` defaults to `context_management="auto"` and sends:
`context_management=[{"type": "compaction", "compact_threshold": 200000}]`
You can switch to:
- `context_management="standalone"` to use manual `responses.compact` before a turn when usage crosses the threshold.
- `context_management="none"` to disable compaction management.
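The mode switch can be sketched as follows; `compaction_entries` is a hypothetical helper, and the actual adapter internals may differ.

```python
def compaction_entries(mode: str, threshold: int = 200000):
    # "auto": attach context_management entries to each request and let
    # the Responses API handle compaction.
    if mode == "auto":
        return [{"type": "compaction", "compact_threshold": threshold}]
    # "standalone" compacts via a manual responses.compact preflight, and
    # "none" disables compaction, so neither attaches entries here.
    return None
```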
Related Topics
- Adapters Overview: Understand LLMAdapters and ToolResponseAdapters.
- LLM Proxy: How requests are routed and metered inside rooms.
- OpenAI Tool Response Adapter: How tool outputs are rendered back into the chat transcript.
- ChatBot Overview: Shows where the adapter is invoked in the overall agent flow.