The OpenAI Responses adapter is our reference LLMAdapter implementation. It enables MeshAgent agents to use the OpenAI Responses API, handling streaming, tool calls, and model-specific settings.

Key features

  • Model defaults: Reads the model name from the constructor (model=) or the OPENAI_MODEL environment variable. Override per message by passing model in the chat payload; ChatBot forwards it to next().
  • System role selection: Adjusts the initial chat role (system, developer, etc.) based on the model name (o1, o3, computer-use, etc.).
  • Tool bundling: Converts the supplied toolkits into OpenAI tool definitions (both standard JSON function tools and OpenAI-native tools like computer_use_preview, web_search_preview, image_generation).
  • Streaming support: Consumes the streaming response API, emitting events such as reasoning summaries, partial content, and tool call updates.
  • Parallel tool calls: Optionally enables OpenAI’s parallel_tool_calls setting (disabled automatically for models that do not support it).

Constructor parameters

Python
OpenAIResponsesAdapter(
    model: str = "gpt-5",
    parallel_tool_calls: Optional[bool] = None,
    client: Optional[AsyncOpenAI] = None,
    response_options: Optional[dict] = None,
    reasoning_effort: Optional[str] = None,
    provider: str = "openai",
)
  • model – default model name; can be overridden per message.
  • parallel_tool_calls – request parallel tool execution when supported.
  • client – reuse an existing AsyncOpenAI client; otherwise the adapter creates one via meshagent.openai.proxy.get_client.
  • response_options – extra parameters passed to responses.create.
  • reasoning_effort – populates the Responses API reasoning options.
  • provider – label emitted in telemetry and logs.
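For illustration, a construction might look like the following. The import path for the adapter and the specific option values are assumptions for this sketch, not recommendations from the MeshAgent documentation.

Python
from openai import AsyncOpenAI
from meshagent.openai import OpenAIResponsesAdapter  # import path assumed

# Reuse an existing AsyncOpenAI client instead of letting the adapter
# create one via meshagent.openai.proxy.get_client.
client = AsyncOpenAI()

adapter = OpenAIResponsesAdapter(
    model="gpt-5",                               # default model; can still be overridden per message
    parallel_tool_calls=True,                    # request parallel tool execution when supported
    client=client,
    response_options={"max_output_tokens": 2048},  # extra kwargs forwarded to responses.create
    reasoning_effort="medium",                   # populates the Responses API reasoning options
)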

Tool provider integration

The adapter ships with builders and tool wrappers for OpenAI-native tools. Agents can use them directly, or override them with agent-specific wrappers that add persistence (for example, the ChatBot’s thread-aware image generation builder, which saves partial/final images to room storage and updates the thread document).
  • Image generation - ImageGenerationConfig, ImageGenerationToolkitBuilder, ImageGenerationTool
  • Local shell - LocalShellConfig, LocalShellToolkitBuilder, LocalShellTool
  • MCP - MCPConfig, MCPToolkitBuilder, MCPTool
  • Web search preview - WebSearchConfig, WebSearchToolkitBuilder, WebSearchTool
  • File Search - FileSearchTool
  • Code Interpreter - CodeInterpreterTool
  • Reasoning - ReasoningTool
Note: The adapter doesn’t “auto-register” these builders by default. Your agent decides which builders to expose each turn (e.g., ChatBot.get_thread_toolkit_builders(...)). This lets ChatBot substitute its thread-aware wrappers (like ChatBotThreadOpenAIImageGenerationToolkitBuilder).
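As a rough sketch of this pattern, an agent might expose only the builders it needs each turn. The import paths and the exact signature of the overridden hook are assumptions here; check the MeshAgent source for the real ones.

Python
from meshagent.agents.chat import ChatBot    # import path assumed
from meshagent.openai.tools import (         # import path assumed
    ImageGenerationToolkitBuilder,
    WebSearchToolkitBuilder,
)

class SearchAndDrawBot(ChatBot):
    # Hypothetical override of ChatBot.get_thread_toolkit_builders: return only the
    # OpenAI-native tool builders this agent should offer for the current turn.
    async def get_thread_toolkit_builders(self, *args, **kwargs):
        return [
            WebSearchToolkitBuilder(),
            ImageGenerationToolkitBuilder(),
        ]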

Handling a turn

When next() is called, it works through the following steps (a simplified sketch follows the list):
  1. Bundles tools - Collects the tools from your toolkits and packages them for OpenAI’s API
  2. Calls the model - Sends messages and tools to OpenAI’s API
  3. Handles responses - Processes text, tool calls, or structured output
  4. Executes tools - When the model requests tools, executes them and formats results
  5. Loops - Continues calling the model with tool results until it produces a final answer
  6. Returns Result - Gives you the final output (text or structured data)
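
To make the loop concrete, here is a minimal, self-contained version written directly against the OpenAI Responses API. It mirrors steps 1–6 in simplified form; it is not the adapter’s implementation, and the single get_time tool, the model name, and the result handling are illustrative only.

Python
import datetime
from openai import AsyncOpenAI

async def run_turn(client: AsyncOpenAI, user_text: str) -> str:
    # 1. Bundle tools: one illustrative function tool in Responses API format.
    tools = [{
        "type": "function",
        "name": "get_time",
        "description": "Return the current UTC time.",
        "parameters": {"type": "object", "properties": {}},
    }]
    inputs = [{"role": "user", "content": user_text}]
    while True:
        # 2. Call the model with the accumulated input and the tool definitions.
        response = await client.responses.create(model="gpt-5", input=inputs, tools=tools)
        tool_calls = [item for item in response.output if item.type == "function_call"]
        if not tool_calls:
            # 3./6. No tool requests: return the final text output.
            return response.output_text
        # 4. Execute each requested tool and format its result.
        inputs += response.output
        for call in tool_calls:
            result = datetime.datetime.now(datetime.timezone.utc).isoformat()
            inputs.append({
                "type": "function_call_output",
                "call_id": call.call_id,
                "output": result,
            })
        # 5. Loop: the next iteration sends the tool results back to the model.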