Terminal-based coding agents such as OpenAI Codex CLI and Claude Code use LLMs to reason about code, generate changes, and complete development tasks from the terminal. By default, these tools send model requests directly to a provider such as OpenAI or Anthropic. The MeshAgent LLM Proxy gives those CLIs a MeshAgent endpoint instead. MeshAgent forwards each request to the provider configured for the target room and includes the traffic in MeshAgent token usage reporting. Use this when you want people to run coding agents through MeshAgent instead of authenticating directly with OpenAI or Anthropic.

Why use the LLM Proxy

  • Let someone use Codex CLI or Claude Code through MeshAgent even if they do not already have their own OpenAI or Anthropic account set up for that CLI.
  • Centralize provider access in MeshAgent instead of configuring raw provider credentials on each developer machine or CI job.
  • Route Codex CLI, Claude Code, and similar tools through the same MeshAgent endpoint pattern.
  • Keep token usage reporting inside MeshAgent while the CLI continues to work like a normal OpenAI- or Anthropic-compatible client.

How it works

The LLM Proxy exposes stable MeshAgent endpoints for both OpenAI-compatible and Anthropic-compatible clients:
  • OpenAI-compatible clients such as Codex CLI: https://api.meshagent.com/openai/v1
  • Anthropic-compatible clients such as Claude Code: https://api.meshagent.com/anthropic
Instead of authenticating directly with the provider, the CLI authenticates to MeshAgent using a participant token. The proxy validates the token, reads the target room from it, injects the real provider key server-side, forwards the request, and reports token usage for the traffic sent through it. The coding agent itself is unchanged: it still drives the task, decides when to call the model, and interprets responses. MeshAgent acts as the LLM gateway.
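As a sketch of what a proxied request looks like on the wire, here is the shape of an OpenAI-compatible call pointed at the MeshAgent endpoint. This assumes the standard chat-completions path; the JWT is a placeholder, and the proxy swaps it for the real provider key before forwarding:

```python
import json
import urllib.request

MESHAGENT_BASE = "https://api.meshagent.com/openai/v1"
ROOM_TOKEN = "eyJ..."  # placeholder participant token JWT

# An OpenAI-compatible client pointed at the proxy sends a request
# shaped like this: the bearer credential is the MeshAgent JWT,
# not a provider API key.
req = urllib.request.Request(
    f"{MESHAGENT_BASE}/chat/completions",
    data=json.dumps({
        "model": "gpt-5.4",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "Authorization": f"Bearer {ROOM_TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.full_url)
print(req.get_header("Authorization"))
```

Any client that lets you override the base URL and API key can be routed this way; the CLIs below just make those two settings configurable.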

Setup: Shared first steps

These steps apply regardless of which CLI agent you are using.

1. Authenticate to MeshAgent and create an API key

If you haven’t set up MeshAgent yet, start with meshagent setup. Then create an API key. Copy the printed value — it is shown only once.
meshagent api-key create cli-agent-key --activate

2. Create a token spec and generate a participant token JWT

The MeshAgent LLM Proxy requires a participant token (JWT) as the bearer credential. The API key from Step 1 is used to sign it. The JWT is scoped to a specific room, and it must include the api.llm grant or the proxy will reject requests. Create a token spec file (contains no secrets — safe to commit):
token-spec.yaml
version: v1
kind: ParticipantToken
room: my-room
identity: cli-agent
role: agent
api:
  llm: {}
Replace my-room with your room name, then generate the JWT:
meshagent token --input token-spec.yaml
This prints an eyJ... JWT. Store it somewhere secure; you will use it as the API key value in your CLI agent’s configuration.
Tokens do not expire by default. Regenerate at any time by re-running this command with the same spec file.
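Before wiring the JWT into a CLI, you can sanity-check its payload locally: a JWT is three base64url segments, and the middle one is the claims JSON. The claim names below (room, api) mirror the spec file but are illustrative assumptions, not a documented schema; inspect your own token to see the actual fields:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    """Decode the (unverified) claims segment of a JWT."""
    payload = token.split(".")[1]
    # Restore the base64 padding that JWT encoding strips.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Demonstrate with a locally built token body (claim names are assumptions):
claims = {"room": "my-room", "identity": "cli-agent", "api": {"llm": {}}}
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
fake_jwt = f"eyJhbGciOiJIUzI1NiJ9.{body}.signature"

# Confirm which room the token is scoped to before handing it to a CLI.
print(jwt_claims(fake_jwt)["room"])
```

Note this decodes without verifying the signature, which is fine for inspecting your own token but is not an authentication check.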

Codex CLI

Codex CLI supports custom model providers in ~/.codex/config.toml, including custom base URLs and API key env vars — making it a natural fit for the MeshAgent LLM Proxy.

Step 1: Install Codex

brew install openai-codex

Step 2: Store the JWT as an environment variable

Add the JWT from the shared setup to your shell environment:
export MESHAGENT_ROOM_TOKEN="eyJ..."   # your full JWT here
For a persistent setup, add the same line to your shell profile such as ~/.zshrc or ~/.bashrc, then open a new terminal or reload your shell before starting Codex.

Step 3: Configure ~/.codex/config.toml

Open the config file:
nano ~/.codex/config.toml
Then add a MeshAgent provider and profile:
~/.codex/config.toml
[model_providers.meshagent]
name = "MeshAgent"
base_url = "https://api.meshagent.com/openai/v1"
env_key = "MESHAGENT_ROOM_TOKEN"

[profiles.meshagent]
model_provider = "meshagent"
model = "gpt-5.4"
base_url: The MeshAgent OpenAI-compatible LLM proxy endpoint.
env_key: The name of the environment variable holding the JWT from the shared setup.
model: Any model available through the OpenAI API.

Step 4: Run Codex with the MeshAgent profile

codex --profile meshagent

Step 5: Verify Codex is using MeshAgent

After Codex starts, run:
/status
You should see Codex using the meshagent profile you configured in ~/.codex/config.toml, including output like:
Model provider:       MeshAgent - https://api.meshagent.com/openai/v1

Claude Code

Claude Code is Anthropic’s CLI coding agent. It uses the Anthropic API and can be pointed at a custom base URL via environment variables.

Step 1: Install Claude Code

curl -fsSL https://claude.ai/install.sh | bash

Step 2: Configure environment variables

Add the following to your shell profile such as ~/.zshrc or ~/.bashrc:
nano ~/.zshrc   # or ~/.bashrc for Bash users
export ANTHROPIC_BASE_URL="https://api.meshagent.com/anthropic"
export ANTHROPIC_AUTH_TOKEN="eyJ..."   # your JWT from the shared setup
export ANTHROPIC_API_KEY=""            # Important: Must be explicitly empty
Do not put these values in a project-level .env file. Configure them in your shell profile so Claude Code sees them in every session. ANTHROPIC_API_KEY must be explicitly set to an empty string.
After updating your shell profile, open a new terminal or reload your shell before starting Claude Code. You can also export ANTHROPIC_MODEL, for example claude-sonnet-4-6, or change the model while Claude Code is running with /model claude-sonnet-4-6.
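The empty-but-set requirement for ANTHROPIC_API_KEY is easy to get wrong, because an unset variable and a variable set to "" look identical in casual shell checks. A small sketch of the distinction, simulating the environment the shell profile above produces:

```python
import os

# Simulate the environment Claude Code would see after the exports above.
os.environ["ANTHROPIC_BASE_URL"] = "https://api.meshagent.com/anthropic"
os.environ["ANTHROPIC_AUTH_TOKEN"] = "eyJ..."  # placeholder JWT
os.environ["ANTHROPIC_API_KEY"] = ""           # set, but explicitly empty

# The variable is present even though its value is empty; deleting it
# (unset in the shell) would make this membership check fail instead.
print("ANTHROPIC_API_KEY" in os.environ)
print(os.environ["ANTHROPIC_API_KEY"] == "")
```

The same distinction applies in the shell: `unset ANTHROPIC_API_KEY` removes the variable, while `export ANTHROPIC_API_KEY=""` keeps it defined with an empty value, which is what this setup requires.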

Step 3: Run Claude Code

If you have previously logged into Claude with another account, run claude /logout before connecting through MeshAgent so cached credentials do not override this configuration.
claude

Step 4: Verify Claude Code is using MeshAgent

After Claude Code starts, run:
/status
You should see Claude Code using the environment variables you configured, including output like:
Auth token:          ANTHROPIC_AUTH_TOKEN
Anthropic base URL:  https://api.meshagent.com/anthropic
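For completeness, the Anthropic-compatible side of the proxy can be sketched the same way as the OpenAI side. This assumes the standard /v1/messages path and the usual anthropic-version header; the JWT is a placeholder, and the Authorization header is how a value in ANTHROPIC_AUTH_TOKEN is typically sent:

```python
import json
import urllib.request

BASE = "https://api.meshagent.com/anthropic"
ROOM_TOKEN = "eyJ..."  # placeholder participant token JWT

# The same bearer-JWT pattern as the OpenAI endpoint, with the
# Anthropic message shape and version header.
req = urllib.request.Request(
    f"{BASE}/v1/messages",
    data=json.dumps({
        "model": "claude-sonnet-4-6",
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode(),
    headers={
        "Authorization": f"Bearer {ROOM_TOKEN}",
        "anthropic-version": "2023-06-01",
        "Content-Type": "application/json",
    },
    method="POST",
)

print(req.full_url)
```

Only the base URL, path, and headers differ from the OpenAI-compatible sketch; the MeshAgent JWT is the credential in both cases.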