Using Codex
Codex can be configured to use the RCD LLM Service through a custom provider in
~/.codex/config.toml.
Codex should be configured with the base URL https://llm.rcd.clemson.edu/v1. The
chat-completions compatibility endpoint is useful for some other clients, but
Codex needs only the base URL.
Codex depends on the service's POST /v1/responses compatibility layer. The RCD
LLM Service currently provides this through a shim, so Codex support is
functional but may change more often than the chat-completions endpoint.
1. Create An API Key
First, request an allocation, sign in to the service, and create an API key.
2. Export The API Key
Store the key in an environment variable before starting Codex:
```shell
export RCD_LLM_API_KEY="your-api-key-here"
```
If you want this to be available in new shells automatically, add the same line
to your shell startup file such as ~/.zshrc.
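As a quick sanity check before launching Codex, you can confirm the variable is actually set in your current shell (a minimal sketch; the value shown is a placeholder for your real key):

```shell
# Placeholder value -- substitute your real key
export RCD_LLM_API_KEY="your-api-key-here"

# Fail fast if the key is missing before starting Codex
if [ -z "$RCD_LLM_API_KEY" ]; then
  echo "RCD_LLM_API_KEY is not set" >&2
  exit 1
fi
echo "RCD_LLM_API_KEY is set"
```

If the variable is empty in a new shell, double-check that the export line was added to the startup file for the shell you actually use (e.g. ~/.zshrc for zsh, ~/.bashrc for bash).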
3. Choose A Model
Check the Available Models page for the exact model name to use.
4. Update ~/.codex/config.toml
Add a provider like the following:
```toml
model = "<model-name>"
model_provider = "rcd-llm"
web_search = "disabled"

[model_providers.rcd-llm]
name = "RCD LLM Service"
base_url = "https://llm.rcd.clemson.edu/v1"
env_key = "RCD_LLM_API_KEY"
```
Replace <model-name> with an exact model name from the Models
page.
The RCD LLM Service does not support server-side tools like web_search, so they
must be disabled for Codex to function correctly.
If you already have model, model_provider, or model_providers entries in
your config.toml, merge this snippet into your existing file instead of
creating duplicate keys.
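For illustration, a merged file that already contained another provider might look like the following (the existing-provider table is hypothetical; only the rcd-llm entries come from this guide):

```toml
model = "<model-name>"
model_provider = "rcd-llm"
web_search = "disabled"

# A provider you already had configured (hypothetical example)
[model_providers.existing-provider]
name = "Existing Provider"
base_url = "https://example.com/v1"
env_key = "EXISTING_API_KEY"

[model_providers.rcd-llm]
name = "RCD LLM Service"
base_url = "https://llm.rcd.clemson.edu/v1"
env_key = "RCD_LLM_API_KEY"
```

Note that there is a single top-level model and model_provider; switching providers means changing those two keys, not duplicating them.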
5. Start A New Codex Session
After saving ~/.codex/config.toml, start a new Codex session:

```shell
codex
```
Switching Between OpenAI and RCD LLM Service
Codex CLI now supports profiles, allowing you to switch between the OpenAI
backend and the RCD LLM Service more easily. To do this, add both an openai
profile and an rcd-llm profile, then set profile to whichever one you want to
use by default. For example, to make rcd-llm the default, use:
```toml
profile = "rcd-llm"

[profiles.openai]
model = "gpt-5.4"

[profiles.rcd-llm]
model_provider = "rcd-llm"
web_search = "disabled"
model = "glm-5.1-fp8"

[model_providers.rcd-llm]
name = "RCD LLM Service"
base_url = "https://llm.rcd.clemson.edu/v1"
env_key = "RCD_LLM_API_KEY"
```
Then, if you run codex, it will use the RCD LLM Service by default, but you
can run codex -p openai to use the OpenAI backend.
Model Catalog
We're piloting an endpoint that automatically generates a Codex-compatible model
catalog. This allows you to use the /model command in Codex to switch models
and provides some extra information to Codex about model capabilities (context
lengths, input modalities, etc.).
To use this, download the catalog and save it as
~/.codex/rcd_llms_model_catalog.json. Then update the model_catalog_json option
in ~/.codex/config.toml. For example, if using profiles, use this as the
profile:

```toml
[profiles.rcd-llm]
model_provider = "rcd-llm"
web_search = "disabled"
model = "glm-5.1-fp8"
model_catalog_json = "~/.codex/rcd_llms_model_catalog.json"
```
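If you are not using profiles, the same option should work at the top level of ~/.codex/config.toml (a sketch; the top-level placement is assumed from the profile example above, and the model name is one example from the Models page):

```toml
model = "glm-5.1-fp8"
model_provider = "rcd-llm"
web_search = "disabled"
model_catalog_json = "~/.codex/rcd_llms_model_catalog.json"
```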
The model catalog is a snapshot of the models available when you download it; it
will not update automatically when our model list changes. Download the catalog
again to refresh it.
Notes
- You must still be on the Clemson network or connected through VPN.
- If the service model list changes, update the model value in ~/.codex/config.toml.
- Keep the API key out of Git repositories, shared shell scripts, and support tickets.