From 5fc866e8fe4e978ed35506acc51af9f211351d7d Mon Sep 17 00:00:00 2001
From: Peter Steinberger
Date: Sat, 24 Jan 2026 14:36:32 +0000
Subject: [PATCH] docs: add openai subscription faq

---
 docs/help/faq.md | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/docs/help/faq.md b/docs/help/faq.md
index 760ea0782..1a5ffd9d7 100644
--- a/docs/help/faq.md
+++ b/docs/help/faq.md
@@ -24,6 +24,7 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
 - [Do you support Claude subscription auth (Claude Code OAuth)?](#do-you-support-claude-subscription-auth-claude-code-oauth)
 - [Is AWS Bedrock supported?](#is-aws-bedrock-supported)
 - [How does Codex auth work?](#how-does-codex-auth-work)
+- [Do you support OpenAI subscription auth (Codex OAuth)?](#do-you-support-openai-subscription-auth-codex-oauth)
 - [Is a local model OK for casual chats?](#is-a-local-model-ok-for-casual-chats)
 - [How do I keep hosted model traffic in a specific region?](#how-do-i-keep-hosted-model-traffic-in-a-specific-region)
 - [Do I have to buy a Mac Mini to install this?](#do-i-have-to-buy-a-mac-mini-to-install-this)
@@ -336,6 +337,14 @@ Yes — via pi‑ai’s **Amazon Bedrock (Converse)** provider with **manual con
 
 Clawdbot supports **OpenAI Code (Codex)** via OAuth or by reusing your Codex CLI login (`~/.codex/auth.json`). The wizard can import the CLI login or run the OAuth flow and will set the default model to `openai-codex/gpt-5.2` when appropriate. See [Model providers](/concepts/model-providers) and [Wizard](/start/wizard).
 
+### Do you support OpenAI subscription auth (Codex OAuth)?
+
+Yes. Clawdbot fully supports **OpenAI Code (Codex) subscription OAuth** and can also reuse an
+existing Codex CLI login (`~/.codex/auth.json`) on the gateway host. The onboarding wizard
+can import the CLI login or run the OAuth flow for you.
+
+See [OAuth](/concepts/oauth), [Model providers](/concepts/model-providers), and [Wizard](/start/wizard).
+
 ### Is a local model OK for casual chats?
 
 Usually no. Clawdbot needs large context + strong safety; small cards truncate and leak. If you must, run the **largest** MiniMax M2.1 build you can locally (LM Studio) and see [/gateway/local-models](/gateway/local-models). Smaller/quantized models increase prompt-injection risk — see [Security](/gateway/security).