
feat: /login supports codex subscription login #438

Merged
claude-code-best merged 2 commits into claude-code-best:main from q1352013520:feature/codex-subscription
May 9, 2026

Conversation

Contributor

q1352013520 commented May 8, 2026

  1. /login adds ChatGPT subscription account login
    src/components/ConsoleOAuthFlow.tsx:461 adds a "ChatGPT account with subscription" option that goes through the OpenAI/Codex device auth flow. On successful login it writes:

    • modelType: "openai"
    • OPENAI_AUTH_MODE=chatgpt
  2. New ChatGPT/Codex auth management
    The new file src/services/api/openai/chatgptAuth.ts:1 is responsible for:

    • requesting a device code
    • polling for the authorization result
    • exchanging and refreshing tokens
    • saving credentials to ~/.claude/openai-chatgpt-auth.json
    • reading ~/.codex/auth.json for compatibility
    • deleting the ChatGPT auth file on /logout
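
    The request/poll/exchange steps above can be sketched roughly as below. This is an illustrative outline, not the actual chatgptAuth.ts code: the type shape, function names, and the token-fetch callback are assumptions.

    ```typescript
    // Hypothetical sketch of a device-code polling loop. The DeviceCode shape
    // and pollForToken signature are illustrative, not the repo's real exports.
    interface DeviceCode {
      deviceCode: string;
      userCode: string;
      verificationUrl: string;
      intervalSeconds: number;
    }

    async function pollForToken(
      device: DeviceCode,
      // Callback that asks the auth server whether the user approved yet;
      // returns the token once granted, or null while still pending.
      fetchToken: (deviceCode: string) => Promise<string | null>,
      maxAttempts = 60,
    ): Promise<string> {
      for (let attempt = 0; attempt < maxAttempts; attempt++) {
        const token = await fetchToken(device.deviceCode);
        if (token !== null) return token; // user approved in the browser
        // Respect the server-suggested polling interval between attempts.
        await new Promise(r => setTimeout(r, device.intervalSeconds * 1000));
      }
      throw new Error('Device authorization timed out');
    }
    ```

    The returned token would then be persisted (here, to ~/.claude/openai-chatgpt-auth.json) and refreshed when expired.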
  3. OpenAI provider gains a ChatGPT Responses backend
    src/services/api/openai/index.ts:340 now checks for OPENAI_AUTH_MODE=chatgpt:

    The new file src/services/api/openai/responsesAdapter.ts:165 converts the Responses API SSE stream into Anthropic internal events, covering text, reasoning, tool calls, usage, errors, and more.
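
    The shape of that conversion can be sketched as a per-frame mapping. The event names and payload fields below are assumptions based on this description, not the actual responsesAdapter.ts implementation:

    ```typescript
    // Illustrative sketch of mapping one Responses SSE frame to an internal
    // event. Frame type strings and field names here are assumptions.
    type InternalEvent =
      | { type: 'text_delta'; text: string }
      | { type: 'thinking_delta'; text: string }
      | { type: 'error'; message: string };

    function adaptFrame(frame: {
      type: string;
      delta?: string;
      message?: string;
    }): InternalEvent | null {
      switch (frame.type) {
        case 'response.output_text.delta':
          return { type: 'text_delta', text: frame.delta ?? '' };
        case 'response.reasoning_text.delta':
          return { type: 'thinking_delta', text: frame.delta ?? '' };
        case 'error':
          return { type: 'error', message: frame.message ?? 'unknown error' };
        default:
          return null; // unhandled frame types are skipped
      }
    }
    ```

    The real adapter also tracks open content blocks, tool-call arguments, and usage totals across frames; this only shows the per-frame dispatch idea.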

  4. New ChatGPT Codex model list and default models
    src/utils/model/chatgptModels.ts:10 adds the ChatGPT Codex models:

    • gpt-5.5
    • gpt-5.4
    • gpt-5.4-mini
    • gpt-5.3-codex
    • gpt-5.3-codex-spark
    • gpt-5.2

    The default primary model is gpt-5.5; fast/haiku-class models default to gpt-5.4-mini.
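
    Those defaults amount to a small selection rule, sketched here with illustrative names (the model IDs come from this PR; the constants and function are not the repo's actual exports):

    ```typescript
    // Hypothetical sketch of default-model selection under ChatGPT auth mode.
    const CHATGPT_CODEX_DEFAULT_MODEL = 'gpt-5.5';
    const CHATGPT_CODEX_FAST_MODEL = 'gpt-5.4-mini';

    function pickDefaultModel(
      provider: string,
      chatgptAuthMode: boolean,
      fast: boolean,
    ): string | null {
      if (provider === 'openai' && chatgptAuthMode) {
        return fast ? CHATGPT_CODEX_FAST_MODEL : CHATGPT_CODEX_DEFAULT_MODEL;
      }
      return null; // fall through to the provider's usual defaults
    }
    ```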

  5. Model picker and effort support for ChatGPT Codex
    src/components/ModelPicker.tsx:147 and src/utils/effort.ts:96 add xhigh effort support.
    For ChatGPT Codex models, max is remapped to xhigh, and the default effort is medium.
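
    That remapping can be sketched in a few lines (a sketch only; the real logic lives in src/utils/effort.ts and these names are illustrative):

    ```typescript
    // Illustrative max -> xhigh remapping for ChatGPT Codex models.
    type EffortLevel = 'low' | 'medium' | 'high' | 'xhigh' | 'max';

    const CHATGPT_CODEX_DEFAULT_EFFORT: EffortLevel = 'medium';

    function resolveCodexEffort(requested: EffortLevel | undefined): EffortLevel {
      // Codex models accept xhigh but not max, so max is remapped.
      if (requested === undefined) return CHATGPT_CODEX_DEFAULT_EFFORT;
      return requested === 'max' ? 'xhigh' : requested;
    }
    ```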

  6. /logout now clears ChatGPT login state
    src/commands/logout/logout.tsx:26 now clears the ChatGPT token and OPENAI_AUTH_MODE in addition to the existing Anthropic/API key cleanup. The prompt text also changes from "Anthropic account" to the more generic "configured account".

  7. OpenAI Compatible and ChatGPT subscription modes are mutually exclusive
    Saving an OpenAI Compatible configuration clears OPENAI_AUTH_MODE, so a previously selected ChatGPT subscription can no longer accidentally route requests through the ChatGPT Responses path.
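
    The exclusion boils down to the saved env payload explicitly unsetting the auth mode, roughly as below (a sketch under the assumption that undefined values are deleted from process.env, as described later in the review; the function name is illustrative):

    ```typescript
    // Hypothetical env payload built when saving an OpenAI Compatible config.
    function buildCompatibleEnvPayload(
      apiKey: string,
      baseUrl: string,
    ): Record<string, string | undefined> {
      return {
        OPENAI_API_KEY: apiKey,
        OPENAI_BASE_URL: baseUrl,
        // Explicitly unset so ChatGPT subscription mode cannot linger.
        OPENAI_AUTH_MODE: undefined,
      };
    }
    ```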

  8. An incidental buddy/companion compatibility fix
    src/buddy/companion.ts:137 adds species/rarity inference for legacy seedless companions, so old-format companions no longer drift identity when the userId roll changes.



Summary by CodeRabbit

  • New Features

    • ChatGPT subscription login via device‑code flow.
    • Added "xhigh" effort tier and ChatGPT Codex model options with adjusted defaults when using ChatGPT auth.
    • ChatGPT Responses streaming path supported for authenticated users.
  • Bug Fixes

    • More generic logout messaging and improved clearing of ChatGPT auth state.
    • Better handling of legacy companion identity so species/rarity are preserved.
    • Provider switching respects ChatGPT auth mode.
  • Tests

    • Added tests for companion inference and Responses effort handling.


Contributor

coderabbitai Bot commented May 8, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 604448ee-757e-4fb4-bc4f-18de843b848d

📥 Commits

Reviewing files that changed from the base of the PR and between c7cb3d8 and b52c10d.

📒 Files selected for processing (4)
  • src/commands/logout/logout.tsx
  • src/components/ConsoleOAuthFlow.tsx
  • src/components/ModelPicker.tsx
  • src/utils/model/chatgptModels.ts
🚧 Files skipped from review as they are similar to previous changes (4)
  • src/utils/model/chatgptModels.ts
  • src/commands/logout/logout.tsx
  • src/components/ModelPicker.tsx
  • src/components/ConsoleOAuthFlow.tsx

📝 Walkthrough


This PR adds ChatGPT device-code authentication and local token management, a ChatGPT Responses streaming adapter plus request-building and SSE parsing, xhigh effort support for ChatGPT Codex models with UI and model-selection wiring, provider/login/logout/settings updates, and companion legacy species/rarity inference with tests.

Changes

ChatGPT Device Login & Auth Flow

  • Auth Types & Constants (src/services/api/openai/chatgptAuth.ts): defines device-code, token, and auth shapes plus secure local storage helpers.
  • JWT & Network Helpers (src/services/api/openai/chatgptAuth.ts): implements JWT payload decoding and JSON/form POST helpers for the auth endpoints.
  • Device-Code Flow (src/services/api/openai/chatgptAuth.ts): requests a device code, polls for authorization, exchanges the code, refreshes tokens, and persists them.
  • Auth Public Helpers (src/services/api/openai/chatgptAuth.ts): exports isChatGPTAuthEnabled(), getValidChatGPTAuth(), and removeChatGPTAuth().
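
The JWT helper mentioned above (payload decoding only, no signature verification) can be sketched as follows; the function name is illustrative and the real chatgptAuth.ts may differ:

```typescript
// Minimal JWT payload decode: split the token, base64url-decode the middle
// segment, and parse it as JSON. No signature verification is performed.
function decodeJwtPayload(token: string): Record<string, unknown> {
  const parts = token.split('.');
  if (parts.length !== 3) throw new Error('Not a JWT');
  // JWTs use base64url; convert to standard base64 before decoding.
  const base64 = parts[1].replace(/-/g, '+').replace(/_/g, '/');
  return JSON.parse(Buffer.from(base64, 'base64').toString('utf8'));
}
```

Decoding without verifying is fine here because the client only reads its own token's claims (e.g. expiry); the server remains the authority on validity.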

Responses API Streaming & Adaptation

  • Request Building & Content Conversion (src/services/api/openai/responsesAdapter.ts): converts messages, tools, and tool-choice into the Responses API request format, including reasoningEffort and parallel_tool_calls.
  • SSE Parsing & Usage Mapping (src/services/api/openai/responsesAdapter.ts): parses SSE frames, extracts JSON payloads, and maps Responses usage/status to Anthropic metrics/reasons.
  • Stream Adaptation to Anthropic Format (src/services/api/openai/responsesAdapter.ts): adapts Responses events into the Anthropic BetaRawMessageStreamEvent sequence (content/thinking/tool blocks and completion).
  • Authenticated Fetch & HTTP Handling (src/services/api/openai/responsesAdapter.ts): performs the authenticated POST to the Responses endpoint and returns a parsed SSE iterable.
  • Request Building Tests (src/services/api/openai/__tests__/responsesAdapter.test.ts): tests the reasoningEffort → request.reasoning.effort mapping and the omission of unsupported max_output_tokens.

OpenAI Adapter & Streaming Selection

  • Effort Mapping & Imports (src/services/api/openai/index.ts): adds chatgpt auth and Responses adapter imports and maps options.effortValue → ResponsesReasoningEffort.
  • Runtime Branching (src/services/api/openai/index.ts): if ChatGPT auth is enabled, uses buildResponsesRequest + createChatGPTResponsesStream + adaptResponsesStreamToAnthropic; otherwise uses the OpenAI-compatible chat.completions path.

Extended Effort Levels (xhigh Tier)

  • Effort Detection & Resolution (src/utils/effort.ts): treats OpenAI + ChatGPT Codex models as supporting effort/xhigh, remaps resolved max → xhigh when applicable, and defaults Codex models to medium.
  • UI Effort Display & Cycling (src/components/ModelPicker.tsx): clamps the displayed effort and updates cycleEffortLevel to conditionally include xhigh/max based on model capabilities.
  • Command Help (src/commands/effort/effort.tsx): adds xhigh to the help usage and documents the max → xhigh mapping for ChatGPT Codex models.

ChatGPT Codex Model Integration

  • Codex Model Metadata (src/utils/model/chatgptModels.ts): adds the ChatGPTCodexModelOption type, default/fast constants, the option list, and the helpers isChatGPTAuthMode/isChatGPTCodexReasoningModel.
  • Model Defaults & Options (src/utils/model/model.ts, src/utils/model/modelOptions.ts): prefers CHATGPT_CODEX_* defaults and returns the Codex option list when the OpenAI provider and ChatGPT auth mode are active.

OAuth UI & Account Management

  • Console OAuth Flow (src/components/ConsoleOAuthFlow.tsx): adds the "ChatGPT account with subscription" device-code login path, opens the verification URL, persists OPENAI_AUTH_MODE: 'chatgpt', and updates process.env on success.
  • OpenAI-compatible Setup Save (src/components/ConsoleOAuthFlow.tsx): includes OPENAI_AUTH_MODE in the env payload and deletes process.env keys when a value is undefined.

Logout, Provider Switching & Managed Env

  • Provider Switch (src/commands/provider.ts): when OPENAI_AUTH_MODE === 'chatgpt', relaxes the requirement for OPENAI_API_KEY/OPENAI_BASE_URL during provider switch.
  • Logout Cleanup (src/commands/logout/logout.tsx): performLogout removes ChatGPT auth via removeChatGPTAuth() and clears the persisted OPENAI_AUTH_MODE via clearChatGPTSettingsAuthMode(); the success message is generalized.
  • Managed Env Vars (src/utils/managedEnvConstants.ts): adds OPENAI_AUTH_MODE to the provider-managed env vars set.

Legacy Companion Inference & Tests

  • Companion Inference (src/buddy/companion.ts): adds a hasWord helper and inferLegacyCompanionBones(stored) to infer species/rarity from `${name} ${personality}`, and merges legacyBones into the getCompanion output.
  • Companion Tests (src/buddy/__tests__/companion.test.ts): tests verify inference of species/rarity from legacy seedless text and that seeded stored data prevents override.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I nibbled on a device-code rhyme,

opened a browser, counted time,
xhigh hops into the model mix,
legacy bones peek from old tricks,
Codex codas hum — the login's fine.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 16.95%, below the required 80.00% threshold. Resolution: write docstrings for the functions that are missing them.
✅ Passed checks (4 passed)
  • Description Check — ✅ Passed: check skipped; CodeRabbit's high-level summary is enabled.
  • Title check — ✅ Passed: the title is in Chinese and refers to "codex subscription login" support, which directly aligns with the PR's primary objective of adding ChatGPT subscription login via device-code flow.
  • Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.





Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/commands/effort/effort.tsx (1)

115-119: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Error message omits the new xhigh option.

The help text on line 158 lists xhigh as a valid option, but the validation error here still reports Valid options are: low, medium, high, max, auto. Users typing xhigh would only succeed (since isEffortLevel presumably accepts it), but if they typo it, they will get a misleading list of valid options.

Proposed fix
   if (!isEffortLevel(normalized)) {
     return {
-      message: `Invalid argument: ${args}. Valid options are: low, medium, high, max, auto`,
+      message: `Invalid argument: ${args}. Valid options are: low, medium, high, xhigh, max, auto`,
     };
   }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/commands/effort/effort.tsx` around lines 115 - 119, The validation error
message for the effort level is out of date: when isEffortLevel(normalized)
fails the returned message lists valid options but omits "xhigh"; update the
error return in the same block to include "xhigh" (i.e., change the message
string that currently references args to list: low, medium, high, xhigh, max,
auto) so the feedback matches the accepted values checked by isEffortLevel;
ensure you edit the same conditional that uses isEffortLevel(normalized) and
returns the message object referring to args/normalized.
src/utils/model/model.ts (1)

124-181: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Inconsistent handling of OPENAI_DEFAULT_*_MODEL overrides in ChatGPT auth mode.

getSmallFastModel (line 44–46) respects OPENAI_SMALL_FAST_MODEL even in ChatGPT auth mode (process.env.OPENAI_SMALL_FAST_MODEL ?? CHATGPT_CODEX_FAST_MODEL), but getDefaultOpusModel, getDefaultSonnetModel, and getDefaultHaikuModel short-circuit to the Codex constant before checking OPENAI_DEFAULT_OPUS_MODEL / _SONNET_MODEL / _HAIKU_MODEL (lines 130, 158, 183). Either:

  • The intent is to lock the four default getters to Codex IDs in ChatGPT mode — in which case getSmallFastModel should also ignore OPENAI_SMALL_FAST_MODEL, or
  • Power users should be able to override Codex defaults via OPENAI_DEFAULT_*_MODEL — in which case the three model getters should mirror the small-fast pattern.

Please align these four functions to one policy.

Option A — allow env overrides everywhere (mirrors `getSmallFastModel`)
 export function getDefaultOpusModel(): ModelName {
   const provider = getAPIProvider()
   if (provider === 'openai' && isChatGPTAuthMode()) {
-    return CHATGPT_CODEX_DEFAULT_MODEL
+    return process.env.OPENAI_DEFAULT_OPUS_MODEL ?? CHATGPT_CODEX_DEFAULT_MODEL
   }
   ...
 export function getDefaultSonnetModel(): ModelName {
   const provider = getAPIProvider()
   if (provider === 'openai' && isChatGPTAuthMode()) {
-    return CHATGPT_CODEX_DEFAULT_MODEL
+    return process.env.OPENAI_DEFAULT_SONNET_MODEL ?? CHATGPT_CODEX_DEFAULT_MODEL
   }
   ...
 export function getDefaultHaikuModel(): ModelName {
   const provider = getAPIProvider()
   if (provider === 'openai' && isChatGPTAuthMode()) {
-    return CHATGPT_CODEX_FAST_MODEL
+    return process.env.OPENAI_DEFAULT_HAIKU_MODEL ?? CHATGPT_CODEX_FAST_MODEL
   }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/utils/model/model.ts` around lines 124 - 181, The three getters
getDefaultOpusModel, getDefaultSonnetModel, and getDefaultHaikuModel
short-circuit to CHATGPT_* Codex constants in ChatGPT auth mode and ignore
OPENAI_DEFAULT_*_MODEL; mirror the pattern used in getSmallFastModel so OpenAI
env overrides are respected in ChatGPT auth mode: inside each function when
provider === 'openai' && isChatGPTAuthMode() return
process.env.OPENAI_DEFAULT_OPUS_MODEL ?? CHATGPT_CODEX_DEFAULT_MODEL (and
similarly use OPENAI_DEFAULT_SONNET_MODEL ?? CHATGPT_CODEX_DEFAULT_MODEL for
getDefaultSonnetModel, and OPENAI_DEFAULT_HAIKU_MODEL ??
CHATGPT_CODEX_FAST_MODEL for getDefaultHaikuModel), keeping the rest of the
provider-specific checks unchanged.
🧹 Nitpick comments (5)
src/buddy/companion.ts (1)

131-134: ⚡ Quick win

Escape dynamic regex tokens in hasWord before building RegExp.

word is interpolated directly into the regex pattern. It’s currently sourced from constants, but escaping keeps this safe and robust if values ever include regex metacharacters.

Suggested patch
+function escapeRegExp(value: string): string {
+  return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
+}
+
 function hasWord(text: string, word: string): boolean {
-  return new RegExp(`(^|${WORD_BOUNDARY})${word}($|${WORD_BOUNDARY})`).test(
+  const safeWord = escapeRegExp(word.toLowerCase())
+  return new RegExp(`(^|${WORD_BOUNDARY})${safeWord}($|${WORD_BOUNDARY})`).test(
     text,
   )
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/buddy/companion.ts` around lines 131 - 134, The hasWord function
interpolates the dynamic variable word directly into a RegExp, which can break
or be exploitable if word contains regex metacharacters; fix it by escaping
regex-special characters in word before constructing the RegExp (e.g., add or
use an escapeRegExp helper that replaces characters like -\/\\^$*+?.()|[]{} with
escaped versions) and then build the pattern using the escaped value with the
existing WORD_BOUNDARY; update the call site inside hasWord to use the
escapedWord when creating the RegExp.
src/commands/provider.ts (1)

82-97: 💤 Low value

Provider switch trusts OPENAI_AUTH_MODE without verifying credentials exist.

When OPENAI_AUTH_MODE === 'chatgpt', /provider openai short-circuits the env-var validation, but it does not check that ChatGPT auth tokens are actually present (e.g., via the existence of ~/.claude/openai-chatgpt-auth.json or the codex fallback). A user with a stale OPENAI_AUTH_MODE env var but no completed /login flow gets a silent success here, and the failure surfaces only at the next API call. Consider adding a lightweight existence check or a hint message ("Run /login to complete ChatGPT subscription setup") when no auth file is present.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/commands/provider.ts` around lines 82 - 97, The provider switch trusts
OPENAI_AUTH_MODE === 'chatgpt' without verifying credentials; modify the logic
in the provider handling (the block using arg === 'openai' and getMergedEnv())
to perform a lightweight check for ChatGPT credentials before treating
hasChatGPTAuth as sufficient—add a helper like hasChatGPTCredentials() (or reuse
an existing auth-check helper) and call it when mergedEnv.OPENAI_AUTH_MODE ===
'chatgpt'; if the helper indicates credentials are missing, do not short-circuit
the env-var validation: updateSettingsForSource('userSettings', { modelType:
'openai' }) should still occur but return a message prompting the user to run
/login (or complete auth) and include which auth artifact is missing, similar to
the existing missing-env response.
src/utils/model/chatgptModels.ts (1)

47-52: 💤 Low value

[1m] suffix stripping is case-sensitive but other code uses case-insensitive matching.

Elsewhere in the codebase (e.g., parseUserSpecifiedModel in model.ts and getMarketingNameForModel), the [1m] suffix is matched case-insensitively (/\[1m]$/i or .toLowerCase().includes('[1m]')). Here, although model.toLowerCase() precedes the regex, the regex itself uses no flag, which works only because the input is already lowercased. This is correct but fragile — if a future change reorders or removes the .toLowerCase() call, the suffix matching will silently fail. Consider adding the i flag for defense in depth, or computing on the trimmed/normalized model string explicitly.

Proposed defensive change
 export function isChatGPTCodexReasoningModel(model: string): boolean {
-  const normalized = model.toLowerCase().replace(/\[1m\]$/, '')
+  const normalized = model.toLowerCase().replace(/\[1m\]$/i, '').trim()
   return CHATGPT_CODEX_MODEL_OPTIONS.some(
     option => option.value.toLowerCase() === normalized,
   )
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/utils/model/chatgptModels.ts` around lines 47 - 52, The
isChatGPTCodexReasoningModel function currently lowercases the model then strips
a trailing “[1m]” with a case-sensitive regex which is fragile; update the
normalization to defensively remove the suffix case-insensitively (e.g., use
/\[1m]$/i) or perform the strip on an explicitly normalized string before
comparing, ensuring the regex uses the i flag and keeping the comparison against
CHATGPT_CODEX_MODEL_OPTIONS via option.value.toLowerCase() === normalized.
src/services/api/openai/chatgptAuth.ts (1)

224-259: 💤 Low value

Abort signal is not honored during the polling sleep.

The check if (signal?.aborted) throw … only runs at the top of each loop iteration. If the user cancels mid-sleep (e.g., dismisses the device-code dialog), the cancellation can take up to intervalSeconds to take effect, and the cancellation error is generic rather than AbortError. Consider an abort-aware sleep so cancellation is responsive and surfaces consistently.

Proposed change
-    await new Promise(resolve =>
-      setTimeout(resolve, deviceCode.intervalSeconds * 1000),
-    )
+    await new Promise<void>((resolve, reject) => {
+      const timer = setTimeout(resolve, deviceCode.intervalSeconds * 1000)
+      signal?.addEventListener(
+        'abort',
+        () => {
+          clearTimeout(timer)
+          reject(new Error('ChatGPT login cancelled'))
+        },
+        { once: true },
+      )
+    })
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/services/api/openai/chatgptAuth.ts` around lines 224 - 259, The polling
sleep must be abort-aware so cancellation during the setTimeout is immediate and
surfaces as an AbortError; update pollForAuthorizationCode to replace the simple
await new Promise(resolve => setTimeout(...)) with an abort-aware sleep that (1)
immediately rejects if signal?.aborted, (2) attaches a
signal.addEventListener('abort') handler to reject the sleep with an Error whose
name is 'AbortError' (or a DOMException if available), and (3) clears the
timeout and removes the listener on completion; keep all other logic in
pollForAuthorizationCode the same and reference deviceCode.intervalSeconds for
the timeout duration.
src/services/api/openai/__tests__/responsesAdapter.test.ts (1)

1-27: ⚡ Quick win

Coverage is thin for the responses adapter.

These two tests only exercise buildResponsesRequest (reasoning-effort plumbing and absence of max_output_tokens). The PR's main correctness risk is in the SSE → Anthropic-event conversion path (text deltas, reasoning, tool calls, usage, error frames) that lives in the same adapter module. Consider adding tests for that streaming logic — even a single fixture-driven SSE script verifying the produced internal event sequence would meaningfully reduce regression risk for the new ChatGPT Codex path.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/services/api/openai/__tests__/responsesAdapter.test.ts` around lines 1 -
27, Add a unit test for the SSE → Anthropic-event conversion logic in the same
responsesAdapter module (the file that exports buildResponsesRequest) by
creating a fixture-driven test that feeds a sample SSE stream (containing text
deltas, reasoning frames, tool call/start/finish frames, usage, and an error
frame) into the adapter's SSE-to-event converter and asserts the produced
internal event sequence and payloads (text chunks, reasoning.effort, tool call
metadata, usage totals, and error frame shape) match expected values; put the
new test alongside the existing buildResponsesRequest tests, import the SSE
conversion function from the responsesAdapter module, use a deterministic
fixture string for the SSE script, and verify ordering and content of events to
cover the streaming path.
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

Inline comments:
In `@src/components/ConsoleOAuthFlow.tsx`:
- Around line 999-1014: The requestChatGPTDeviceCode call in ConsoleOAuthFlow
cannot be aborted because requestChatGPTDeviceCode and the shared postJSON
helper don't accept an AbortSignal; update requestChatGPTDeviceCode (in
chatgptAuth.ts) to take an optional signal parameter and forward it into
postJSON, update postJSON to accept and pass the signal to fetch, then pass
controller.signal from ConsoleOAuthFlow where requestChatGPTDeviceCode is
invoked (mirroring how pollForAuthorizationCode already accepts a signal).
- Around line 1049-1076: In ConsoleOAuthFlow.tsx add the missing Esc keybinding
for the ChatGPT device-code flow: when status.phase is 'requesting' or 'waiting'
(the same branch that renders the "Esc to go back" text) register
useKeybinding('confirm:no', () => setOAuthStatus({ state: 'idle' }), { context:
'Confirmation' }) so pressing Esc returns to idle; place this alongside the
other flows' keybinding registrations inside the component that renders the
device-code UI (refer to status.phase, status.deviceCode, and setOAuthStatus to
locate the correct spot).

In `@src/services/api/openai/chatgptAuth.ts`:
- Around line 52-57: codexAuthFilePath currently falls back to process.env.HOME
which is undefined on Windows; update the function to use the imported homedir()
(or process.env.USERPROFILE) as the home directory source so the returned path
matches the Codex CLI location. Locate codexAuthFilePath and replace the HOME
fallback logic with homedir() (or a homedir() || process.env.USERPROFILE) when
building the join(...) for 'auth.json' so the computed path is cross-platform.

In `@src/services/api/openai/responsesAdapter.ts`:
- Around line 419-441: When handling 'response.completed' /
'response.incomplete' in the block that currently closes textBlockOpen and
thinkingBlockOpen, also iterate the toolBlocks array (the toolBlocks entries
that have open: true) and for each open tool block yield a content_block_stop
event with that block's index and set its open flag to false so tool blocks are
force-closed; perform this before emitting the final message_delta/message_stop
(i.e., in the same branch handling response.completed/response.incomplete) to
ensure no unmatched content_block_start remains.
- Around line 193-222: In parseSSE wrap the JSON.parse(data) call in a try/catch
inside the async generator so a single malformed SSE frame doesn't break the
stream; when parsing fails, log or debug the parse error (include the raw data
string for context), skip yielding for that frame, and continue the generator
loop in parseSSE (do not rethrow). Refer to the parseSSE function and the block
that builds data from lines and currently calls JSON.parse(data) — change that
section to catch JSON errors, handle/log them, and proceed to the next frame.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1f922ca2-0e7b-4344-a698-cb45efe7635c

📥 Commits

Reviewing files that changed from the base of the PR and between 02dd796 and c7cb3d8.

📒 Files selected for processing (17)
  • src/buddy/__tests__/companion.test.ts
  • src/buddy/companion.ts
  • src/commands/effort/effort.tsx
  • src/commands/logout/index.ts
  • src/commands/logout/logout.tsx
  • src/commands/provider.ts
  • src/components/ConsoleOAuthFlow.tsx
  • src/components/ModelPicker.tsx
  • src/services/api/openai/__tests__/responsesAdapter.test.ts
  • src/services/api/openai/chatgptAuth.ts
  • src/services/api/openai/index.ts
  • src/services/api/openai/responsesAdapter.ts
  • src/utils/effort.ts
  • src/utils/managedEnvConstants.ts
  • src/utils/model/chatgptModels.ts
  • src/utils/model/model.ts
  • src/utils/model/modelOptions.ts

Comment on lines +999 to +1014
useEffect(() => {
if (startedRef.current) return;
startedRef.current = true;
let cancelled = false;
const controller = new AbortController();
async function runLogin() {
try {
const deviceCode = await requestChatGPTDeviceCode();
if (cancelled) return;
setOAuthStatus({
state: 'chatgpt_subscription',
phase: 'waiting',
deviceCode,
});
void openBrowser(deviceCode.verificationUrl);
await completeChatGPTDeviceLogin(deviceCode, controller.signal);
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify whether requestChatGPTDeviceCode accepts an AbortSignal parameter.
fd -t f 'chatgptAuth.ts' --exec rg -nP -C2 'export\s+(async\s+)?function\s+requestChatGPTDeviceCode' {} \;

Repository: claude-code-best/claude-code

Length of output: 229


🏁 Script executed:

#!/bin/bash
# Get the full implementation of requestChatGPTDeviceCode
fd -t f 'chatgptAuth.ts' --exec cat -n {} \; | head -250 | tail -100

Repository: claude-code-best/claude-code

Length of output: 3730


requestChatGPTDeviceCode() doesn't support abort signal — initial device-code request can't be cancelled.

The function doesn't accept an AbortSignal parameter, so if the request hangs (slow network, captive portal, server outage), unmount/cancel won't abort the in-flight fetch. Unlike pollForAuthorizationCode, which already accepts a signal, the device code request lacks cancellation support.

To fix, add signal parameter to both requestChatGPTDeviceCode and the postJSON helper, then pass controller.signal from line 1006:

-            const deviceCode = await requestChatGPTDeviceCode();
+            const deviceCode = await requestChatGPTDeviceCode(controller.signal);

Also update the function signatures in chatgptAuth.ts to accept and forward the signal.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/ConsoleOAuthFlow.tsx` around lines 999 - 1014, The
requestChatGPTDeviceCode call in ConsoleOAuthFlow cannot be aborted because
requestChatGPTDeviceCode and the shared postJSON helper don't accept an
AbortSignal; update requestChatGPTDeviceCode (in chatgptAuth.ts) to take an
optional signal parameter and forward it into postJSON, update postJSON to
accept and pass the signal to fetch, then pass controller.signal from
ConsoleOAuthFlow where requestChatGPTDeviceCode is invoked (mirroring how
pollForAuthorizationCode already accepts a signal).
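A hedged sketch of that signal plumbing. The endpoint URL, payload, and response shape are placeholders, not the real device-auth request:

```typescript
// postJSON accepts an optional AbortSignal and forwards it to fetch, so the
// in-flight request is torn down when the caller's controller aborts.
async function postJSON<T>(
  url: string,
  body: unknown,
  signal?: AbortSignal,
): Promise<T> {
  const res = await fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
    signal,
  })
  if (!res.ok) throw new Error(`HTTP ${res.status}`)
  return (await res.json()) as T
}

// requestChatGPTDeviceCode threads the signal through, mirroring how
// pollForAuthorizationCode already accepts one.
async function requestChatGPTDeviceCode(
  signal?: AbortSignal,
): Promise<{ userCode: string; verificationUrl: string }> {
  return postJSON<{ userCode: string; verificationUrl: string }>(
    'https://auth.example.invalid/device/code', // placeholder endpoint
    { client_id: 'placeholder-client-id' }, // placeholder payload
    signal,
  )
}
```

The call site in `ConsoleOAuthFlow` then becomes `await requestChatGPTDeviceCode(controller.signal)`.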

Comment on lines +1049 to +1076
return (
<Box flexDirection="column" gap={1}>
<Text bold>ChatGPT Account Setup</Text>
{status.phase === 'requesting' && (
<Box>
<Spinner />
<Text>Requesting sign-in code…</Text>
</Box>
)}
{status.phase === 'waiting' && status.deviceCode && (
<Box flexDirection="column" gap={1}>
<Text>Open this link and sign in with your ChatGPT account:</Text>
<Link url={status.deviceCode.verificationUrl}>
<Text dimColor>{status.deviceCode.verificationUrl}</Text>
</Link>
<Text>
Enter code: <Text bold>{status.deviceCode.userCode}</Text>
</Text>
<Box>
<Spinner />
<Text>Waiting for ChatGPT authorization…</Text>
</Box>
</Box>
)}
<Text dimColor>Esc to go back. Device codes expire after 15 minutes.</Text>
</Box>
);
}
⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Missing confirm:no (Esc) keybinding makes the displayed "Esc to go back" instruction non-functional.

The other login sub-cases (custom_platform, openai_chat_api, gemini_api) each register a useKeybinding('confirm:no', () => setOAuthStatus({ state: 'idle' }), { context: 'Confirmation' }) so the user can return to the menu while waiting. This case shows the same instruction at line 1073 but never registers the keybinding, so Esc does nothing during the requesting/waiting phases. Users who started this flow accidentally have no way to back out other than killing the process.

🛠️ Proposed fix
       useEffect(() => {
         ...
       }, [setOAuthStatus, onDone]);

+      useKeybinding(
+        'confirm:no',
+        () => {
+          setOAuthStatus({ state: 'idle' });
+        },
+        { context: 'Confirmation' },
+      );
+
       return (
         <Box flexDirection="column" gap={1}>
           <Text bold>ChatGPT Account Setup</Text>
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/components/ConsoleOAuthFlow.tsx` around lines 1049 - 1076, In
ConsoleOAuthFlow.tsx add the missing Esc keybinding for the ChatGPT device-code
flow: when status.phase is 'requesting' or 'waiting' (the same branch that
renders the "Esc to go back" text) register useKeybinding('confirm:no', () =>
setOAuthStatus({ state: 'idle' }), { context: 'Confirmation' }) so pressing Esc
returns to idle; place this alongside the other flows' keybinding registrations
inside the component that renders the device-code UI (refer to status.phase,
status.deviceCode, and setOAuthStatus to locate the correct spot).

Comment on lines +52 to +57
function codexAuthFilePath(): string {
return join(
process.env.CODEX_HOME ?? join(process.env.HOME ?? '', '.codex'),
'auth.json',
)
}
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Codex fallback path is not cross-platform.

process.env.HOME is not set on Windows; this fallback yields `\.codex\auth.json` (resolved relative to cwd) on Windows, which does not match where the Codex CLI stores its auth file. Use homedir() (already imported) or fall back to USERPROFILE for parity.

Proposed fix
 function codexAuthFilePath(): string {
   return join(
-    process.env.CODEX_HOME ?? join(process.env.HOME ?? '', '.codex'),
+    process.env.CODEX_HOME ?? join(homedir(), '.codex'),
     'auth.json',
   )
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/services/api/openai/chatgptAuth.ts` around lines 52 - 57,
codexAuthFilePath currently falls back to process.env.HOME which is undefined on
Windows; update the function to use the imported homedir() (or
process.env.USERPROFILE) as the home directory source so the returned path
matches the Codex CLI location. Locate codexAuthFilePath and replace the HOME
fallback logic with homedir() (or a homedir() || process.env.USERPROFILE) when
building the join(...) for 'auth.json' so the computed path is cross-platform.
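A runnable sketch of the proposed cross-platform path helper, assuming only the `CODEX_HOME` override behavior described above:

```typescript
import { homedir } from 'node:os'
import { join } from 'node:path'

// Cross-platform Codex auth path: CODEX_HOME wins when set; otherwise
// ~/.codex, where "~" comes from os.homedir() (which resolves via
// USERPROFILE on Windows rather than the POSIX-only HOME variable).
function codexAuthFilePath(): string {
  return join(process.env.CODEX_HOME ?? join(homedir(), '.codex'), 'auth.json')
}
```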

Comment on lines +193 to +222
async function* parseSSE(
response: Response,
): AsyncGenerator<Record<string, unknown>, void> {
if (!response.body) throw new Error('ChatGPT response did not include a body')
const reader = response.body.getReader()
const decoder = new TextDecoder()
let buffer = ''
while (true) {
const { done, value } = await reader.read()
if (done) break
buffer += decoder.decode(value, { stream: true })
let splitAt = buffer.indexOf('\n\n')
while (splitAt >= 0) {
const frame = buffer.slice(0, splitAt)
buffer = buffer.slice(splitAt + 2)
const data = frame
.split(/\r?\n/)
.filter(line => line.startsWith('data:'))
.map(line => line.slice(5).trimStart())
.join('\n')
if (data && data !== '[DONE]') {
const parsed = JSON.parse(data) as unknown
if (parsed && typeof parsed === 'object') {
yield parsed as Record<string, unknown>
}
}
splitAt = buffer.indexOf('\n\n')
}
}
}
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Wrap JSON.parse in try/catch so a single malformed SSE frame doesn't abort the entire stream.

A truncated or non-JSON data: payload (transient network glitch, server-injected keepalive, schema drift) will throw out of the generator and propagate to queryModelOpenAI's catch, which yields an api_error and discards any in-flight content blocks. Other SSE adapters in the codebase typically skip and log unparseable frames to keep the stream resilient.

🛡️ Proposed fix
       if (data && data !== '[DONE]') {
-        const parsed = JSON.parse(data) as unknown
-        if (parsed && typeof parsed === 'object') {
-          yield parsed as Record<string, unknown>
+        try {
+          const parsed = JSON.parse(data) as unknown
+          if (parsed && typeof parsed === 'object') {
+            yield parsed as Record<string, unknown>
+          }
+        } catch {
+          // Skip malformed SSE frame; stream continues.
         }
       }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/services/api/openai/responsesAdapter.ts` around lines 193 - 222, In
parseSSE wrap the JSON.parse(data) call in a try/catch inside the async
generator so a single malformed SSE frame doesn't break the stream; when parsing
fails, log or debug the parse error (include the raw data string for context),
skip yielding for that frame, and continue the generator loop in parseSSE (do
not rethrow). Refer to the parseSSE function and the block that builds data from
lines and currently calls JSON.parse(data) — change that section to catch JSON
errors, handle/log them, and proceed to the next frame.

Comment on lines +419 to +441
if (type === 'response.completed' || type === 'response.incomplete') {
if (textBlockOpen) {
yield {
type: 'content_block_stop',
index: currentContentIndex,
} as BetaRawMessageStreamEvent
textBlockOpen = false
}
if (thinkingBlockOpen) {
yield {
type: 'content_block_stop',
index: currentContentIndex,
} as BetaRawMessageStreamEvent
thinkingBlockOpen = false
}
const response = event.response as Record<string, unknown> | undefined
yield {
type: 'message_delta',
delta: { stop_reason: mapStopReason(response), stop_sequence: null },
usage: extractUsage(response),
} as unknown as BetaRawMessageStreamEvent
yield { type: 'message_stop' } as BetaRawMessageStreamEvent
}
⚠️ Potential issue | 🟡 Minor | ⚡ Quick win

Tool blocks aren't force-closed when response.completed/response.incomplete arrives early.

Text and thinking blocks are closed defensively here, but toolBlocks entries with open: true aren't iterated. The happy path emits response.output_item.done per tool before response.completed, but if the server ever completes a response with unfinished tool items (e.g., response.incomplete from token cap mid-tool-call), downstream consumers will see unbalanced content_block_start without a matching content_block_stop.

🛡️ Proposed fix
     if (type === 'response.completed' || type === 'response.incomplete') {
       if (textBlockOpen) {
         yield { type: 'content_block_stop', index: currentContentIndex } as BetaRawMessageStreamEvent
         textBlockOpen = false
       }
       if (thinkingBlockOpen) {
         yield { type: 'content_block_stop', index: currentContentIndex } as BetaRawMessageStreamEvent
         thinkingBlockOpen = false
       }
+      for (const block of toolBlocks.values()) {
+        if (block.open) {
+          yield { type: 'content_block_stop', index: block.contentIndex } as BetaRawMessageStreamEvent
+          block.open = false
+        }
+      }
       const response = event.response as Record<string, unknown> | undefined
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/services/api/openai/responsesAdapter.ts` around lines 419 - 441, When
handling 'response.completed' / 'response.incomplete' in the block that
currently closes textBlockOpen and thinkingBlockOpen, also iterate the
toolBlocks array (the toolBlocks entries that have open: true) and for each open
tool block yield a content_block_stop event with that block's index and set its
open flag to false so tool blocks are force-closed; perform this before emitting
the final message_delta/message_stop (i.e., in the same branch handling
response.completed/response.incomplete) to ensure no unmatched
content_block_start remains.

@claude-code-best
Owner

@q1352013520 please fix the CI

@claude-code-best claude-code-best merged commit df8c4f4 into claude-code-best:main May 9, 2026
3 checks passed