feat: /login supports Codex subscription login #438
Conversation
No actionable comments were generated in the recent review. 🎉
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (4)
🚧 Files skipped from review as they are similar to previous changes (4)
📝 Walkthrough
This PR adds ChatGPT device-code authentication and local token management, a ChatGPT Responses streaming adapter plus request-building and SSE parsing, xhigh effort support for ChatGPT Codex models with UI and model-selection wiring, provider/login/logout/settings updates, and companion legacy species/rarity inference with tests.
Changes
ChatGPT Device Login & Auth Flow
Responses API Streaming & Adaptation
OpenAI Adapter & Streaming Selection
Extended Effort Levels (xhigh Tier)
ChatGPT Codex Model Integration
OAuth UI & Account Management
Logout, Provider Switching & Managed Env
Legacy Companion Inference & Tests
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/commands/effort/effort.tsx (1)
115-119: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Error message omits the new xhigh option.
The help text on line 158 lists xhigh as a valid option, but the validation error here still reports "Valid options are: low, medium, high, max, auto". Users typing xhigh would succeed (since isEffortLevel presumably accepts it), but if they typo it, they will get a misleading list of valid options.
Proposed fix
  if (!isEffortLevel(normalized)) {
    return {
-     message: `Invalid argument: ${args}. Valid options are: low, medium, high, max, auto`,
+     message: `Invalid argument: ${args}. Valid options are: low, medium, high, xhigh, max, auto`,
    };
  }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/commands/effort/effort.tsx` around lines 115-119: the validation error message for the effort level is out of date; when isEffortLevel(normalized) fails, the returned message lists valid options but omits "xhigh". Update the error return in the same block to include "xhigh" (i.e., change the message string that currently references args to list: low, medium, high, xhigh, max, auto) so the feedback matches the values accepted by isEffortLevel; ensure you edit the same conditional that uses isEffortLevel(normalized) and returns the message object referring to args/normalized.

src/utils/model/model.ts (1)
124-181: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
Inconsistent handling of OPENAI_DEFAULT_*_MODEL overrides in ChatGPT auth mode.
getSmallFastModel (lines 44-46) respects OPENAI_SMALL_FAST_MODEL even in ChatGPT auth mode (process.env.OPENAI_SMALL_FAST_MODEL ?? CHATGPT_CODEX_FAST_MODEL), but getDefaultOpusModel, getDefaultSonnetModel, and getDefaultHaikuModel short-circuit to the Codex constant before checking OPENAI_DEFAULT_OPUS_MODEL / _SONNET_MODEL / _HAIKU_MODEL (lines 130, 158, 183). Either:
- The intent is to lock the four default getters to Codex IDs in ChatGPT mode — in which case getSmallFastModel should also ignore OPENAI_SMALL_FAST_MODEL, or
- Power users should be able to override Codex defaults via OPENAI_DEFAULT_*_MODEL — in which case the three model getters should mirror the small-fast pattern.
Please align these four functions to one policy.
Option A — allow env overrides everywhere (mirrors `getSmallFastModel`)
 export function getDefaultOpusModel(): ModelName {
   const provider = getAPIProvider()
   if (provider === 'openai' && isChatGPTAuthMode()) {
-    return CHATGPT_CODEX_DEFAULT_MODEL
+    return process.env.OPENAI_DEFAULT_OPUS_MODEL ?? CHATGPT_CODEX_DEFAULT_MODEL
   }
 ...
 export function getDefaultSonnetModel(): ModelName {
   const provider = getAPIProvider()
   if (provider === 'openai' && isChatGPTAuthMode()) {
-    return CHATGPT_CODEX_DEFAULT_MODEL
+    return process.env.OPENAI_DEFAULT_SONNET_MODEL ?? CHATGPT_CODEX_DEFAULT_MODEL
   }
 ...
 export function getDefaultHaikuModel(): ModelName {
   const provider = getAPIProvider()
   if (provider === 'openai' && isChatGPTAuthMode()) {
-    return CHATGPT_CODEX_FAST_MODEL
+    return process.env.OPENAI_DEFAULT_HAIKU_MODEL ?? CHATGPT_CODEX_FAST_MODEL
   }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/utils/model/model.ts` around lines 124-181: the three getters getDefaultOpusModel, getDefaultSonnetModel, and getDefaultHaikuModel short-circuit to CHATGPT_* Codex constants in ChatGPT auth mode and ignore OPENAI_DEFAULT_*_MODEL; mirror the pattern used in getSmallFastModel so OpenAI env overrides are respected in ChatGPT auth mode: inside each function, when provider === 'openai' && isChatGPTAuthMode(), return process.env.OPENAI_DEFAULT_OPUS_MODEL ?? CHATGPT_CODEX_DEFAULT_MODEL (and similarly use OPENAI_DEFAULT_SONNET_MODEL ?? CHATGPT_CODEX_DEFAULT_MODEL for getDefaultSonnetModel, and OPENAI_DEFAULT_HAIKU_MODEL ?? CHATGPT_CODEX_FAST_MODEL for getDefaultHaikuModel), keeping the rest of the provider-specific checks unchanged.
🧹 Nitpick comments (5)
src/buddy/companion.ts (1)
131-134: ⚡ Quick win
Escape dynamic regex tokens in hasWord before building RegExp.
word is interpolated directly into the regex pattern. It's currently sourced from constants, but escaping keeps this safe and robust if values ever include regex metacharacters.
Suggested patch
+function escapeRegExp(value: string): string {
+  return value.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')
+}
+
 function hasWord(text: string, word: string): boolean {
-  return new RegExp(`(^|${WORD_BOUNDARY})${word}($|${WORD_BOUNDARY})`).test(
+  const safeWord = escapeRegExp(word.toLowerCase())
+  return new RegExp(`(^|${WORD_BOUNDARY})${safeWord}($|${WORD_BOUNDARY})`).test(
     text,
   )
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/buddy/companion.ts` around lines 131-134: the hasWord function interpolates the dynamic variable word directly into a RegExp, which can break or be exploitable if word contains regex metacharacters; fix it by escaping regex-special characters in word before constructing the RegExp (e.g., add or use an escapeRegExp helper that replaces characters like -\/\\^$*+?.()|[]{} with escaped versions), then build the pattern using the escaped value with the existing WORD_BOUNDARY, and update the call site inside hasWord to use the escaped word when creating the RegExp.

src/commands/provider.ts (1)
82-97: 💤 Low value
Provider switch trusts OPENAI_AUTH_MODE without verifying credentials exist.
When OPENAI_AUTH_MODE === 'chatgpt', /provider openai short-circuits the env-var validation, but it does not check that ChatGPT auth tokens are actually present (e.g., via the existence of ~/.claude/openai-chatgpt-auth.json or the codex fallback). A user with a stale OPENAI_AUTH_MODE env var but no completed /login flow gets a silent success here, and the failure surfaces only at the next API call. Consider adding a lightweight existence check or a hint message ("Run /login to complete ChatGPT subscription setup") when no auth file is present.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/commands/provider.ts` around lines 82-97: the provider switch trusts OPENAI_AUTH_MODE === 'chatgpt' without verifying credentials; modify the provider handling (the block using arg === 'openai' and getMergedEnv()) to perform a lightweight check for ChatGPT credentials before treating hasChatGPTAuth as sufficient. Add a helper like hasChatGPTCredentials() (or reuse an existing auth-check helper) and call it when mergedEnv.OPENAI_AUTH_MODE === 'chatgpt'; if the helper indicates credentials are missing, do not short-circuit the env-var validation: updateSettingsForSource('userSettings', { modelType: 'openai' }) should still occur, but return a message prompting the user to run /login (or complete auth) and include which auth artifact is missing, similar to the existing missing-env response.
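A minimal sketch of the existence check this comment suggests. The helper name and the exact file locations are assumptions drawn from the paths mentioned above, not the module's actual API:

import { existsSync } from 'node:fs'
import { homedir } from 'node:os'
import { join } from 'node:path'

// Hypothetical helper: treat ChatGPT auth as configured only if either the
// token file written by /login or the Codex CLI fallback exists on disk.
function hasChatGPTCredentials(): boolean {
  const chatgptAuth = join(homedir(), '.claude', 'openai-chatgpt-auth.json')
  const codexFallback = join(
    process.env.CODEX_HOME ?? join(homedir(), '.codex'),
    'auth.json',
  )
  return existsSync(chatgptAuth) || existsSync(codexFallback)
}

// Possible use in the provider switch: hint instead of silently succeeding.
// if (mergedEnv.OPENAI_AUTH_MODE === 'chatgpt' && !hasChatGPTCredentials()) {
//   return { message: 'Run /login to complete ChatGPT subscription setup' }
// }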
src/utils/model/chatgptModels.ts (1)
47-52: 💤 Low value
[1m] suffix stripping is case-sensitive but other code uses case-insensitive matching.
Elsewhere in the codebase (e.g., parseUserSpecifiedModel in model.ts and getMarketingNameForModel), the [1m] suffix is matched case-insensitively (/\[1m]$/i or .toLowerCase().includes('[1m]')). Here, although model.toLowerCase() precedes the regex, the regex itself uses no flag, which works only because the input is already lowercased. This is correct but fragile — if a future change reorders or removes the .toLowerCase() call, the suffix matching will silently fail. Consider adding the i flag for defense in depth, or computing on the trimmed/normalized model string explicitly.
Proposed defensive change
 export function isChatGPTCodexReasoningModel(model: string): boolean {
-  const normalized = model.toLowerCase().replace(/\[1m\]$/, '')
+  const normalized = model.toLowerCase().replace(/\[1m\]$/i, '').trim()
   return CHATGPT_CODEX_MODEL_OPTIONS.some(
     option => option.value.toLowerCase() === normalized,
   )
 }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/utils/model/chatgptModels.ts` around lines 47-52: the isChatGPTCodexReasoningModel function currently lowercases the model and then strips a trailing "[1m]" with a case-sensitive regex, which is fragile; update the normalization to defensively remove the suffix case-insensitively (e.g., use /\[1m]$/i) or perform the strip on an explicitly normalized string before comparing, ensuring the regex uses the i flag and keeping the comparison against CHATGPT_CODEX_MODEL_OPTIONS via option.value.toLowerCase() === normalized.

src/services/api/openai/chatgptAuth.ts (1)
224-259: 💤 Low value
Abort signal is not honored during the polling sleep.
The check if (signal?.aborted) throw … only runs at the top of each loop iteration. If the user cancels mid-sleep (e.g., dismisses the device-code dialog), the cancellation can take up to intervalSeconds to take effect, and the cancellation error is generic rather than AbortError. Consider an abort-aware sleep so cancellation is responsive and surfaces consistently.
Proposed change
-    await new Promise(resolve =>
-      setTimeout(resolve, deviceCode.intervalSeconds * 1000),
-    )
+    await new Promise<void>((resolve, reject) => {
+      const timer = setTimeout(resolve, deviceCode.intervalSeconds * 1000)
+      signal?.addEventListener(
+        'abort',
+        () => {
+          clearTimeout(timer)
+          reject(new Error('ChatGPT login cancelled'))
+        },
+        { once: true },
+      )
+    })
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/services/api/openai/chatgptAuth.ts` around lines 224-259: the polling sleep must be abort-aware so cancellation during the setTimeout is immediate and surfaces as an AbortError; update pollForAuthorizationCode to replace the simple await new Promise(resolve => setTimeout(...)) with an abort-aware sleep that (1) immediately rejects if signal?.aborted, (2) attaches a signal.addEventListener('abort') handler to reject the sleep with an Error whose name is 'AbortError' (or a DOMException if available), and (3) clears the timeout and removes the listener on completion; keep all other logic in pollForAuthorizationCode the same and reference deviceCode.intervalSeconds for the timeout duration.

src/services/api/openai/__tests__/responsesAdapter.test.ts (1)
1-27: ⚡ Quick win
Coverage is thin for the responses adapter.
These two tests only exercise buildResponsesRequest (reasoning-effort plumbing and absence of max_output_tokens). The PR's main correctness risk is in the SSE → Anthropic-event conversion path (text deltas, reasoning, tool calls, usage, error frames) that lives in the same adapter module. Consider adding tests for that streaming logic — even a single fixture-driven SSE script verifying the produced internal event sequence would meaningfully reduce regression risk for the new ChatGPT Codex path.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/services/api/openai/__tests__/responsesAdapter.test.ts` around lines 1-27: add a unit test for the SSE → Anthropic-event conversion logic in the same responsesAdapter module (the file that exports buildResponsesRequest) by creating a fixture-driven test that feeds a sample SSE stream (containing text deltas, reasoning frames, tool call start/finish frames, usage, and an error frame) into the adapter's SSE-to-event converter and asserts the produced internal event sequence and payloads (text chunks, reasoning.effort, tool call metadata, usage totals, and error frame shape) match expected values; put the new test alongside the existing buildResponsesRequest tests, import the SSE conversion function from the responsesAdapter module, use a deterministic fixture string for the SSE script, and verify ordering and content of events to cover the streaming path.
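As a starting point, here is a hedged sketch of such a fixture-driven test. The exported converter name (written here as adaptResponsesStream), the non-terminal frame type names, and the test runner are all assumptions — only response.completed appears in the excerpts above — so the import, fixture, and expected sequence would need to be aligned with the real adapter before use:

import { expect, test } from 'vitest' // assumption: swap for the repo's actual runner

// Hypothetical export name for the SSE -> Anthropic-event converter.
import { adaptResponsesStream } from '../responsesAdapter'

// Deterministic SSE script: one text delta frame, then a completed response.
// The delta frame's type name is an assumption for illustration.
const sseFixture = [
  'data: {"type":"response.output_text.delta","delta":"Hi"}',
  '',
  'data: {"type":"response.completed","response":{"usage":{"input_tokens":3,"output_tokens":1}}}',
  '',
  'data: [DONE]',
  '',
].join('\n')

test('SSE frames convert to an ordered Anthropic event sequence', async () => {
  // A string body is exposed as a ReadableStream on Response in Node 18+.
  const response = new Response(sseFixture)
  const types: string[] = []
  for await (const event of adaptResponsesStream(response)) {
    types.push((event as { type: string }).type)
  }
  // Ordering expected by Anthropic streaming consumers.
  expect(types).toEqual([
    'message_start',
    'content_block_start',
    'content_block_delta',
    'content_block_stop',
    'message_delta',
    'message_stop',
  ])
})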
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1f922ca2-0e7b-4344-a698-cb45efe7635c
📒 Files selected for processing (17)
src/buddy/__tests__/companion.test.ts
src/buddy/companion.ts
src/commands/effort/effort.tsx
src/commands/logout/index.ts
src/commands/logout/logout.tsx
src/commands/provider.ts
src/components/ConsoleOAuthFlow.tsx
src/components/ModelPicker.tsx
src/services/api/openai/__tests__/responsesAdapter.test.ts
src/services/api/openai/chatgptAuth.ts
src/services/api/openai/index.ts
src/services/api/openai/responsesAdapter.ts
src/utils/effort.ts
src/utils/managedEnvConstants.ts
src/utils/model/chatgptModels.ts
src/utils/model/model.ts
src/utils/model/modelOptions.ts
useEffect(() => {
  if (startedRef.current) return;
  startedRef.current = true;
  let cancelled = false;
  const controller = new AbortController();
  async function runLogin() {
    try {
      const deviceCode = await requestChatGPTDeviceCode();
      if (cancelled) return;
      setOAuthStatus({
        state: 'chatgpt_subscription',
        phase: 'waiting',
        deviceCode,
      });
      void openBrowser(deviceCode.verificationUrl);
      await completeChatGPTDeviceLogin(deviceCode, controller.signal);
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Verify whether requestChatGPTDeviceCode accepts an AbortSignal parameter.
fd -t f 'chatgptAuth.ts' --exec rg -nP -C2 'export\s+(async\s+)?function\s+requestChatGPTDeviceCode' {} \;

Repository: claude-code-best/claude-code
Length of output: 229
🏁 Script executed:
#!/bin/bash
# Get the full implementation of requestChatGPTDeviceCode
fd -t f 'chatgptAuth.ts' --exec cat -n {} \; | head -250 | tail -100

Repository: claude-code-best/claude-code
Length of output: 3730
requestChatGPTDeviceCode() doesn't support abort signal — initial device-code request can't be cancelled.
The function doesn't accept an AbortSignal parameter, so if the request hangs (slow network, captive portal, server outage), unmount/cancel won't abort the in-flight fetch. Unlike pollForAuthorizationCode, which already accepts a signal, the device code request lacks cancellation support.
To fix, add signal parameter to both requestChatGPTDeviceCode and the postJSON helper, then pass controller.signal from line 1006:
- const deviceCode = await requestChatGPTDeviceCode();
+ const deviceCode = await requestChatGPTDeviceCode(controller.signal);
Also update the function signatures in chatgptAuth.ts to accept and forward the signal.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/components/ConsoleOAuthFlow.tsx` around lines 999 - 1014, The
requestChatGPTDeviceCode call in ConsoleOAuthFlow cannot be aborted because
requestChatGPTDeviceCode and the shared postJSON helper don't accept an
AbortSignal; update requestChatGPTDeviceCode (in chatgptAuth.ts) to take an
optional signal parameter and forward it into postJSON, update postJSON to
accept and pass the signal to fetch, then pass controller.signal from
ConsoleOAuthFlow where requestChatGPTDeviceCode is invoked (mirroring how
pollForAuthorizationCode already accepts a signal).
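A sketch of the signature threading described above, assuming postJSON is a thin fetch wrapper; the URL, request payload, and DeviceCode shape are illustrative stand-ins, not the module's actual definitions:

// Illustrative only — the real endpoint and payload live in chatgptAuth.ts.
const DEVICE_CODE_URL = 'https://example.com/oauth/device/code'

interface DeviceCode {
  userCode: string
  verificationUrl: string
  intervalSeconds: number
}

// postJSON gains an optional AbortSignal and forwards it to fetch, so the
// caller's AbortController can cancel an in-flight request.
async function postJSON(
  url: string,
  body: Record<string, unknown>,
  signal?: AbortSignal,
): Promise<Response> {
  return fetch(url, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
    signal,
  })
}

// requestChatGPTDeviceCode accepts the signal and threads it through,
// mirroring how pollForAuthorizationCode already takes one.
export async function requestChatGPTDeviceCode(
  signal?: AbortSignal,
): Promise<DeviceCode> {
  const response = await postJSON(DEVICE_CODE_URL, {}, signal)
  return (await response.json()) as DeviceCode
}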
return (
  <Box flexDirection="column" gap={1}>
    <Text bold>ChatGPT Account Setup</Text>
    {status.phase === 'requesting' && (
      <Box>
        <Spinner />
        <Text>Requesting sign-in code…</Text>
      </Box>
    )}
    {status.phase === 'waiting' && status.deviceCode && (
      <Box flexDirection="column" gap={1}>
        <Text>Open this link and sign in with your ChatGPT account:</Text>
        <Link url={status.deviceCode.verificationUrl}>
          <Text dimColor>{status.deviceCode.verificationUrl}</Text>
        </Link>
        <Text>
          Enter code: <Text bold>{status.deviceCode.userCode}</Text>
        </Text>
        <Box>
          <Spinner />
          <Text>Waiting for ChatGPT authorization…</Text>
        </Box>
      </Box>
    )}
    <Text dimColor>Esc to go back. Device codes expire after 15 minutes.</Text>
  </Box>
);
}
Missing confirm:no (Esc) keybinding makes the displayed "Esc to go back" instruction non-functional.
The other login sub-cases (custom_platform, openai_chat_api, gemini_api) each register a useKeybinding('confirm:no', () => setOAuthStatus({ state: 'idle' }), { context: 'Confirmation' }) so the user can return to the menu while waiting. This case shows the same instruction at line 1073 but never registers the keybinding, so Esc does nothing during the requesting/waiting phases. Users who started this flow accidentally have no way to back out other than killing the process.
🛠️ Proposed fix
useEffect(() => {
...
}, [setOAuthStatus, onDone]);
+ useKeybinding(
+ 'confirm:no',
+ () => {
+ setOAuthStatus({ state: 'idle' });
+ },
+ { context: 'Confirmation' },
+ );
+
return (
<Box flexDirection="column" gap={1}>
<Text bold>ChatGPT Account Setup</Text>🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/components/ConsoleOAuthFlow.tsx` around lines 1049 - 1076, In
ConsoleOAuthFlow.tsx add the missing Esc keybinding for the ChatGPT device-code
flow: when status.phase is 'requesting' or 'waiting' (the same branch that
renders the "Esc to go back" text) register useKeybinding('confirm:no', () =>
setOAuthStatus({ state: 'idle' }), { context: 'Confirmation' }) so pressing Esc
returns to idle; place this alongside the other flows' keybinding registrations
inside the component that renders the device-code UI (refer to status.phase,
status.deviceCode, and setOAuthStatus to locate the correct spot).
function codexAuthFilePath(): string {
  return join(
    process.env.CODEX_HOME ?? join(process.env.HOME ?? '', '.codex'),
    'auth.json',
  )
}
Codex fallback path is not cross-platform.
process.env.HOME is not set on Windows; the fallback then yields \.codex\auth.json (resolved relative to cwd), which does not match where the Codex CLI stores its auth file. Use homedir() (already imported) or fall back to USERPROFILE for parity.
Proposed fix
function codexAuthFilePath(): string {
return join(
- process.env.CODEX_HOME ?? join(process.env.HOME ?? '', '.codex'),
+ process.env.CODEX_HOME ?? join(homedir(), '.codex'),
'auth.json',
)
}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/services/api/openai/chatgptAuth.ts` around lines 52 - 57,
codexAuthFilePath currently falls back to process.env.HOME which is undefined on
Windows; update the function to use the imported homedir() (or
process.env.USERPROFILE) as the home directory source so the returned path
matches the Codex CLI location. Locate codexAuthFilePath and replace the HOME
fallback logic with homedir() (or a homedir() || process.env.USERPROFILE) when
building the join(...) for 'auth.json' so the computed path is cross-platform.
async function* parseSSE(
  response: Response,
): AsyncGenerator<Record<string, unknown>, void> {
  if (!response.body) throw new Error('ChatGPT response did not include a body')
  const reader = response.body.getReader()
  const decoder = new TextDecoder()
  let buffer = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    let splitAt = buffer.indexOf('\n\n')
    while (splitAt >= 0) {
      const frame = buffer.slice(0, splitAt)
      buffer = buffer.slice(splitAt + 2)
      const data = frame
        .split(/\r?\n/)
        .filter(line => line.startsWith('data:'))
        .map(line => line.slice(5).trimStart())
        .join('\n')
      if (data && data !== '[DONE]') {
        const parsed = JSON.parse(data) as unknown
        if (parsed && typeof parsed === 'object') {
          yield parsed as Record<string, unknown>
        }
      }
      splitAt = buffer.indexOf('\n\n')
    }
  }
}
Wrap JSON.parse in try/catch so a single malformed SSE frame doesn't abort the entire stream.
A truncated or non-JSON data: payload (transient network glitch, server-injected keepalive, schema drift) will throw out of the generator and propagate to queryModelOpenAI's catch, which yields an api_error and discards any in-flight content blocks. Other SSE adapters in the codebase typically skip and log unparseable frames to keep the stream resilient.
🛡️ Proposed fix
if (data && data !== '[DONE]') {
- const parsed = JSON.parse(data) as unknown
- if (parsed && typeof parsed === 'object') {
- yield parsed as Record<string, unknown>
+ try {
+ const parsed = JSON.parse(data) as unknown
+ if (parsed && typeof parsed === 'object') {
+ yield parsed as Record<string, unknown>
+ }
+ } catch {
+ // Skip malformed SSE frame; stream continues.
}
}🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/services/api/openai/responsesAdapter.ts` around lines 193 - 222, In
parseSSE wrap the JSON.parse(data) call in a try/catch inside the async
generator so a single malformed SSE frame doesn't break the stream; when parsing
fails, log or debug the parse error (include the raw data string for context),
skip yielding for that frame, and continue the generator loop in parseSSE (do
not rethrow). Refer to the parseSSE function and the block that builds data from
lines and currently calls JSON.parse(data) — change that section to catch JSON
errors, handle/log them, and proceed to the next frame.
if (type === 'response.completed' || type === 'response.incomplete') {
  if (textBlockOpen) {
    yield {
      type: 'content_block_stop',
      index: currentContentIndex,
    } as BetaRawMessageStreamEvent
    textBlockOpen = false
  }
  if (thinkingBlockOpen) {
    yield {
      type: 'content_block_stop',
      index: currentContentIndex,
    } as BetaRawMessageStreamEvent
    thinkingBlockOpen = false
  }
  const response = event.response as Record<string, unknown> | undefined
  yield {
    type: 'message_delta',
    delta: { stop_reason: mapStopReason(response), stop_sequence: null },
    usage: extractUsage(response),
  } as unknown as BetaRawMessageStreamEvent
  yield { type: 'message_stop' } as BetaRawMessageStreamEvent
}
Tool blocks aren't force-closed when response.completed/response.incomplete arrives early.
Text and thinking blocks are closed defensively here, but toolBlocks entries with open: true aren't iterated. The happy path emits response.output_item.done per tool before response.completed, but if the server ever completes a response with unfinished tool items (e.g., response.incomplete from token cap mid-tool-call), downstream consumers will see unbalanced content_block_start without a matching content_block_stop.
🛡️ Proposed fix
if (type === 'response.completed' || type === 'response.incomplete') {
if (textBlockOpen) {
yield { type: 'content_block_stop', index: currentContentIndex } as BetaRawMessageStreamEvent
textBlockOpen = false
}
if (thinkingBlockOpen) {
yield { type: 'content_block_stop', index: currentContentIndex } as BetaRawMessageStreamEvent
thinkingBlockOpen = false
}
+ for (const block of toolBlocks.values()) {
+ if (block.open) {
+ yield { type: 'content_block_stop', index: block.contentIndex } as BetaRawMessageStreamEvent
+ block.open = false
+ }
+ }
  const response = event.response as Record<string, unknown> | undefined
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
if (type === 'response.completed' || type === 'response.incomplete') {
  if (textBlockOpen) {
    yield {
      type: 'content_block_stop',
      index: currentContentIndex,
    } as BetaRawMessageStreamEvent
    textBlockOpen = false
  }
  if (thinkingBlockOpen) {
    yield {
      type: 'content_block_stop',
      index: currentContentIndex,
    } as BetaRawMessageStreamEvent
    thinkingBlockOpen = false
  }
  for (const block of toolBlocks.values()) {
    if (block.open) {
      yield { type: 'content_block_stop', index: block.contentIndex } as BetaRawMessageStreamEvent
      block.open = false
    }
  }
  const response = event.response as Record<string, unknown> | undefined
  yield {
    type: 'message_delta',
    delta: { stop_reason: mapStopReason(response), stop_sequence: null },
    usage: extractUsage(response),
  } as unknown as BetaRawMessageStreamEvent
  yield { type: 'message_stop' } as BetaRawMessageStreamEvent
}
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/services/api/openai/responsesAdapter.ts` around lines 419 - 441, When
handling 'response.completed' / 'response.incomplete' in the block that
currently closes textBlockOpen and thinkingBlockOpen, also iterate the
toolBlocks array (the toolBlocks entries that have open: true) and for each open
tool block yield a content_block_stop event with that block's index and set its
open flag to false so tool blocks are force-closed; perform this before emitting
the final message_delta/message_stop (i.e., in the same branch handling
response.completed/response.incomplete) to ensure no unmatched
content_block_start remains.
@q1352013520 please fix the CI.
/login adds ChatGPT subscription account login
A "ChatGPT account with subscription" option is added in src/components/ConsoleOAuthFlow.tsx:461, which uses the OpenAI/Codex device auth flow. On successful login it writes:
New ChatGPT/Codex auth management
The new file src/services/api/openai/chatgptAuth.ts:1 is responsible for:
OpenAI provider gains a ChatGPT Responses backend
src/services/api/openai/index.ts:340 now checks for OPENAI_AUTH_MODE=chatgpt:
The new file src/services/api/openai/responsesAdapter.ts:165 converts the Responses API SSE stream into internal Anthropic events, covering text, reasoning, tool calls, usage, errors, and more.
New ChatGPT Codex model list and default models
src/utils/model/chatgptModels.ts:10 adds the ChatGPT Codex models:
The default main model is gpt-5.5; fast/haiku-class models default to gpt-5.4-mini.
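Those defaults correspond to the constants referenced in the review comments above; a plausible declaration (the actual file contents are not shown here, so treat this as an assumption) would be:

// Assumed shape, inferred from the constant names used elsewhere in this review.
export const CHATGPT_CODEX_DEFAULT_MODEL = 'gpt-5.5'
export const CHATGPT_CODEX_FAST_MODEL = 'gpt-5.4-mini'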
Model picker and effort support for ChatGPT Codex
src/components/ModelPicker.tsx:147 and src/utils/effort.ts:96 add xhigh effort support.
For ChatGPT Codex models, max maps to xhigh, and the default effort is medium.
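A minimal sketch of that mapping, assuming a helper along these lines (the name is illustrative; the real logic lives in src/utils/effort.ts):

type EffortLevel = 'low' | 'medium' | 'high' | 'xhigh' | 'max' | 'auto'

// For ChatGPT Codex models: 'max' is served by the Responses API's 'xhigh'
// tier, and 'auto' falls back to the default of 'medium'.
function resolveCodexEffort(
  level: EffortLevel,
): 'low' | 'medium' | 'high' | 'xhigh' {
  if (level === 'max' || level === 'xhigh') return 'xhigh'
  if (level === 'auto') return 'medium'
  return level // remaining cases: 'low' | 'medium' | 'high'
}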
/logout now clears ChatGPT login state
src/commands/logout/logout.tsx:26 now clears the ChatGPT token and OPENAI_AUTH_MODE in addition to the existing Anthropic/API-key cleanup. The prompt copy also changes from "Anthropic account" to the more generic "configured account".
OpenAI Compatible and ChatGPT subscription modes are mutually exclusive
Saving an OpenAI Compatible configuration clears OPENAI_AUTH_MODE, so that a previously selected ChatGPT subscription does not keep mistakenly routing requests down the ChatGPT Responses path.
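A sketch of that clearing step, with a hypothetical settings helper (the real save path and helper names may differ):

// When persisting an OpenAI Compatible configuration, drop the ChatGPT
// marker so later requests don't route through the Responses backend.
function withoutChatGPTMode(
  env: Record<string, string | undefined>,
): Record<string, string | undefined> {
  const next = { ...env }
  delete next.OPENAI_AUTH_MODE
  return next
}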
Includes a buddy/companion compatibility fix
src/buddy/companion.ts:137 adds species/rarity inference for legacy seedless companions, preventing old-format companions from showing drifting identities after the userId roll changes.
Need help on this PR? Tag @codesmith with what you need.
Summary by CodeRabbit
New Features
Bug Fixes
Tests