Gemini authentication with browser-based OAuth login and model selection dropdowns #449
reikernodd wants to merge 5 commits into claude-code-best:main
Conversation
…ng and improve prompt language handling.
📝 Walkthrough
This PR adds local LLM provider support (Ollama) and Google OAuth authentication for Gemini. It extends the type system to include a …
Changes
Local LLM Provider and Google OAuth Support
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Possibly related PRs
Suggested reviewers
Poem
🚥 Pre-merge checks | ✅ 3 | ❌ 2
❌ Failed checks (2 warnings)
✅ Passed checks (3 passed)
reikernodd left a comment
Added Google auth login method.
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/commands/provider.ts (1)
179-188: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
`description` and `argumentHint` are missing the new `local` provider.
`/provider local` is a valid arg now (line 70) and the settings-backed branch handles it (line 148), but the help text and argument-hint completion still advertise only the old set. Users won't discover the new option from `/provider`'s own help or autocompletion.
📝 Proposed fix
```diff
 const provider = {
   type: 'local',
   name: 'provider',
   description:
-    'Switch API provider (anthropic/openai/gemini/grok/bedrock/vertex/foundry)',
+    'Switch API provider (anthropic/openai/gemini/grok/local/bedrock/vertex/foundry)',
   aliases: ['api'],
-  argumentHint: '[anthropic|openai|gemini|grok|bedrock|vertex|foundry|unset]',
+  argumentHint: '[anthropic|openai|gemini|grok|local|bedrock|vertex|foundry|unset]',
   supportsNonInteractive: true,
   load: () => Promise.resolve({ call }),
 } satisfies Command
```
While here, also update the stale comment at lines 141–142 which still lists only `anthropic`, `openai`, `gemini` even though `grok` and `local` are now in the same branch.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/commands/provider.ts` around lines 179 - 188, Update the provider metadata to include the new "local" option: modify the provider object’s description and argumentHint fields (the const provider = { ... } satisfying Command) to list "local" alongside anthropic/openai/gemini/grok/bedrock/vertex/foundry, and update the stale inline comment that enumerates providers (the comment near the settings-backed branch handling in the same file) so it also includes "grok" and "local" so help text and autocompletion reflect the actual supported providers.
🧹 Nitpick comments (2)
src/services/api/claude.ts (1)
1334-1334: 💤 Low value
Optimize repeated `getAPIProvider()` call.
The condition calls `getAPIProvider()` twice. Consider caching the result in a variable.
♻️ Proposed optimization
```diff
+  const apiProvider = getAPIProvider()
-  if (getAPIProvider() === 'openai' || getAPIProvider() === 'local') {
+  if (apiProvider === 'openai' || apiProvider === 'local') {
     const { queryModelOpenAI } = await import('./openai/index.js')
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/services/api/claude.ts` at line 1334, Replace the repeated calls to getAPIProvider() in the conditional with a cached local variable: call getAPIProvider() once (e.g., const provider = getAPIProvider()) and then use provider === 'openai' || provider === 'local' in the if statement; update any nearby logic that referenced getAPIProvider() in the same scope to use the new provider variable for consistency.
src/utils/doctorDiagnostic.ts (1)
646-652: ⚡ Quick win
Hardware info: redundant `cpus()` calls and lossy memory formatting.
Two small refinements:
- `cpus()` is called twice (line 647 for `.length`, line 648 for `[0]?.model`). On high-core-count machines this allocates a sizable array twice — cache it once.
- `Math.round(... / 1024^3) + ' GB'` rounds to whole GiB, so a system with 0.4 GiB free reports `"0 GB"` and 7.6 GiB reports `"8 GB"`. For local-LLM capacity checks (the whole point of this section), one decimal is much more useful.
♻️ Proposed refactor
```diff
+  const cpuList = cpus()
+  const toGiB = (bytes: number): string => `${(bytes / 1024 ** 3).toFixed(1)} GB`
   const diagnostic: DiagnosticInfo = {
     ...
     hardwareInfo: {
-      cpus: cpus().length,
-      cpuModel: cpus()[0]?.model || 'Unknown',
-      totalMem: Math.round(totalmem() / 1024 / 1024 / 1024) + ' GB',
-      freeMem: Math.round(freemem() / 1024 / 1024 / 1024) + ' GB',
+      cpus: cpuList.length,
+      cpuModel: cpuList[0]?.model || 'Unknown',
+      totalMem: toGiB(totalmem()),
+      freeMem: toGiB(freemem()),
       arch: arch(),
     },
   }
```
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/utils/doctorDiagnostic.ts` around lines 646 - 652, Cache the result of os.cpus() into a local variable before constructing the hardwareInfo object to avoid allocating the CPU array twice (used for cpus().length and cpus()[0]?.model) and use that cached array when setting cpus and cpuModel; also change totalMem and freeMem formatting to compute GiB with one decimal place (e.g., totalmem() / (1024**3) and freemem() / (1024**3)) and format using toFixed(1) + ' GB' so values like 0.4 GiB display as "0.4 GB" and 7.6 GiB as "7.6 GB".
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@README.md`:
- Around line 154-156: Update the README entries that incorrectly show the OAuth
credentials path with a leading slash; change any occurrences of
"/.files/OAuth.json" to ".files/OAuth.json" in README.md (around the current
lines referencing the OAuth.json path) and the corresponding entry in
README_EN.md so they match how the code in
src/services/api/gemini/google-oauth.ts (which uses path.join(process.cwd(),
'.files', 'OAuth.json')) expects the file to be located in the project root.
In `@src/components/ConsoleOAuthFlow.tsx`:
- Around line 89-94: The local_llm_pulling state currently lacks baseUrl and
code paths hardcode 'http://localhost:11434', so update the state shape for
'local_llm_pulling' to include baseUrl:string (alongside
modelName/status/percentage), replace hardcoded URLs in the pulling effect
(references: the pulling logic that builds the fetch/POST to Ollama) to use that
baseUrl, and modify all transitions into local_llm_pulling (including
handleLocalEnter and the Select transition that creates the state) to pass
oauthStatus.baseUrl into the new state so the post-pull state rebuild does not
overwrite the user-provided Base URL. Ensure every spot that previously wrote
'http://localhost:11434' now reads from the state's baseUrl.
- Around line 713-815: The new local_llm_setup branch uses multiple `as any`
casts; replace them by properly typing the local-state shape and narrowing
oauthStatus to that discriminated union (e.g., define LocalLlmSetupState =
Extract<OAuthStatus, { state: 'local_llm_setup' }>), then (1) remove the
`(oauthStatus as any).apiKey` usage and read oauthStatus.apiKey directly, (2)
make buildLocalState return LocalLlmSetupState and construct its next state as
that type instead of `as any`, (3) change doLocalSave signature to accept
LocalLlmSetupState rather than `any`, and (4) supply correctly typed payloads to
updateSettingsForSource so the `} as any` casts are unnecessary; this eliminates
the need to cast at call sites like buildLocalState(...) as any and preserves
type safety for LocalField, LOCAL_FIELDS, displayValues, buildLocalState,
doLocalSave, and setOAuthStatus.
In `@src/services/api/gemini/client.ts`:
- Around line 15-17: getGeminiBaseUrl currently always returns
DEFAULT_GEMINI_BASE_URL and no longer respects process.env.GEMINI_BASE_URL;
restore the env-override by reading process.env.GEMINI_BASE_URL (falling back to
DEFAULT_GEMINI_BASE_URL) and normalize it with the same replace(/\/+$/, '')
trimming; if the app truly requires the canonical endpoint in OAuth-only mode,
gate the env override behind the auth-mode check you use (e.g., an isOAuthOnly
or authMode flag) so getGeminiBaseUrl uses the env value except when OAuth-only
is enforced.
- Around line 52-58: Replace the unsafe map using (m: any) and the unguarded
m.name access with a type-narrowed iteration: remove the any assertion and
instead iterate over data.models (e.g., for...of or filter+map), guard each item
with checks like typeof m === 'object' && m !== null && typeof (m as any).name
=== 'string' (or use a custom isModel(obj) predicate), then push/return
m.name.replace(/^models\//, '') only for valid entries; ensure the function
returns an empty array for invalid items and does not assume m.name exists. Use
the existing data.models reference and replace the anonymous map callback with
the guarded logic.
In `@src/services/api/gemini/google-oauth.ts`:
- Around line 130-136: The code uses `as any` when calling
updateSettingsForSource for 'userSettings' with the googleOAuth payload; remove
the `as any` and pass a correctly typed object like the other
updateSettingsForSource calls: build a partial/typed settings object (matching
the expected UserSettings/Settings type) containing googleOAuth { access_token,
refresh_token: credentials.refresh_token || googleOAuth.refresh_token,
expiry_date } and pass that without assertion; apply the same change to the
similar call affecting lines around 141–143 so both calls use the proper typed
payload instead of `as any`.
- Around line 62-68: The object passed to updateSettingsForSource currently uses
a prohibited `as any` on the googleOAuth payload; replace that with a proper
typed value by either augmenting the settings type to include googleOAuth
(extend the userSettings/settings interface) or perform a safe double assertion:
cast the literal to unknown and then to the correct settings type before calling
updateSettingsForSource; reference the updateSettingsForSource call and the
googleOAuth object (access_token/refresh_token/expiry_date from tokens) and
ensure the resulting value matches the expected settings type rather than using
`as any`.
- Around line 108-114: The code uses a forbidden `as any` in
getGoogleAccessToken when reading googleOAuth from getSettings; replace it by
properly typing the settings result (extend the SettingsJson type to include a
googleOAuth property or create an interface GoogleOAuth { refresh_token?:
string; ... } and cast via `as unknown as GoogleOAuth`) and then use a
type-narrowing check (e.g., `const googleOAuth = settings.googleOAuth` with an
if guard checking `googleOAuth && googleOAuth.refresh_token`) so no `as any` is
used; update references in getGoogleAccessToken and any call sites that assume
googleOAuth to match the new typed shape.
In `@src/services/api/openai/index.ts`:
- Around line 221-225: The README is missing documentation for the LOCAL_MODEL
env var used by getAPIProvider logic (code uses process.env.LOCAL_MODEL when
provider === 'local' to select the model passed as options.model or fallback);
update the environment variables reference or the local LLM setup section to
document LOCAL_MODEL, explain its purpose (overrides options.model when using
the local provider), show expected values (model name) and any defaults, and
mention that the Zod settings schema requires configuring LOCAL_MODEL when using
the local provider so users know to set it.
In `@src/utils/localLlm.ts`:
- Around line 38-46: The POST to `${baseUrl}/api/pull` in localLlm.ts must
include the header "Content-Type: application/json" so the Ollama endpoint
correctly interprets the JSON body; update the fetch call that builds the pull
request (the one that sends JSON.stringify({ name: model })) to add headers: {
'Content-Type': 'application/json' }. Also add timeout handling to the fetch
calls used in checkOllamaStatus and listOllamaModels by creating or passing an
AbortSignal with AbortSignal.timeout(milliseconds) (or constructing an
AbortController and using setTimeout to abort) so those functions don’t hang
indefinitely when the local Ollama instance is unresponsive.
In `@src/utils/model/modelStrings.ts`:
- Around line 28-30: The expression (ALL_MODEL_CONFIGS[key] as any)[provider]
uses a forbidden any assertion; replace it with a proper typed narrowing or a
safe double-cast to a specific interface for that entry. Define or use a
ModelConfigEntry type (e.g., an object with provider keys and a firstParty
field) and cast ALL_MODEL_CONFIGS[key] via 'as unknown as ModelConfigEntry' or
narrow it with a type guard before indexing, then use (ALL_MODEL_CONFIGS[key] as
unknown as ModelConfigEntry)[provider] || ALL_MODEL_CONFIGS[key].firstParty so
you avoid any and keep type-safety for ALL_MODEL_CONFIGS, key, provider, out,
and firstParty.
In `@src/utils/swarm/teammateModel.ts`:
- Around line 9-11: The current hardcoded local fallback 'claude-opus-4-6' in
teammateModel.ts is invalid for non-Anthropic local providers; replace that
hardcoded string so local uses the same configurable fallback as elsewhere:
return process.env.LOCAL_MODEL (or process.env.LOCAL_MODEL || another documented
default) instead of 'claude-opus-4-6', and update the comment to note that local
model comes from process.env.LOCAL_MODEL; keep the surrounding logic using
getAPIProvider() and CLAUDE_OPUS_4_6_CONFIG unchanged.
---
Outside diff comments:
In `@src/commands/provider.ts`:
- Around line 179-188: Update the provider metadata to include the new "local"
option: modify the provider object’s description and argumentHint fields (the
const provider = { ... } satisfying Command) to list "local" alongside
anthropic/openai/gemini/grok/bedrock/vertex/foundry, and update the stale inline
comment that enumerates providers (the comment near the settings-backed branch
handling in the same file) so it also includes "grok" and "local" so help text
and autocompletion reflect the actual supported providers.
---
Nitpick comments:
In `@src/services/api/claude.ts`:
- Line 1334: Replace the repeated calls to getAPIProvider() in the conditional
with a cached local variable: call getAPIProvider() once (e.g., const provider =
getAPIProvider()) and then use provider === 'openai' || provider === 'local' in
the if statement; update any nearby logic that referenced getAPIProvider() in
the same scope to use the new provider variable for consistency.
In `@src/utils/doctorDiagnostic.ts`:
- Around line 646-652: Cache the result of os.cpus() into a local variable
before constructing the hardwareInfo object to avoid allocating the CPU array
twice (used for cpus().length and cpus()[0]?.model) and use that cached array
when setting cpus and cpuModel; also change totalMem and freeMem formatting to
compute GiB with one decimal place (e.g., totalmem() / (1024**3) and freemem() /
(1024**3)) and format using toFixed(1) + ' GB' so values like 0.4 GiB display as
"0.4 GB" and 7.6 GiB as "7.6 GB".
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: b193b877-0b9b-418b-a20b-cd67603548ab
📒 Files selected for processing (24)
.gitignore, README.md, README_EN.md, src/bootstrap/state.ts, src/commands/provider.ts, src/components/ConsoleOAuthFlow.tsx, src/components/__tests__/ConsoleOAuthFlow.test.tsx, src/components/messages/AttachmentMessage.tsx, src/constants/prompts.ts, src/screens/Doctor.tsx, src/services/api/claude.ts, src/services/api/gemini/client.ts, src/services/api/gemini/google-oauth.ts, src/services/api/openai/client.ts, src/services/api/openai/index.ts, src/utils/doctorDiagnostic.ts, src/utils/localLlm.ts, src/utils/model/configs.ts, src/utils/model/modelStrings.ts, src/utils/model/providers.ts, src/utils/settings/types.ts, src/utils/status.tsx, src/utils/swarm/teammateModel.ts, tests/integration/autonomy-lifecycle-user-flow.test.ts
1. In the Google Cloud Console, configure the OAuth client under **APIs & Services > OAuth consent screen** (set User Type to External).
2. Download the credentials JSON file and save it to `/.files/OAuth.json` in the project root.
3. In the `/login` configuration screen, leave the API Key blank and press Enter; the program will automatically open a browser to complete authorization and fetch the model list.
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Find the Gemini google-oauth implementation and inspect what file path it expects for the credentials JSON.
fd -t f 'google-oauth' src/services/api/gemini 2>/dev/null
fd -t f 'gemini' src/services/api 2>/dev/null | head -50
# Search for credential path references inside the gemini auth code
rg -n -C3 'OAuth\.json|\.files|credentials' --type=ts --type=tsx -g '*gemini*'
# Also search for any mention of the documented path verbatim
rg -nF '.files/OAuth.json'
```
Repository: claude-code-best/claude-code
Length of output: 354
🏁 Script executed:
```bash
cat -n src/services/api/gemini/google-oauth.ts
```
Repository: claude-code-best/claude-code
Length of output: 5809
Remove the leading slash from the OAuth.json path in the README.
The code loads credentials from .files/OAuth.json (line 15 of src/services/api/gemini/google-oauth.ts uses path.join(process.cwd(), '.files', 'OAuth.json')), but the README documents it as /.files/OAuth.json. The leading slash is incorrect and may confuse users into placing the file at the filesystem root instead of the project root. Update lines 155 (and the corresponding line in README_EN.md:152) to remove the leading slash:
2. Download the credentials JSON file and save it to `.files/OAuth.json` in the project root.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@README.md` around lines 154 - 156, Update the README entries that incorrectly
show the OAuth credentials path with a leading slash; change any occurrences of
"/.files/OAuth.json" to ".files/OAuth.json" in README.md (around the current
lines referencing the OAuth.json path) and the corresponding entry in
README_EN.md so they match how the code in
src/services/api/gemini/google-oauth.ts (which uses path.join(process.cwd(),
'.files', 'OAuth.json')) expects the file to be located in the project root.
```tsx
| {
    state: 'local_llm_pulling';
    modelName: string;
    status: string;
    percentage?: number;
  }
```
Hardcoded `http://localhost:11434` in the pulling flow ignores the user-configured `baseUrl`.
Three problems with the `local_llm_pulling` flow:
- The `local_llm_pulling` state shape (Lines 89-94) does not carry `baseUrl`, so the pulling effect has nothing to fall back on.
- Lines 213, 230, 244 hardcode `'http://localhost:11434'`. A user who configured Ollama on a remote host or a non-default port can complete the setup form but the subsequent pull will silently target localhost — failing or worse, hitting a different Ollama instance.
- After a successful pull at Line 230, the state is rebuilt with the same hardcoded URL, overwriting whatever the user typed in the `Base URL` field.
The same applies to `handleLocalEnter` (Lines 779-784) and the inline transition inside the model Select (Lines 985-989) — both transition to `local_llm_pulling` without forwarding `baseUrl`.
🐛 Proposed fix
```diff
   | {
       state: 'local_llm_pulling';
       modelName: string;
+      baseUrl: string;
       status: string;
       percentage?: number;
     }
```
```diff
-  if (oauthStatus.state === 'local_llm_pulling') {
-    const abortController = new AbortController();
-    (async () => {
-      try {
-        for await (const progress of pullOllamaModel(
-          oauthStatus.modelName,
-          'http://localhost:11434',
-          abortController.signal,
-        )) {
+  if (oauthStatus.state === 'local_llm_pulling') {
+    const abortController = new AbortController();
+    const { modelName, baseUrl } = oauthStatus;
+    (async () => {
+      try {
+        for await (const progress of pullOllamaModel(
+          modelName,
+          baseUrl,
+          abortController.signal,
+        )) {
   ...
   }
-    setOAuthStatus({
-      state: 'local_llm_setup',
-      runnerType: 'ollama',
-      baseUrl: 'http://localhost:11434',
-      modelName: oauthStatus.modelName,
+    setOAuthStatus({
+      state: 'local_llm_setup',
+      runnerType: 'ollama',
+      baseUrl,
+      modelName,
       activeField: 'model_name',
       availableModels: [],
       isLoadingModels: false,
     });
```
Then update both call sites that transition into `local_llm_pulling` (Lines 779-784 and 985-989) to pass `baseUrl: oauthStatus.baseUrl`.
Also applies to: 206-255, 776-790
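For illustration, a minimal sketch of one forwarded call site (the `handleLocalEnter` transition quoted above), assuming the `baseUrl` field has been added to the state shape as proposed:
```tsx
// Sketch: forward the user-configured base URL instead of the hardcoded default
setOAuthStatus({
  state: 'local_llm_pulling',
  modelName: localInputValue,
  baseUrl: oauthStatus.baseUrl, // carried over from the local_llm_setup state
  status: 'Starting download...',
});
```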
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/components/ConsoleOAuthFlow.tsx` around lines 89 - 94, The
local_llm_pulling state currently lacks baseUrl and code paths hardcode
'http://localhost:11434', so update the state shape for 'local_llm_pulling' to
include baseUrl:string (alongside modelName/status/percentage), replace
hardcoded URLs in the pulling effect (references: the pulling logic that builds
the fetch/POST to Ollama) to use that baseUrl, and modify all transitions into
local_llm_pulling (including handleLocalEnter and the Select transition that
creates the state) to pass oauthStatus.baseUrl into the new state so the
post-pull state rebuild does not overwrite the user-provided Base URL. Ensure
every spot that previously wrote 'http://localhost:11434' now reads from the
state's baseUrl.
```tsx
case 'local_llm_setup': {
  type LocalField = 'runner_type' | 'base_url' | 'api_key' | 'model_name' | 'custom_model_name';
  const LOCAL_FIELDS: LocalField[] = ['runner_type', 'base_url', 'api_key', 'model_name', 'custom_model_name'];
  const activeField = oauthStatus.activeField;

  const displayValues: Record<LocalField, string> = {
    runner_type: oauthStatus.runnerType,
    base_url: oauthStatus.baseUrl,
    api_key: (oauthStatus as any).apiKey ?? '',
    model_name: oauthStatus.modelName,
    custom_model_name: oauthStatus.modelName,
  };

  const [localInputValue, setLocalInputValue] = useState(displayValues[activeField] ?? '');
  const [localInputCursorOffset, setLocalInputCursorOffset] = useState((displayValues[activeField] ?? '').length);

  const buildLocalState = useCallback(
    (field: LocalField, val: string, nextField?: LocalField): OAuthStatus => {
      const newState = { ...oauthStatus } as any;
      if (field === 'runner_type') {
        newState.runnerType = val;
        if (val === 'ollama') newState.baseUrl = 'http://localhost:11434';
        else if (val === 'lmstudio') newState.baseUrl = 'http://localhost:1234/v1';
        else if (val === 'jan') newState.baseUrl = 'http://localhost:1337/v1';
        else if (val === 'localai') newState.baseUrl = 'http://localhost:8080/v1';
      } else if (field === 'base_url') {
        newState.baseUrl = val;
      } else if (field === 'api_key') {
        newState.apiKey = val;
      } else if (field === 'model_name' || field === 'custom_model_name') {
        newState.modelName = val;
      }
      if (nextField) newState.activeField = nextField;
      return newState;
    },
    [oauthStatus],
  );

  const doLocalSave = useCallback(
    async (stateToSave: any) => {
      const { runnerType, baseUrl, modelName, apiKey } = stateToSave;
      const env: Record<string, string> = {
        LOCAL_BASE_URL: baseUrl,
        LOCAL_MODEL: modelName || 'llama3.1',
        LOCAL_RUNNER_TYPE: runnerType,
      };
      if (apiKey) env.LOCAL_API_KEY = apiKey;

      updateSettingsForSource('userSettings', {
        modelType: 'local',
        env,
      } as any);

      updateSettingsForSource('userSettings', {
        model: modelName || 'llama3.1',
      } as any);

      setOAuthStatus({ state: 'success' });
      void onDone();
    },
    [onDone, setOAuthStatus],
  );

  const handleLocalEnter = useCallback(() => {
    if (activeField === 'custom_model_name' && localInputValue) {
      if (oauthStatus.runnerType === 'ollama' && !oauthStatus.availableModels.includes(localInputValue)) {
        setOAuthStatus({
          state: 'local_llm_pulling',
          modelName: localInputValue,
          status: 'Starting download...',
        });
        return;
      }
      const nextState = buildLocalState(activeField, localInputValue);
      setOAuthStatus(nextState);
      doLocalSave(nextState);
      return;
    }

    const idx = LOCAL_FIELDS.indexOf(activeField);
    if (idx === LOCAL_FIELDS.length - 1 || activeField === 'model_name') {
      const nextState = buildLocalState(activeField, localInputValue);
      setOAuthStatus(nextState);
      doLocalSave(nextState);
    } else {
      // find next interactive field (skip model_name if custom_model_name is next, but that's handled by onChange)
      const next = LOCAL_FIELDS[idx + 1]!;
      const nextState = buildLocalState(activeField, localInputValue, next) as any;
      setOAuthStatus(nextState);
      const nextVal =
        nextState[
          next === 'runner_type'
            ? 'runnerType'
            : next === 'base_url'
              ? 'baseUrl'
              : next === 'api_key'
                ? 'apiKey'
                : 'modelName'
        ];
      setLocalInputValue(nextVal ?? '');
      setLocalInputCursorOffset((nextVal ?? '').length);
    }
  }, [activeField, localInputValue, oauthStatus, buildLocalState, doLocalSave, setOAuthStatus]);
```
🛠️ Refactor suggestion | 🟠 Major | ⚡ Quick win
Drop `as any` casts in the new local_llm_setup case to comply with the TypeScript guidelines.
The neighboring custom_platform (Line 1036+) and openai_chat_api (Line 1243+) cases build their next-state objects without any `as any`. The new code regresses on this:
- Line 721: `(oauthStatus as any).apiKey ?? ''` — `oauthStatus` is already narrowed to the `local_llm_setup` variant, which declares `apiKey?: string`. The cast is unnecessary.
- Line 731: `const newState = { ...oauthStatus } as any;` — defeats type checking on every property assigned below.
- Line 752: `async (stateToSave: any)` — should accept the discriminated `local_llm_setup` shape (or `OAuthStatus` narrowed via guard).
- Lines 764, 768: `} as any` on the `updateSettingsForSource` payload.
- Lines 800, 824, 850: `buildLocalState(...) as any` — only needed because `buildLocalState` itself returns `any`; once it is properly typed, these go away.
As per coding guidelines: "Prohibit as any type assertions in production code; ... Use as unknown as SpecificType double assertion or interface supplementation when type mismatches occur" and "Use type guards (type narrowing) with union types instead of forced type casting".
♻️ Suggested shape (sketch)
```ts
type LocalLlmSetupState = Extract<OAuthStatus, { state: 'local_llm_setup' }>;

const buildLocalState = useCallback(
  (field: LocalField, val: string, nextField?: LocalField): LocalLlmSetupState => {
    const next: LocalLlmSetupState = { ...oauthStatus };
    switch (field) {
      case 'runner_type': {
        next.runnerType = val as LocalLlmSetupState['runnerType'];
        if (val === 'ollama') next.baseUrl = 'http://localhost:11434';
        else if (val === 'lmstudio') next.baseUrl = 'http://localhost:1234/v1';
        else if (val === 'jan') next.baseUrl = 'http://localhost:1337/v1';
        else if (val === 'localai') next.baseUrl = 'http://localhost:8080/v1';
        break;
      }
      case 'base_url':
        next.baseUrl = val; break;
      case 'api_key':
        next.apiKey = val; break;
      case 'model_name':
      case 'custom_model_name':
        next.modelName = val; break;
    }
    if (nextField) next.activeField = nextField;
    return next;
  },
  [oauthStatus],
);

const doLocalSave = useCallback(
  async (stateToSave: LocalLlmSetupState) => { … },
  [onDone, setOAuthStatus],
);
```
The same change removes the need for the `as any` at the call sites, and `(oauthStatus as any).apiKey` becomes plain `oauthStatus.apiKey`.
The same comment also applies to the small set of `as any` casts added in the Gemini case (Lines 1611-1612: `default: return s;` is unreachable for typed inputs; Lines 1740/1767/1781/1902/1915/1921/1927 use `indexOf(activeField as any)` — `as GeminiField` / proper guards would suffice; Lines 1704/1706 add another `} as any` payload).
As per coding guidelines: "Prohibit as any type assertions in production code" and "Use type guards (type narrowing) with union types instead of forced type casting".
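For the `indexOf(activeField as any)` occurrences, a narrower cast is usually enough; a minimal sketch, assuming a `GeminiField` union analogous to `LocalField` (the names and members here are illustrative, not the actual ones in the file):
```ts
// Hypothetical union mirroring the Gemini setup fields
type GeminiField = 'api_key' | 'model_name';
const GEMINI_FIELDS: GeminiField[] = ['api_key', 'model_name'];

// Narrower than `as any`: the cast stays inside the known union
const idx = GEMINI_FIELDS.indexOf(activeField as GeminiField);
```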
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/components/ConsoleOAuthFlow.tsx` around lines 713 - 815, The new
local_llm_setup branch uses multiple `as any` casts; replace them by properly
typing the local-state shape and narrowing oauthStatus to that discriminated
union (e.g., define LocalLlmSetupState = Extract<OAuthStatus, { state:
'local_llm_setup' }>), then (1) remove the `(oauthStatus as any).apiKey` usage
and read oauthStatus.apiKey directly, (2) make buildLocalState return
LocalLlmSetupState and construct its next state as that type instead of `as
any`, (3) change doLocalSave signature to accept LocalLlmSetupState rather than
`any`, and (4) supply correctly typed payloads to updateSettingsForSource so the
`} as any` casts are unnecessary; this eliminates the need to cast at call sites
like buildLocalState(...) as any and preserves type safety for LocalField,
LOCAL_FIELDS, displayValues, buildLocalState, doLocalSave, and setOAuthStatus.
```diff
 function getGeminiBaseUrl(): string {
-  return (process.env.GEMINI_BASE_URL || DEFAULT_GEMINI_BASE_URL).replace(
-    /\/+$/,
-    '',
-  )
+  return DEFAULT_GEMINI_BASE_URL.replace(/\/+$/, '')
 }
```
Regression: GEMINI_BASE_URL env override removed without deprecation.
getGeminiBaseUrl() now always returns the hardcoded default and no longer honors process.env.GEMINI_BASE_URL. This is a breaking change for users who previously pointed Gemini at a proxy, internal mirror, or Vertex-compatible relay. It also contradicts the description in src/utils/settings/types.ts line 373 which still documents GEMINI_BASE_URL as an optional configuration variable for modelType: 'gemini'.
If the intent is that OAuth-only mode requires the canonical endpoint, gate the override on auth mode rather than removing it outright; otherwise restore the env read.
🛡️ Proposed fix to restore env override
```diff
 function getGeminiBaseUrl(): string {
-  return DEFAULT_GEMINI_BASE_URL.replace(/\/+$/, '')
+  const url = process.env.GEMINI_BASE_URL || DEFAULT_GEMINI_BASE_URL
+  return url.replace(/\/+$/, '')
 }
```
🤖 Prompt for AI Agents
}🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/services/api/gemini/client.ts` around lines 15 - 17, getGeminiBaseUrl
currently always returns DEFAULT_GEMINI_BASE_URL and no longer respects
process.env.GEMINI_BASE_URL; restore the env-override by reading
process.env.GEMINI_BASE_URL (falling back to DEFAULT_GEMINI_BASE_URL) and
normalize it with the same replace(/\/+$/, '') trimming; if the app truly
requires the canonical endpoint in OAuth-only mode, gate the env override behind
the auth-mode check you use (e.g., an isOAuthOnly or authMode flag) so
getGeminiBaseUrl uses the env value except when OAuth-only is enforced.
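If the OAuth-only constraint is intentional, the gated variant suggested above could look like this sketch (the `isOAuthOnlyMode()` helper is hypothetical and would need to match the app's real auth-mode check):
```ts
function getGeminiBaseUrl(): string {
  // Hypothetical guard: force the canonical endpoint only when OAuth-only is enforced
  const url = isOAuthOnlyMode()
    ? DEFAULT_GEMINI_BASE_URL
    : process.env.GEMINI_BASE_URL || DEFAULT_GEMINI_BASE_URL
  return url.replace(/\/+$/, '')
}
```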
```ts
  const data = await response.json()
  if (!data || !Array.isArray(data.models)) {
    return []
  }

  return data.models.map((m: any) => m.name.replace(/^models\//, ''))
}
```
🧩 Analysis chain
🌐 Web query:
Does the Gemini ListModels REST API guarantee name is always present on each model entry, and what's the response schema for /v1beta/models?
💡 Result:
Yes—per the official Gemini API model schema, the Model.name field is required (so clients should expect it to be present on each model entry). However, there are reports of Gemini-compatible /v1beta/models implementations returning incomplete metadata in practice, so you should not assume 100% presence without defensive handling. The response schema for GET /v1beta/models (models.list) is a paginated list: { "models": [ { /* Model object */ } ], "nextPageToken": "string" // omitted if no more pages } For each entry’s Model object, the schema specifies a required name field (resource name of the model), plus other fields (e.g., version, supportedGenerationMethods, displayName, etc.). Evidence: - The Model schema marks name as “Required.” [1] - The /v1beta/models list response uses { "models": [Model], "nextPageToken": string } and indicates nextPageToken is omitted when there are no more pages. [1] - A public issue reports that Gemini-compatible /v1beta/models can return incomplete Model metadata (including supportedGenerationMethods null/omitted), illustrating that real-world responses may deviate from the schema’s required fields. [2]
Citations:
- 1: https://ai.google.dev/api/models?hl=de
- 2: Gemini-compatible /v1beta/models returns incomplete Model metadata (supportedGenerationMethods=null) QuantumNous/new-api#3364
🏁 Script executed:
```bash
# Check if file exists and read lines 52-58
if [ -f "src/services/api/gemini/client.ts" ]; then
  echo "=== File exists. Reading lines 45-65 for context ==="
  sed -n '45,65p' src/services/api/gemini/client.ts
else
  echo "File not found at src/services/api/gemini/client.ts"
  # Search for similar files
  find . -name "*gemini*" -type f 2>/dev/null | head -20
fi
```
Repository: claude-code-best/claude-code
Length of output: 723
🏁 Script executed:
```bash
# Also check the entire function to understand context
if [ -f "src/services/api/gemini/client.ts" ]; then
  echo "=== Full file analysis ==="
  wc -l src/services/api/gemini/client.ts
fi
```
Repository: claude-code-best/claude-code
Length of output: 137
🏁 Script executed:
```bash
# Read the entire file to understand imports and type context
cat -n src/services/api/gemini/client.ts
```
Repository: claude-code-best/claude-code
Length of output: 5078
🏁 Script executed:
```bash
# Check if `@ant/model-provider` types include Gemini API response definitions
fd -type f "model-provider" --maxdepth 5 2>/dev/null | head -20
find . -path "*/model-provider*" -type f 2>/dev/null | head -20
```
Repository: claude-code-best/claude-code
Length of output: 1382
🏁 Script executed:
```bash
# Search for any type definitions or interfaces related to Gemini models in the codebase
rg "GeminiModel|gemini.*model" -t ts -t tsx --heading -A 3 -B 1 2>/dev/null | head -40
```
Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
```bash
# Check the Gemini types file
cat packages/@ant/model-provider/src/providers/gemini/types.ts
```
Repository: claude-code-best/claude-code
Length of output: 1854
🏁 Script executed:
```bash
# Search for ListModels response types or similar
rg "ListModels|models.*response|models.*list" -t ts -A 2 -B 1 | head -50
```
Repository: claude-code-best/claude-code
Length of output: 1776
🏁 Script executed:
```bash
# Check how other API clients handle model listing (OpenAI, Grok)
rg "map.*\(.*any\)" -t ts --heading | head -30
```
Repository: claude-code-best/claude-code
Length of output: 1284
Remove the `any` type assertion and add type narrowing with a guard for `m.name`.
The parameter `(m: any)` violates the production code guideline: "Prohibit as any type assertions in production code." Additionally, the code assumes `m.name` exists and is a string without validation; it will throw if the API response deviates from the spec (which real-world Gemini-compatible implementations do).
Replace the unsafe map with type-narrowed iteration:
Suggested fix
```diff
-  return data.models.map((m: any) => m.name.replace(/^models\//, ''))
+  return data.models
+    .map(m => (m && typeof m === 'object' ? (m as Record<string, unknown>).name : undefined))
+    .filter((name): name is string => typeof name === 'string')
+    .map(name => name.replace(/^models\//, ''))
```
🤖 Prompt for AI Agents
+ .map(name => name.replace(/^models\//, ''))🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/services/api/gemini/client.ts` around lines 52 - 58, Replace the unsafe
map using (m: any) and the unguarded m.name access with a type-narrowed
iteration: remove the any assertion and instead iterate over data.models (e.g.,
for...of or filter+map), guard each item with checks like typeof m === 'object'
&& m !== null && typeof (m as any).name === 'string' (or use a custom
isModel(obj) predicate), then push/return m.name.replace(/^models\//, '') only
for valid entries; ensure the function returns an empty array for invalid items
and does not assume m.name exists. Use the existing data.models reference and
replace the anonymous map callback with the guarded logic.
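If the team prefers the custom predicate mentioned in the prompt above, a minimal sketch:
```ts
// Type guard: accept only entries that carry a string `name`
function isNamedModel(m: unknown): m is { name: string } {
  return (
    typeof m === 'object' &&
    m !== null &&
    typeof (m as { name?: unknown }).name === 'string'
  )
}

return data.models.filter(isNamedModel).map(m => m.name.replace(/^models\//, ''))
```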
```ts
updateSettingsForSource('userSettings', {
  googleOAuth: {
    access_token: credentials.access_token,
    refresh_token: credentials.refresh_token || googleOAuth.refresh_token,
    expiry_date: credentials.expiry_date,
  },
} as any)
```
Remove as any type assertions.
Lines 136 and 143 violate the coding guidelines by using as any. Use the same fix pattern as the earlier updateSettingsForSource calls.
🔧 Proposed fix
```diff
 updateSettingsForSource('userSettings', {
   googleOAuth: {
     access_token: credentials.access_token,
     refresh_token: credentials.refresh_token || googleOAuth.refresh_token,
     expiry_date: credentials.expiry_date,
   },
-} as any)
+} as unknown as Parameters<typeof updateSettingsForSource>[1])
 // ...
 updateSettingsForSource('userSettings', {
   googleOAuth: undefined,
-} as any)
+} as unknown as Parameters<typeof updateSettingsForSource>[1])
```
As per coding guidelines: "Prohibit as any type assertions in production code."
Also applies to: 141-143
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/services/api/gemini/google-oauth.ts` around lines 130 - 136, The code
uses `as any` when calling updateSettingsForSource for 'userSettings' with the
googleOAuth payload; remove the `as any` and pass a correctly typed object like
the other updateSettingsForSource calls: build a partial/typed settings object
(matching the expected UserSettings/Settings type) containing googleOAuth {
access_token, refresh_token: credentials.refresh_token ||
googleOAuth.refresh_token, expiry_date } and pass that without assertion; apply
the same change to the similar call affecting lines around 141–143 so both calls
use the proper typed payload instead of `as any`.
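If interface supplementation is preferred over the double assertion, a sketch of the assumed shape (field names mirror the payload above; the actual settings type lives in src/utils/settings/types.ts and may differ):
```ts
// Assumed credential shape, matching the fields written by the OAuth flow
interface GoogleOAuthCredentials {
  access_token?: string
  refresh_token?: string
  expiry_date?: number
}

// Hypothetical augmentation so `googleOAuth` becomes a first-class settings key
interface UserSettingsPatch {
  googleOAuth?: GoogleOAuthCredentials
}
```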
```ts
const provider = getAPIProvider()
const isLocal = provider === 'local'
const openaiModel = isLocal
  ? process.env.LOCAL_MODEL || options.model
  : resolveOpenAIModel(options.model)
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Description: Search for LOCAL_MODEL documentation
# Check for LOCAL_MODEL in documentation files
rg -n "LOCAL_MODEL" --type=md
# Check for LOCAL_MODEL in env var docs
rg -n "LOCAL_MODEL" --glob "**/env*" --glob "**/README*"
```
Repository: claude-code-best/claude-code
Length of output: 54
🏁 Script executed:
```bash
#!/bin/bash
# Read the specific file and lines to verify the code snippet
echo "=== Checking lines 221-225 of src/services/api/openai/index.ts ==="
sed -n '221,225p' src/services/api/openai/index.ts
echo -e "\n=== Searching for all LOCAL_MODEL references ==="
rg -i "local_model" -n
echo -e "\n=== Checking for .env files and environment documentation ==="
fd -e env -e example -e template | head -20
rg -n "LOCAL_MODEL\|environment variable" --type=md --type=txt
```
Repository: claude-code-best/claude-code
Length of output: 1017
🏁 Script executed:
```bash
#!/bin/bash
# Check the context around line 373 in types.ts where LOCAL_MODEL is documented
echo "=== Documentation in src/utils/settings/types.ts (lines 365-385) ==="
sed -n '365,385p' src/utils/settings/types.ts
echo -e "\n=== Check file structure to understand what types.ts is ==="
head -30 src/utils/settings/types.ts
echo -e "\n=== Check if there's a .env.example or .env.sample file ==="
find . -name ".env*" -o -name "*env*.example" -o -name "*env*.template" 2>/dev/null | head -10
```
Repository: claude-code-best/claude-code
Length of output: 2534
🏁 Script executed:
```bash
#!/bin/bash
# Check for README files and other user-facing documentation
echo "=== Looking for README files ==="
find . -maxdepth 2 -name "README*" -o -name "readme*" | head -10
echo -e "\n=== Check if ConsoleOAuthFlow.tsx provides context ==="
sed -n '750,760p' src/components/ConsoleOAuthFlow.tsx
echo -e "\n=== Search for environment variable documentation in README ==="
fd -e md | xargs rg -l "OPENAI_API_KEY\|GEMINI_API_KEY\|environment" 2>/dev/null | head -5
```
Repository: claude-code-best/claude-code
Length of output: 647
🏁 Script executed:
```bash
#!/bin/bash
# Check the README files for environment variable documentation
echo "=== Checking README.md for environment variables section ==="
rg -n "environment\|LOCAL_MODEL\|OPENAI_API_KEY\|LOCAL_BASE_URL" README.md | head -20
echo -e "\n=== Checking README_EN.md for environment variables section ==="
rg -n "environment\|LOCAL_MODEL\|OPENAI_API_KEY\|LOCAL_BASE_URL" README_EN.md | head -20
echo -e "\n=== Check if there's any documentation about configuration ==="
rg -n "configure\|setup\|configuration" README.md | head -10
```
Repository: claude-code-best/claude-code
Length of output: 262
🏁 Script executed:
```bash
#!/bin/bash
# Let me check the content of README.md to understand what documentation exists
echo "=== First 100 lines of README.md ==="
head -100 README.md
echo -e "\n=== Search for any configuration or setup section ==="
grep -n -i "setup\|install\|environment\|configuration\|local" README.md | head -20
```
Repository: claude-code-best/claude-code
Length of output: 8739
Add LOCAL_MODEL environment variable to documentation.
The code uses process.env.LOCAL_MODEL for local provider model selection, but this environment variable is not documented in the README. While the Zod schema in src/utils/settings/types.ts mentions it should be configured when using the local provider, users reading the README won't discover this requirement. Add LOCAL_MODEL to an environment variables reference section or expand the local LLM setup documentation.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/services/api/openai/index.ts` around lines 221 - 225, The README is
missing documentation for the LOCAL_MODEL env var used by getAPIProvider logic
(code uses process.env.LOCAL_MODEL when provider === 'local' to select the model
passed as options.model or fallback); update the environment variables reference
or the local LLM setup section to document LOCAL_MODEL, explain its purpose
(overrides options.model when using the local provider), show expected values
(model name) and any defaults, and mention that the Zod settings schema requires
configuring LOCAL_MODEL when using the local provider so users know to set it.
```ts
const response = await fetch(`${baseUrl}/api/pull`, {
  method: 'POST',
  body: JSON.stringify({ name: model }),
  signal,
})

if (!response.ok) {
  throw new Error(`Failed to pull model: ${response.statusText}`)
}
```
🧩 Analysis chain
🌐 Web query:
What headers does the Ollama /api/pull endpoint require for POST requests?
💡 Result:
The Ollama /api/pull endpoint for POST requests requires the Content-Type: application/json header, as the request body is a JSON object specifying the model to pull (e.g., {"model": "llama3.2"}) [1][2]. Official documentation shows curl examples using -d with JSON data, which implies this header, and OpenAPI schema specifies application/json content [1]. JavaScript fetch examples explicitly set {'Content-Type': 'application/json'} [2]. No authentication headers are required for local Ollama instances (http://localhost:11434); authentication (e.g., Authorization: Bearer) is only needed for ollama.com API or private models [3]. Custom headers may be used in proxied setups but are not required by the endpoint itself [4][5].
Citations:
- 1: https://docs.ollama.com/api/pull
- 2: https://mintlify.com/ollama/ollama/api/endpoints/pull
- 3: https://docs.ollama.com/api/authentication
- 4: Support for Custom Headers in Ollama Server Configuration (Zero Trust Configuration) open-webui/open-webui#16686
- 5: Added optional headers to Ollama initialisation ollama/ollama-js#138
🏁 Script executed:
```bash
cat -n src/utils/localLlm.ts | head -100
```
Repository: claude-code-best/claude-code
Length of output: 3112
Add Content-Type: application/json header to the /api/pull POST request.
The Ollama /api/pull endpoint requires the Content-Type: application/json header when sending a JSON body. Without it, the request may be rejected or misinterpreted by Ollama, reverse proxies, or compatible servers. Additionally, consider adding timeout handling via AbortSignal.timeout() to the fetch calls in checkOllamaStatus and listOllamaModels to prevent indefinite UI hangs if the local Ollama instance becomes unresponsive.
Suggested fix
```diff
 const response = await fetch(`${baseUrl}/api/pull`, {
   method: 'POST',
+  headers: { 'Content-Type': 'application/json' },
   body: JSON.stringify({ name: model }),
   signal,
 })
```
🤖 Prompt for AI Agents
})🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/utils/localLlm.ts` around lines 38 - 46, The POST to
`${baseUrl}/api/pull` in localLlm.ts must include the header "Content-Type:
application/json" so the Ollama endpoint correctly interprets the JSON body;
update the fetch call that builds the pull request (the one that sends
JSON.stringify({ name: model })) to add headers: { 'Content-Type':
'application/json' }. Also add timeout handling to the fetch calls used in
checkOllamaStatus and listOllamaModels by creating or passing an AbortSignal
with AbortSignal.timeout(milliseconds) (or constructing an AbortController and
using setTimeout to abort) so those functions don’t hang indefinitely when the
local Ollama instance is unresponsive.
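A minimal sketch of the timeout handling described above, assuming Ollama's `GET /api/tags` endpoint for the status probe and an illustrative 5-second budget:
```ts
async function checkOllamaStatus(baseUrl: string): Promise<boolean> {
  try {
    // AbortSignal.timeout aborts the request if Ollama never responds
    const response = await fetch(`${baseUrl}/api/tags`, {
      signal: AbortSignal.timeout(5_000),
    })
    return response.ok
  } catch {
    // Covers network errors and the timeout abort alike
    return false
  }
}
```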
```ts
out[key] =
  (ALL_MODEL_CONFIGS[key] as any)[provider] ||
  ALL_MODEL_CONFIGS[key].firstParty
```
Remove as any type assertion.
The type assertion violates coding guidelines. Use proper type narrowing or type guards instead.
🔧 Proposed fix
```diff
 function getBuiltinModelStrings(provider: APIProvider): ModelStrings {
   const out = {} as ModelStrings
   for (const key of MODEL_KEYS) {
-    out[key] =
-      (ALL_MODEL_CONFIGS[key] as any)[provider] ||
-      ALL_MODEL_CONFIGS[key].firstParty
+    const config = ALL_MODEL_CONFIGS[key]
+    const providerSpecific = (config as Record<string, unknown>)[provider]
+    out[key] = (typeof providerSpecific === 'string' ? providerSpecific : null) || config.firstParty
   }
   return out
 }
```
As per coding guidelines: "Prohibit as any type assertions in production code; use as unknown as SpecificType double assertion or interface supplementation when type mismatches occur."
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/utils/model/modelStrings.ts` around lines 28 - 30, The expression
(ALL_MODEL_CONFIGS[key] as any)[provider] uses a forbidden any assertion;
replace it with a proper typed narrowing or a safe double-cast to a specific
interface for that entry. Define or use a ModelConfigEntry type (e.g., an object
with provider keys and a firstParty field) and cast ALL_MODEL_CONFIGS[key] via
'as unknown as ModelConfigEntry' or narrow it with a type guard before indexing,
then use (ALL_MODEL_CONFIGS[key] as unknown as ModelConfigEntry)[provider] ||
ALL_MODEL_CONFIGS[key].firstParty so you avoid any and keep type-safety for
ALL_MODEL_CONFIGS, key, provider, out, and firstParty.
```ts
const provider = getAPIProvider()
if (provider === 'local') return 'claude-opus-4-6' // Fallback for local
return CLAUDE_OPUS_4_6_CONFIG[provider]
```
Questionable hardcoded model fallback for local provider.
The hardcoded fallback 'claude-opus-4-6' doesn't make sense for local providers, which typically use different model identifiers (e.g., llama3.1, mistral, qwen). This could cause errors when the local provider tries to use an Anthropic-specific model ID.
Consider either:
- Using `process.env.LOCAL_MODEL` as the fallback (consistent with `src/services/api/openai/index.ts`)
- Documenting that local providers should explicitly configure `teammateDefaultModel` in `/config`
- Returning a more generic fallback or throwing an error to force configuration
🔧 Proposed fix (Option 1)
```diff
 export function getHardcodedTeammateModelFallback(): string {
   const provider = getAPIProvider()
-  if (provider === 'local') return 'claude-opus-4-6' // Fallback for local
+  if (provider === 'local') {
+    return process.env.LOCAL_MODEL || 'llama3.1' // Use LOCAL_MODEL env var or sensible local default
+  }
   return CLAUDE_OPUS_4_6_CONFIG[provider]
 }
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@src/utils/swarm/teammateModel.ts` around lines 9 - 11, The current hardcoded
local fallback 'claude-opus-4-6' in teammateModel.ts is invalid for
non-Anthropic local providers; replace that hardcoded string so local uses the
same configurable fallback as elsewhere: return process.env.LOCAL_MODEL (or
process.env.LOCAL_MODEL || another documented default) instead of
'claude-opus-4-6', and update the comment to note that local model comes from
process.env.LOCAL_MODEL; keep the surrounding logic using getAPIProvider() and
CLAUDE_OPUS_4_6_CONFIG unchanged.
Errors to be fixed soon. |
Summary by CodeRabbit
Release Notes
New Features
Documentation
Bug Fixes