
Commit b6610e6

fix(assistant): address PR #293 CR: anonymous short-circuit without throwing + ZHIPU_API_KEY validation

Copilot flagged two issues on PR #293; this follow-up fixes both:

- **app/api/chat/route.ts**: the fallback was previously triggered with `throw new Error("Anonymous request")`, with the side effect that every anonymous request got caught and logged as a "Java Backend unavailable" warn complete with a stack trace, flooding production logs. Replaced with an explicit if-else branch that short-circuits straight to local inference without throwing and emits a single info-level log line. Also cleaned up the redundant x-satoken ternary spread.
- **lib/ai/providers/intern.ts**: process.env.ZHIPU_API_KEY was previously passed straight to createOpenAICompatible; when unset, the downstream 401/500 still shows up in the UI as "unauthorized", exactly the symptom issue #285 set out to cure, so it cannot be left as-is. Added explicit validation that throws an error with Vercel configuration guidance when the key is missing.
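The route.ts change boils down to replacing control-flow-by-exception with an explicit branch. As a standalone illustration (the `decideProxy` helper and its shape are hypothetical, not code from the commit), the decision can be written as a pure function:

```typescript
// Hypothetical helper (not in the commit) isolating the routing decision:
// an anonymous request skips the backend proxy outright instead of throwing
// into a catch block, so it never produces a stack-trace warn.
type ProxyDecision =
  | { proxy: false; reason: "anonymous" }
  | { proxy: true; satoken: string };

function decideProxy(satoken: string | null): ProxyDecision {
  if (!satoken) {
    return { proxy: false, reason: "anonymous" }; // one info-level log, no error
  }
  return { proxy: true, satoken };
}
```

Modeling the decision as a value rather than an exception keeps the catch block reserved for genuine backend failures.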
1 parent b2ca53a commit b6610e6

2 files changed

Lines changed: 65 additions & 48 deletions


app/api/chat/route.ts

Lines changed: 52 additions & 47 deletions
```diff
@@ -35,60 +35,65 @@ export async function POST(req: Request) {
   // ====== Attempt graceful degradation: proxy to the Java backend ======
   // The Java backend's /openai/responses/stream is guarded by @SaCheckLogin; anonymous requests always get 401.
   // Skipping the proxy saves the 5s timeout and keeps the 401 text from being surfaced upstream as "unauthorized".
-  const hasAuthToken = Boolean(req.headers.get("x-satoken"));
-  try {
-    if (!hasAuthToken) {
-      throw new Error("Anonymous request, skip backend proxy.");
-    }
-    const backendUrl = process.env.BACKEND_URL;
-    if (!backendUrl) throw new Error("BACKEND_URL is not configured.");
+  // The anonymous branch short-circuits via an explicit if and stays out of the
+  // try/catch; otherwise every anonymous request would be caught and logged as a
+  // "Java Backend unavailable" warn with a stack trace, flooding production logs
+  // (Copilot CR #1).
+  const satoken = req.headers.get("x-satoken");
+  if (!satoken) {
+    console.log(
+      "[Chat Fallback Proxy] ⏭️ Anonymous request, skip backend proxy, use local inference.",
+    );
+  } else {
+    try {
+      const backendUrl = process.env.BACKEND_URL;
+      if (!backendUrl) throw new Error("BACKEND_URL is not configured.");
 
-    const controller = new AbortController();
-    const timeoutId = setTimeout(() => controller.abort(), 5000); // 5-second timeout
+      const controller = new AbortController();
+      const timeoutId = setTimeout(() => controller.abort(), 5000); // 5-second timeout
 
-    // Forward the frontend's payload to Java untouched
-    let proxyRes: Response;
-    try {
-      proxyRes = await fetch(`${backendUrl}/openai/responses/stream`, {
-        method: "POST",
-        headers: {
-          "Content-Type": "application/json",
-          // The browser passes the token as x-satoken; restore the satoken header the backend expects
-          ...(req.headers.get("x-satoken")
-            ? { satoken: req.headers.get("x-satoken")! }
-            : {}),
-        },
-        body: await proxyReq.text(),
-        signal: controller.signal,
-      });
-    } finally {
-      // Clear the timer whether fetch resolves or throws (network error / timeout abort)
-      clearTimeout(timeoutId);
-    }
+      // Forward the frontend's payload to Java untouched
+      let proxyRes: Response;
+      try {
+        proxyRes = await fetch(`${backendUrl}/openai/responses/stream`, {
+          method: "POST",
+          headers: {
+            "Content-Type": "application/json",
+            // The browser passes the token as x-satoken; restore the satoken header the backend expects
+            satoken,
+          },
+          body: await proxyReq.text(),
+          signal: controller.signal,
+        });
+      } finally {
+        // Clear the timer whether fetch resolves or throws (network error / timeout abort)
+        clearTimeout(timeoutId);
+      }
 
-    // If the Java backend responds successfully, pipe its stream back to the browser and return early
-    if (proxyRes.ok && proxyRes.body) {
-      console.log(
-        "[Chat Fallback Proxy] 🚀 Java Backend responded successfully. Piping stream...",
-      );
-      return new Response(proxyRes.body, {
-        headers: {
-          "Content-Type":
-            proxyRes.headers.get("Content-Type") || "text/plain; charset=utf-8",
-        },
-      });
-    } else {
-      console.warn(
-        `[Chat Fallback Proxy] ⚠️ Java Backend returned status: ${proxyRes.status}, fallback to local Next.js inference.`,
-      );
-    }
-  } catch (error) {
-    console.warn(
-      `[Chat Fallback Proxy] ❌ Java Backend unavailable or timed out, fallback to local Next.js inference. Error:`,
-      error,
-    );
-  }
-  // ====== Proxy failed; continue below and enable the fallback (local direct AI) ======
+      // If the Java backend responds successfully, pipe its stream back to the browser and return early
+      if (proxyRes.ok && proxyRes.body) {
+        console.log(
+          "[Chat Fallback Proxy] 🚀 Java Backend responded successfully. Piping stream...",
+        );
+        return new Response(proxyRes.body, {
+          headers: {
+            "Content-Type":
+              proxyRes.headers.get("Content-Type") ||
+              "text/plain; charset=utf-8",
+          },
+        });
+      } else {
+        console.warn(
+          `[Chat Fallback Proxy] ⚠️ Java Backend returned status: ${proxyRes.status}, fallback to local Next.js inference.`,
+        );
+      }
+    } catch (error) {
+      console.warn(
+        `[Chat Fallback Proxy] ❌ Java Backend unavailable or timed out, fallback to local Next.js inference. Error:`,
+        error,
+      );
+    }
+  }
+  // ====== Proxy failed or anonymous short-circuit; continue below and enable the fallback (local direct AI) ======
 
   try {
     // Consume the body first, then verify the user's identity in parallel
```
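The try/finally around the proxied fetch above is a reusable timeout pattern. A minimal self-contained sketch, with `doFetch` standing in for the real backend call (the helper name and signature are assumptions, not code from the commit):

```typescript
// Sketch of the timeout pattern from the diff: abort via AbortController after
// `ms` milliseconds, and clear the timer in finally so it runs whether the
// request resolves, rejects, or is aborted.
async function fetchWithTimeout(
  doFetch: (signal: AbortSignal) => Promise<string>,
  ms: number,
): Promise<string> {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), ms);
  try {
    return await doFetch(controller.signal);
  } finally {
    clearTimeout(timeoutId); // runs on success, error, and abort alike
  }
}
```

Without the finally, a successful early return would leak a live timer that later aborts an already-finished controller.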

lib/ai/providers/intern.ts

Lines changed: 13 additions & 1 deletion
```diff
@@ -23,10 +23,22 @@ export function createInternModel() {
     return deepseek("deepseek-chat");
   }
 
+  // Explicitly validate ZHIPU_API_KEY: if it is missing, the downstream 401 gets
+  // passed through to the UI as "unauthorized"; exactly the symptom issue #285 was
+  // meant to cure. Throwing early with guidance lets ops see from the log which
+  // env var to add, avoiding a repeat incident (Copilot CR #2).
+  const zhipuApiKey = process.env.ZHIPU_API_KEY;
+  if (!zhipuApiKey || zhipuApiKey.trim() === "") {
+    throw new Error(
+      "Missing required environment variable ZHIPU_API_KEY. " +
+        "Configure it in Vercel Project Settings → Environment Variables. " +
+        "A free key is available from https://open.bigmodel.cn/.",
+    );
+  }
+
   const glm = createOpenAICompatible({
     name: "zhipu",
     baseURL: "https://open.bigmodel.cn/api/paas/v4/",
-    apiKey: process.env.ZHIPU_API_KEY,
+    apiKey: zhipuApiKey,
   });
 
   return glm("glm-4.6v-flash");
```
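The validation added to intern.ts generalizes to a small fail-fast helper. A minimal sketch under assumed names (`requireEnv` is not in the commit):

```typescript
// Hypothetical fail-fast helper (not in the commit) generalizing the
// ZHIPU_API_KEY check: reject missing or blank env vars with a hint that
// tells ops exactly what to configure, instead of a confusing downstream 401.
function requireEnv(name: string, hint: string): string {
  const value = process.env[name];
  if (!value || value.trim() === "") {
    throw new Error(`Missing required environment variable ${name}. ${hint}`);
  }
  return value;
}
```

In intern.ts this would read `const zhipuApiKey = requireEnv("ZHIPU_API_KEY", "Configure it in Vercel Project Settings → Environment Variables.")`.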
