Compare commits

...

54 Commits

Author SHA1 Message Date
CJACK.
87e1b05e8e Merge pull request #313 from CJackHwang/dev
toolcall optimization patch
2026-04-26 09:53:54 +08:00
CJACK.
f6df01d3aa Merge pull request #312 from CJackHwang/codex/docs-4x-architecture
Codex/docs 4x architecture
2026-04-26 09:49:49 +08:00
CJACK
0fb1bc6611 Tool optimization 2026-04-26 09:44:59 +08:00
CJACK
0bfddf7943 1 2026-04-26 09:17:40 +08:00
CJACK.
2adbdd069c Merge pull request #310 from CJackHwang/dev
others
2026-04-26 08:44:20 +08:00
CJACK
40b8182984 docs: update architecture diagrams for 4.x 2026-04-26 08:40:41 +08:00
CJACK
66c2944be2 docs: update architecture diagrams for 4.x 2026-04-26 08:40:00 +08:00
CJACK.
193351ac19 Merge pull request #308 from CJackHwang/dependabot/npm_and_yarn/webui/npm_and_yarn-754666cf41
chore(deps-dev): bump postcss from 8.5.8 to 8.5.10 in /webui in the npm_and_yarn group across 1 directory
2026-04-26 08:36:58 +08:00
dependabot[bot]
a3b21c6b76 chore(deps-dev): bump postcss
Bumps the npm_and_yarn group with 1 update in the /webui directory: [postcss](https://github.com/postcss/postcss).


Updates `postcss` from 8.5.8 to 8.5.10
- [Release notes](https://github.com/postcss/postcss/releases)
- [Changelog](https://github.com/postcss/postcss/blob/main/CHANGELOG.md)
- [Commits](https://github.com/postcss/postcss/compare/8.5.8...8.5.10)

---
updated-dependencies:
- dependency-name: postcss
  dependency-version: 8.5.10
  dependency-type: direct:development
  dependency-group: npm_and_yarn
...

Signed-off-by: dependabot[bot] <support@github.com>
2026-04-26 00:34:47 +00:00
CJACK.
573c717a5d Merge pull request #307 from CJackHwang/dev
[codex] 4.0.0 refactor first preview
2026-04-26 08:33:48 +08:00
CJACK
40c61949e8 align vercel stream finalization with go 2026-04-26 08:29:23 +08:00
CJACK
7bff2c1bab refactor(toolcall): dynamically generate tool-call examples from the actually available tool names
- Replace hard-coded example tool names with names picked from the tools actually declared in the request
- Match example tools by category (read / write-execute / interactive / nested)
- Use the correct parameter name (command/cmd) in execute-category script examples, avoiding accidental use of file-write parameters
- Omit the corresponding example section when suitable tools are missing, so unavailable tool names never end up in the prompt
- Update the prompt-compatibility.md documentation accordingly
2026-04-26 07:54:01 +08:00
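The selection logic this commit describes could look roughly like the sketch below; the category keyword lists and the helper name are assumptions for illustration, not the actual patch:

```javascript
// Hypothetical sketch of the commit's idea: pick prompt-example tool names
// from the tools the request actually declares, per category, and omit a
// category's example section entirely when nothing matches.
// CATEGORY_HINTS is an assumed keyword table, not the real implementation.
const CATEGORY_HINTS = {
  read: ["read", "get", "list", "search"],
  execute: ["exec", "run", "shell", "command"],
};

function pickExampleTools(declaredToolNames) {
  const picked = {};
  for (const [category, hints] of Object.entries(CATEGORY_HINTS)) {
    const match = declaredToolNames.find((name) =>
      hints.some((hint) => name.toLowerCase().includes(hint))
    );
    // An absent key means the prompt builder skips that example section,
    // so no unavailable tool name is ever written into the prompt.
    if (match) picked[category] = match;
  }
  return picked;
}
```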
CJACK
4c83f36089 Force-enable file splitting (the model actually ignores it) 2026-04-26 07:31:19 +08:00
CJACK
abc96a37d8 refactor backend API structure 2026-04-26 06:58:20 +08:00
CJACK
8a91fef6ab update doc 2026-04-26 04:58:35 +08:00
CJACK
df61f06d9a Normalization optimization 2026-04-26 04:44:55 +08:00
CJACK
7475defeca fix: align tool call protocol and thinking controls 2026-04-26 04:26:51 +08:00
CJACK
f13ad231ac Unify mapping globally 2026-04-26 01:58:15 +08:00
CJACK
1b0e8cbadb Tighten XML tool call parsing and upstream empty handling 2026-04-26 01:17:16 +08:00
CJACK
a44afb335a Relax CORS preflight handling across interfaces 2026-04-26 00:37:25 +08:00
CJACK
f1ba805173 fix: fully mask web secret previews 2026-04-26 00:10:59 +08:00
CJACK
131ca7d398 feat: revamp DeepSeek v4 model handling
- replace legacy DeepSeek ids with the new deepseek-v4 model family
- move thinking control to request parameters and preserve assistant reasoning content
- switch history split to IGNORE transcript injection and map upload auth failures to 401
- update admin defaults, API docs, samples, and tests for the new model scheme
2026-04-26 00:02:14 +08:00
CJACK.
ed9efc5858 Merge pull request #303 from Topkill/main
fix(webui): replace the uuid dependency to fix key generation failures over plain http.
2026-04-25 20:31:41 +08:00
CJACK.
603e309721 Merge pull request #299 from MuziIsabel/fix/strip-content-encoding-in-proxy-go
fix: strip content-encoding header in proxyToGo to prevent Brotli decode error
2026-04-25 20:31:28 +08:00
topkill
c4cdce46c2 fix(webui): replace the uuid dependency to fix key generation failures over plain http.
The native `crypto.randomUUID()` is only available in secure contexts (https and localhost).
2026-04-25 19:21:38 +08:00
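A common workaround (a hypothetical sketch, not necessarily the code from PR #303) is to fall back to building an RFC 4122 v4 id by hand when `crypto.randomUUID` is unavailable:

```javascript
// Hypothetical randomUUID fallback for non-secure (plain http) contexts,
// where crypto.randomUUID is undefined; not the actual PR #303 code.
function generateUUID() {
  if (typeof crypto !== "undefined" && typeof crypto.randomUUID === "function") {
    return crypto.randomUUID(); // secure contexts: https / localhost
  }
  // Build RFC 4122 v4 layout from random bytes.
  const bytes = new Uint8Array(16);
  if (typeof crypto !== "undefined" && crypto.getRandomValues) {
    crypto.getRandomValues(bytes);
  } else {
    for (let i = 0; i < 16; i++) bytes[i] = Math.floor(Math.random() * 256);
  }
  bytes[6] = (bytes[6] & 0x0f) | 0x40; // version 4
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // variant 10xx
  const hex = [...bytes].map((b) => b.toString(16).padStart(2, "0")).join("");
  return `${hex.slice(0, 8)}-${hex.slice(8, 12)}-${hex.slice(12, 16)}-${hex.slice(16, 20)}-${hex.slice(20)}`;
}
```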
MuziIsabel
de9d128545 fix: strip content-encoding header in proxyToGo to prevent Brotli decode error
Node fetch auto-decompresses upstream responses, but proxy_go.js was
forwarding the original content-encoding header (e.g. br/gzip) to clients.
Clients then tried to decompress already-decompressed data and failed.
Filter out content-encoding alongside content-length.
2026-04-25 12:10:28 +08:00
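The fix amounts to dropping the headers invalidated by fetch's automatic decompression before forwarding the response. A rough sketch of the idea (assumed helper shape, not the actual `proxy_go.js` code):

```javascript
// Sketch of forwarding upstream response headers while dropping the ones
// invalidated by fetch's automatic decompression: fetch already inflates
// br/gzip bodies, so the original content-encoding and content-length no
// longer describe the bytes the client receives. Assumed helper, not the
// real proxy_go.js implementation.
const DROPPED_HEADERS = new Set(["content-encoding", "content-length"]);

function forwardableHeaders(upstreamHeaders) {
  const out = {};
  for (const [name, value] of Object.entries(upstreamHeaders)) {
    if (!DROPPED_HEADERS.has(name.toLowerCase())) out[name] = value;
  }
  return out;
}
```

Without the filter, clients see `content-encoding: br` on an already-decompressed body and fail trying to decompress it a second time.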
CJACK.
e4a4b0ac0b Merge pull request #290 from CJackHwang/dev
Fix the 3.6.0 Docker build issue
2026-04-23 20:35:18 +08:00
CJACK.
22e3f32c43 Merge pull request #289 from jacob-sheng/fix-webui-config-docker-build
Fix webui batch import template Docker build
2026-04-23 20:32:22 +08:00
阿钖
9a24b8dcc2 Fix webui config template Docker build 2026-04-23 12:28:01 +00:00
CJACK.
68ccbd3785 Merge pull request #287 from CJackHwang/main
Add Code of Conduct and improve security policy clarity
2026-04-23 19:43:17 +08:00
CJACK.
845fc1453e Bump version from 3.6.0 to 3.6.1 2026-04-23 19:41:08 +08:00
CJACK.
fe486d0078 Revise SECURITY.md for clarity and detail
Updated the security policy to clarify supported versions, reporting procedures, and vulnerability definitions.
2026-04-23 19:39:15 +08:00
CJACK.
d5c186b312 Add Contributor Covenant Code of Conduct 2026-04-23 19:22:47 +08:00
CJACK.
4cec942fff Merge pull request #286 from ouqiting/fix_chat_histroy
feat: chat history supports saving and displaying HISTORY content
2026-04-23 16:30:16 +08:00
ouqiting
9a404e75fc feat: chat history supports saving and displaying HISTORY content 2026-04-23 14:47:43 +08:00
CJACK.
d2c6445cfc Merge pull request #284 from CJackHwang/dev
A very large update (3.6.0)
2026-04-23 08:18:37 +08:00
CJACK.
b6fba47bcf feat: prepend strong instruction override to history prompt to ensure context adherence 2026-04-22 20:53:35 +00:00
CJACK.
e8d1aee7ad chore: update gitignore and documentation files 2026-04-22 20:23:32 +00:00
CJACK.
5cf56e7628 fix: reset tool call state between separate tool blocks to ensure unique IDs across stream segments 2026-04-22 20:10:06 +00:00
CJACK.
c291d333c4 feat: extract and inject assistant reasoning content into history split prompts 2026-04-22 19:56:28 +00:00
CJACK.
2788e20f05 feat: implement history split functionality to optimize context usage and add corresponding UI settings 2026-04-22 18:23:09 +00:00
CJACK.
f178000d69 docs: clarify config template synchronization and archive contents in README files 2026-04-22 17:36:18 +00:00
CJACK.
e840743295 refactor: centralize batch import templates and enable config file access in Vite 2026-04-22 17:30:39 +00:00
CJACK.
77484bf813 feat: add account editing functionality with UI modal and backend handler 2026-04-22 17:20:44 +00:00
CJACK.
f14969eca5 feat: implement API key metadata preservation and make chat history migration best-effort 2026-04-22 16:59:10 +00:00
CJACK.
fe8a6bd3cd refactor: improve chat history persistence reliability with metadata-only migration, error handling, and optimized file updates 2026-04-22 16:22:04 +00:00
CJACK.
797ab77873 Merge pull request #279 from CJackHwang/codex/add-api-key-name-handling-in-webui
feat(account): add structured API key and account name/remark support
2026-04-22 23:52:54 +08:00
CJACK.
8f09e3b381 feat: implement API key management with reconciliation and add update key endpoint 2026-04-22 15:51:43 +00:00
CJACK.
3a79b07d33 Merge pull request #282 from livesRan/fix/citation-link-mapping-pr
Fix occasionally unreplaced citation tags in search scenarios (keep collecting citation metadata after FINISHED)
2026-04-22 23:31:32 +08:00
CJACK.
df13f35f43 Merge pull request #281 from ouqiting/main
Add "chat history" to the webui
2026-04-22 23:30:59 +08:00
ouqiting
4422f989be fix: satisfy staticcheck QF1007 2026-04-22 20:28:08 +08:00
songguoliang
6052a8d1e2 Fix occasionally unreplaced citations in search scenarios 2026-04-22 19:03:07 +08:00
ouqiting
f125c7ab83 Add "chat history" 2026-04-22 15:17:10 +08:00
CJACK.
8ff923cd77 feat(account): add key/account name and remark metadata 2026-04-22 01:43:20 +08:00
328 changed files with 14052 additions and 4424 deletions

.gitignore

@@ -62,3 +62,8 @@ CLAUDE.local.md
# Local tool bootstrap cache
.tmp/
# Chat history
data/
.codex
.roomodes


@@ -21,3 +21,9 @@ These rules apply to all agent-made changes in this repository.
- Keep changes additive and tightly scoped to the requested feature or bugfix.
- Do not mix unrelated refactors into feature PRs unless they are required to make the change pass gates.
## Documentation Sync
- When business logic or user-visible behavior changes, update the corresponding documentation in the same change.
- `docs/prompt-compatibility.md` is the source-of-truth document for the “API -> pure-text web-chat context” compatibility flow.
- If a change affects message normalization, tool prompt injection, prompt-visible tool history, file/reference handling, history split, or completion payload assembly, update `docs/prompt-compatibility.md` in the same change.

API.en.md

@@ -31,13 +31,13 @@ Docs: [Overview](README.en.md) / [Architecture](docs/ARCHITECTURE.en.md) / [Depl
| Base URL | `http://localhost:5001` or your deployment domain |
| Default Content-Type | `application/json` |
| Health probes | `GET /healthz`, `GET /readyz` |
| CORS | Enabled (`Access-Control-Allow-Origin: *`, allows `Content-Type`, `Authorization`, `X-API-Key`, `X-Ds2-Target-Account`, `X-Vercel-Protection-Bypass`) |
| CORS | Enabled (uniformly covers `/v1/*`, `/anthropic/*`, `/v1beta/models/*`, and `/admin/*`; echoes the browser `Origin` when present, otherwise `*`; default allow-list includes `Content-Type`, `Authorization`, `X-API-Key`, `X-Ds2-Target-Account`, `X-Ds2-Source`, `X-Vercel-Protection-Bypass`, `X-Goog-Api-Key`, `Anthropic-Version`, `Anthropic-Beta`, and also accepts third-party preflight-requested headers such as `x-stainless-*`; `/v1/chat/completions` on Vercel Node Runtime matches the same behavior; internal-only `X-Ds2-Internal-Token` remains blocked) |
### 3.0 Adapter-Layer Notes
- OpenAI / Claude / Gemini protocols are now mounted on one shared `chi` router tree assembled in `internal/server/router.go`.
- Adapter responsibilities are streamlined to: **request normalization → DeepSeek invocation → protocol-shaped rendering**, reducing legacy split-logic paths.
- Tool-calling semantics are aligned between Go and Node runtime: parsing is now centered on XML/Markup-family tool syntax (`<tool_call>` / `<function_call>` / `<invoke>` / `tool_use` / antml variants), plus stream-time anti-leak filtering.
- Tool-calling semantics are aligned between Go and Node runtime: the only executable model-output syntax is the canonical XML tool block (`<tool_calls>` / `<invoke name="...">` / `<parameter name="...">`), plus stream-time anti-leak filtering.
- `Admin API` separates static config from runtime policy: `/admin/config*` for configuration state, `/admin/settings*` for runtime behavior.
---
@@ -130,7 +130,8 @@ Gemini-compatible clients can also send `x-goog-api-key`, `?key=`, or `?api_key=
| POST | `/admin/settings/password` | Admin | Update admin password and invalidate old JWTs |
| POST | `/admin/config/import` | Admin | Import config (merge/replace) |
| GET | `/admin/config/export` | Admin | Export full config (`config`/`json`/`base64`) |
| POST | `/admin/keys` | Admin | Add API key |
| POST | `/admin/keys` | Admin | Add API key (optional `name`/`remark`) |
| PUT | `/admin/keys/{key}` | Admin | Update API key metadata |
| DELETE | `/admin/keys/{key}` | Admin | Delete API key |
| GET | `/admin/proxies` | Admin | List proxies |
| POST | `/admin/proxies` | Admin | Add proxy |
@@ -139,6 +140,7 @@ Gemini-compatible clients can also send `x-goog-api-key`, `?key=`, or `?api_key=
| POST | `/admin/proxies/test` | Admin | Test proxy connectivity |
| GET | `/admin/accounts` | Admin | Paginated account list |
| POST | `/admin/accounts` | Admin | Add account |
| PUT | `/admin/accounts/{identifier}` | Admin | Update account name/remark |
| DELETE | `/admin/accounts/{identifier}` | Admin | Delete account |
| PUT | `/admin/accounts/{identifier}/proxy` | Admin | Bind/unbind proxy for an account |
| GET | `/admin/queue/status` | Admin | Account queue status |
@@ -156,6 +158,11 @@ Gemini-compatible clients can also send `x-goog-api-key`, `?key=`, or `?api_key=
| GET | `/admin/export` | Admin | Export config JSON/Base64 |
| GET | `/admin/dev/captures` | Admin | Read local packet-capture entries |
| DELETE | `/admin/dev/captures` | Admin | Clear local packet-capture entries |
| GET | `/admin/chat-history` | Admin | Read server-side conversation history |
| DELETE | `/admin/chat-history` | Admin | Clear server-side conversation history |
| GET | `/admin/chat-history/{id}` | Admin | Read one server-side conversation entry |
| DELETE | `/admin/chat-history/{id}` | Admin | Delete one server-side conversation entry |
| PUT | `/admin/chat-history/settings` | Admin | Update conversation history retention limit |
| GET | `/admin/version` | Admin | Check current version and latest Release |
---
@@ -188,18 +195,12 @@ No auth required. Returns the currently supported DeepSeek native model list.
{
"object": "list",
"data": [
{"id": "deepseek-chat", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-reasoner", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-chat-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-reasoner-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-expert-chat", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-expert-reasoner", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-expert-chat-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-expert-reasoner-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-vision-chat", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-vision-reasoner", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-vision-chat-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-vision-reasoner-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []}
{"id": "deepseek-v4-flash", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-pro", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-flash-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-pro-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-vision", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-vision-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []}
]
}
```
@@ -215,12 +216,15 @@ For `chat` / `responses` / `embeddings`, DS2API follows a wide-input/strict-outp
3. If still unmatched, fall back by known family heuristics (`o*`, `gpt-*`, `claude-*`, etc.).
4. If still unmatched, return `invalid_request_error`.
Current built-in default aliases (excerpt):
Built-in aliases come from `internal/config/models.go`; `config.model_aliases` can override or add mappings at runtime. Excerpt:
- OpenAI: `gpt-4o`, `gpt-4.1`, `gpt-4.1-mini`, `gpt-4.1-nano`, `gpt-5`, `gpt-5-mini`, `gpt-5-codex`
- OpenAI reasoning: `o1`, `o1-mini`, `o3`, `o3-mini`
- Claude: `claude-sonnet-4-5`, `claude-haiku-4-5`, `claude-opus-4-6` (plus compatibility aliases `claude-3-5-sonnet` / `claude-3-5-haiku` / `claude-3-opus`)
- Gemini: `gemini-2.5-pro`, `gemini-2.5-flash`
- OpenAI / Codex: `gpt-4o`, `gpt-4.1`, `gpt-5`, `gpt-5.5`, `gpt-5-codex`, `gpt-5.3-codex`, `codex-mini-latest`
- OpenAI reasoning: `o1`, `o3`, `o3-deep-research`, `o4-mini`
- Claude: `claude-opus-4-6`, `claude-sonnet-4-6`, `claude-haiku-4-5`, `claude-3-5-sonnet-latest`
- Gemini: `gemini-2.5-pro`, `gemini-2.5-flash`, `gemini-pro-vision`
- Other compatibility families: `llama-*`, `qwen-*`, `mistral-*`, and `command-*` fall back through family heuristics
Retired historical families such as `claude-1.*`, `claude-2.*`, `claude-instant-*`, and `gpt-3.5*` are explicitly rejected.
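The four-step resolution order above can be sketched roughly as follows; the `NATIVE` / `ALIASES` tables and the family-to-model mapping below are trimmed placeholders for illustration (the real data lives in `internal/config/models.go` and `config.model_aliases`):

```javascript
// Illustrative sketch of the wide-input model resolution order described
// above. The tables and the flash/pro heuristic are placeholders, not the
// project's real mappings.
const NATIVE = new Set(["deepseek-v4-flash", "deepseek-v4-pro", "deepseek-v4-vision"]);
const ALIASES = { "gpt-5": "deepseek-v4-pro", "claude-haiku-4-5": "deepseek-v4-flash" };

function resolveModel(requested) {
  if (NATIVE.has(requested)) return requested;          // 1. exact native id
  if (ALIASES[requested]) return ALIASES[requested];    // 2. alias table hit
  if (/^(o\d|gpt-|claude-|gemini-)/.test(requested)) {  // 3. family heuristics
    // Placeholder heuristic: light-weight names map to flash, the rest to pro.
    return requested.includes("haiku") || requested.includes("flash")
      ? "deepseek-v4-flash"
      : "deepseek-v4-pro";
  }
  // 4. still unmatched: reject the request
  throw new Error("invalid_request_error: unknown model " + requested);
}
```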
### `POST /v1/chat/completions`
@@ -235,7 +239,7 @@ Content-Type: application/json
| Field | Type | Required | Notes |
| --- | --- | --- | --- |
| `model` | string | ✅ | DeepSeek native models + common aliases (`gpt-5`, `gpt-5-mini`, `gpt-5-codex`, `o3`, `claude-opus-4-6`, `gemini-2.5-pro`, `gemini-2.5-flash`, etc.) |
| `model` | string | ✅ | DeepSeek native models + common aliases (`gpt-5.5`, `gpt-5.4-mini`, `gpt-5.3-codex`, `o3`, `claude-opus-4-6`, `gemini-2.5-pro`, `gemini-2.5-flash`, etc.) |
| `messages` | array | ✅ | OpenAI-style messages |
| `stream` | boolean | ❌ | Default `false` |
| `tools` | array | ❌ | Function calling schema |
@@ -248,14 +252,14 @@ Content-Type: application/json
"id": "<chat_session_id>",
"object": "chat.completion",
"created": 1738400000,
"model": "deepseek-reasoner",
"model": "deepseek-v4-pro",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "final response",
"reasoning_content": "reasoning trace (reasoner models)"
"reasoning_content": "reasoning trace (when thinking is enabled)"
},
"finish_reason": "stop"
}
@@ -290,7 +294,7 @@ data: [DONE]
**Field notes**:
- First delta includes `role: assistant`
- `deepseek-reasoner` / `deepseek-reasoner-search` models emit `delta.reasoning_content`
- When thinking is enabled, the stream may emit `delta.reasoning_content`
- Text emits `delta.content`
- Last chunk includes `finish_reason` and `usage`
- Token counting prefers pass-through from upstream DeepSeek SSE (`accumulated_token_usage` / `token_usage`), and only falls back to local estimation when upstream usage is absent
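A client-side sketch of folding those deltas back into a final message (field names follow the chunk shapes described above; parsing the SSE transport itself — `data:` lines and `[DONE]` — is assumed to happen before this step):

```javascript
// Sketch: accumulate streamed chat.completion.chunk deltas into one message.
// Field names (delta.role / delta.content / delta.reasoning_content /
// finish_reason) follow the stream field notes above.
function accumulateChunks(chunks) {
  const message = { role: "assistant", content: "", reasoning_content: "" };
  let finishReason = null;
  for (const chunk of chunks) {
    const choice = chunk.choices && chunk.choices[0];
    if (!choice) continue;
    const delta = choice.delta || {};
    if (delta.role) message.role = delta.role;                 // first delta carries the role
    if (delta.reasoning_content) message.reasoning_content += delta.reasoning_content;
    if (delta.content) message.content += delta.content;
    if (choice.finish_reason) finishReason = choice.finish_reason; // last chunk
  }
  return { message, finishReason };
}
```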
@@ -330,7 +334,7 @@ When `tools` is present, DS2API performs anti-leak handling:
Additional notes:
- The parser currently follows XML/Markup-family tool payloads (`<tool_call>`, `<function_call>`, `<invoke>`, `tool_use`, antml variants). Standalone JSON `tool_calls` payloads are not treated as executable tool calls by default.
- The parser currently treats only canonical XML tool blocks (`<tool_calls>` / `<invoke name="...">` / `<parameter name="...">`) as executable tool calls. Legacy `<tools>`, `<tool_call>`, `<tool_name>`, `<param>`, `<function_call>`, `tool_use`, antml variants, and standalone JSON `tool_calls` payloads are treated as plain text.
- `tool_calls` shown inside fenced markdown code blocks (for example, ```json ... ```) are treated as examples, not executable calls.
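As a rough illustration of the accepted shape, a naive non-streaming extraction of the canonical block might look like this; the real parser is stream-aware and also performs the anti-leak filtering described above:

```javascript
// Naive sketch of extracting the canonical XML tool block described above.
// Only <tool_calls> / <invoke name="..."> / <parameter name="..."> are
// treated as executable; anything else (legacy variants, bare JSON) falls
// through as plain text. Illustration only, not the project's parser.
function parseToolCalls(text) {
  const calls = [];
  const block = text.match(/<tool_calls>([\s\S]*?)<\/tool_calls>/);
  if (!block) return calls; // legacy tags / standalone JSON stay plain text
  const invokeRe = /<invoke name="([^"]+)">([\s\S]*?)<\/invoke>/g;
  const paramRe = /<parameter name="([^"]+)">([\s\S]*?)<\/parameter>/g;
  let inv;
  while ((inv = invokeRe.exec(block[1])) !== null) {
    const args = {};
    let p;
    paramRe.lastIndex = 0;
    while ((p = paramRe.exec(inv[2])) !== null) args[p[1]] = p[2];
    calls.push({ name: inv[1], arguments: args });
  }
  return calls;
}
```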
---
@@ -442,17 +446,17 @@ No auth required.
{
"object": "list",
"data": [
{"id": "claude-sonnet-4-5", "object": "model", "created": 1715635200, "owned_by": "anthropic"},
{"id": "claude-sonnet-4-6", "object": "model", "created": 1715635200, "owned_by": "anthropic"},
{"id": "claude-haiku-4-5", "object": "model", "created": 1715635200, "owned_by": "anthropic"},
{"id": "claude-opus-4-6", "object": "model", "created": 1715635200, "owned_by": "anthropic"}
],
"first_id": "claude-opus-4-6",
"last_id": "claude-instant-1.0",
"last_id": "claude-3-haiku-20240307",
"has_more": false
}
```
> Note: the example is partial; besides the current primary aliases, the real response also includes Claude 4.x snapshots plus historical 3.x / 2.x / 1.x IDs and common aliases.
> Note: the example is partial; besides the current primary aliases, the real response also includes Claude 4.x snapshots plus historical 3.x IDs and common aliases.
### `POST /anthropic/v1/messages`
@@ -470,7 +474,7 @@ anthropic-version: 2023-06-01
| Field | Type | Required | Notes |
| --- | --- | --- | --- |
| `model` | string | ✅ | For example `claude-sonnet-4-5` / `claude-opus-4-6` / `claude-haiku-4-5` (compatible with `claude-3-5-haiku-latest`), plus historical Claude model IDs |
| `model` | string | ✅ | For example `claude-sonnet-4-6` / `claude-opus-4-6` / `claude-haiku-4-5` (compatible with `claude-3-5-haiku-latest`), plus historical Claude model IDs |
| `messages` | array | ✅ | Claude-style messages |
| `max_tokens` | number | ❌ | Auto-filled to `8192` when omitted; not strictly enforced by upstream bridge |
| `stream` | boolean | ❌ | Default `false` |
@@ -484,7 +488,7 @@ anthropic-version: 2023-06-01
"id": "msg_1738400000000000000",
"type": "message",
"role": "assistant",
"model": "claude-sonnet-4-5",
"model": "claude-sonnet-4-6",
"content": [
{"type": "text", "text": "response"}
],
@@ -538,7 +542,7 @@ data: {"type":"message_stop"}
```json
{
"model": "claude-sonnet-4-5",
"model": "claude-sonnet-4-6",
"messages": [
{"role": "user", "content": "Hello"}
]
@@ -643,11 +647,15 @@ Returns Vercel preconfiguration status.
### `GET /admin/config`
Returns sanitized config.
Returns sanitized config, including both `keys` and `api_keys`.
```json
{
"keys": ["k1", "k2"],
"api_keys": [
{"key": "k1", "name": "Primary", "remark": "Production"},
{"key": "k2", "name": "Backup", "remark": "Load test"}
],
"env_backed": false,
"env_source_present": true,
"env_writeback_enabled": true,
@@ -662,28 +670,33 @@ Returns sanitized config.
"token_preview": "abcde..."
}
],
"claude_mapping": {
"fast": "deepseek-chat",
"slow": "deepseek-reasoner"
"model_aliases": {
"claude-sonnet-4-6": "deepseek-v4-flash",
"claude-opus-4-6": "deepseek-v4-pro"
}
}
```
### `POST /admin/config`
Only updates `keys`, `accounts`, and `claude_mapping`.
Only updates `keys`, `api_keys`, `accounts`, and `model_aliases`.
If both `api_keys` and `keys` are sent, the structured `api_keys` entries win so `name` / `remark` metadata is preserved; `keys` remains a legacy fallback.
**Request**:
```json
{
"keys": ["k1", "k2"],
"api_keys": [
{"key": "k1", "name": "Primary", "remark": "Production"},
{"key": "k2", "name": "Backup", "remark": "Load test"}
],
"accounts": [
{"email": "user@example.com", "password": "pwd", "token": ""}
],
"claude_mapping": {
"fast": "deepseek-chat",
"slow": "deepseek-reasoner"
"model_aliases": {
"claude-sonnet-4-6": "deepseek-v4-flash",
"claude-opus-4-6": "deepseek-v4-pro"
}
}
```
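The precedence rule above (structured `api_keys` entries win so `name` / `remark` survive, with `keys` as a legacy fallback) can be sketched as follows; this is an assumed helper for illustration, not the server's actual reconciliation code:

```javascript
// Sketch of the described precedence: structured api_keys entries win so
// name/remark metadata is preserved; bare legacy keys are kept only when
// not already present as structured entries.
function reconcileKeys(payload) {
  const byKey = new Map();
  for (const entry of payload.api_keys || []) {
    byKey.set(entry.key, { key: entry.key, name: entry.name || "", remark: entry.remark || "" });
  }
  for (const key of payload.keys || []) {
    if (!byKey.has(key)) byKey.set(key, { key, name: "", remark: "" });
  }
  return [...byKey.values()];
}
```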
@@ -698,7 +711,8 @@ Reads runtime settings and status, including:
- `compat` (`wide_input_strict_output`, `strip_reference_markers`)
- `responses` / `embeddings`
- `auto_delete` (`mode`: `none` / `single` / `all`; legacy `sessions=true` is still treated as `all`)
- `claude_mapping` / `model_aliases`
- `history_split` (`enabled` always returns `true`, `trigger_after_turns`)
- `model_aliases`
- `env_backed`, `needs_vercel_sync`
- `toolcall` policy is fixed to `feature_match + high` and is no longer returned or editable via settings
@@ -712,7 +726,7 @@ Hot-updates runtime settings. Supported fields:
- `responses.store_ttl_seconds`
- `embeddings.provider`
- `auto_delete.mode`
- `claude_mapping`
- `history_split.trigger_after_turns` (`history_split.enabled` is forced on globally; legacy client writes are stored as `true`)
- `model_aliases`
- `toolcall` policy is fixed and is no longer writable through settings
@@ -737,9 +751,9 @@ Imports full config with:
The request can send config directly, or wrapped as `{"config": {...}, "mode":"merge"}`.
Query params `?mode=merge` / `?mode=replace` are also supported.
Import accepts `keys`, `accounts`, `claude_mapping` / `claude_model_mapping`, `model_aliases`, `admin`, `runtime`, `responses`, `embeddings`, and `auto_delete`; legacy `toolcall` fields are ignored.
`replace` mode replaces the full config shape while preserving Vercel sync metadata. `merge` mode merges `keys`, `api_keys`, `accounts`, and `model_aliases`, and overwrites non-empty fields under `admin`, `runtime`, `responses`, and `embeddings`. Manage `compat`, `auto_delete`, and `history_split` via `/admin/settings` or the config file; legacy `toolcall` fields are ignored.
> `compat` fields are managed via `/admin/settings` or the config file; this import endpoint does not update `compat`.
> Note: `merge` mode does not update `compat`, `auto_delete`, or `history_split`.
### `GET /admin/config/export`
@@ -748,7 +762,17 @@ Exports full config in three forms: `config`, `json`, and `base64`.
### `POST /admin/keys`
```json
{"key": "new-api-key"}
{"key": "new-api-key", "name": "Primary", "remark": "Production"}
```
**Response**: `{"success": true, "total_keys": 3}`
### `PUT /admin/keys/{key}`
Updates the `name` / `remark` of the specified API key. The path `key` is read-only and cannot be changed.
```json
{"name": "Backup", "remark": "Load test"}
```
**Response**: `{"success": true, "total_keys": 3}`
@@ -819,6 +843,16 @@ Returned items also include `test_status`, usually `ok` or `failed`.
**Response**: `{"success": true, "total_accounts": 6}`
### `PUT /admin/accounts/{identifier}`
Updates the `name` / `remark` of the specified account. The path `identifier` can be email or mobile and cannot be changed.
```json
{"name": "Primary account", "remark": "Shared with the team"}
```
**Response**: `{"success": true, "total_accounts": 6}`
### `DELETE /admin/accounts/{identifier}`
`identifier` can be email, mobile, or the synthetic id for token-only accounts (`token:<hash>`).
@@ -868,7 +902,7 @@ Updates proxy binding for a specific account.
| Field | Required | Notes |
| --- | --- | --- |
| `identifier` | ✅ | email / mobile / token-only synthetic id |
| `model` | ❌ | default `deepseek-chat` |
| `model` | ❌ | default `deepseek-v4-flash` |
| `message` | ❌ | if empty, only session creation is tested |
**Response**:
@@ -879,7 +913,7 @@ Updates proxy binding for a specific account.
"success": true,
"response_time": 1240,
"message": "API test successful (session creation only)",
"model": "deepseek-chat",
"model": "deepseek-v4-flash",
"session_count": 0,
"config_writable": true
}
@@ -950,7 +984,7 @@ Test API availability through the service itself.
| Field | Required | Default |
| --- | --- | --- |
| `model` | ❌ | `deepseek-chat` |
| `model` | ❌ | `deepseek-v4-flash` |
| `message` | ❌ | `你好` |
| `api_key` | ❌ | First key in config |
@@ -974,7 +1008,7 @@ Common request fields:
| --- | --- | --- | --- |
| `message` | No | `你好` | Convenience single-turn user message |
| `messages` | No | Auto-derived from `message` | OpenAI-style message array |
| `model` | No | `deepseek-chat` | Target model |
| `model` | No | `deepseek-v4-flash` | Target model |
| `stream` | No | `true` | Recommended to keep streaming enabled so raw SSE is recorded |
| `api_key` | No | First configured key | Business API key to use |
| `sample_id` | No | Auto-generated | Sample directory name |
@@ -1184,7 +1218,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat",
"model": "deepseek-v4-flash",
"messages": [{"role": "user", "content": "Hello"}],
"stream": false
}'
@@ -1197,7 +1231,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-reasoner",
"model": "deepseek-v4-pro",
"messages": [{"role": "user", "content": "Explain quantum entanglement"}],
"stream": true
}'
@@ -1235,7 +1269,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat-search",
"model": "deepseek-v4-flash-search",
"messages": [{"role": "user", "content": "Latest news today"}],
"stream": true
}'
@@ -1248,7 +1282,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat",
"model": "deepseek-v4-flash",
"messages": [{"role": "user", "content": "What is the weather in Beijing?"}],
"tools": [
{
@@ -1309,7 +1343,7 @@ curl http://localhost:5001/anthropic/v1/messages \
-H "Content-Type: application/json" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-sonnet-4-5",
"model": "claude-sonnet-4-6",
"max_tokens": 1024,
"messages": [{"role": "user", "content": "Hello"}]
}'
@@ -1346,7 +1380,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "X-Ds2-Target-Account: user@example.com" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat",
"model": "deepseek-v4-flash",
"messages": [{"role": "user", "content": "Hello"}]
}'
```

API.md

@@ -31,13 +31,13 @@
| Base URL | `http://localhost:5001` or your deployment domain |
| Default Content-Type | `application/json` |
| Health probes | `GET /healthz`, `GET /readyz` |
| CORS | Enabled (`Access-Control-Allow-Origin: *`; allows `Content-Type`, `Authorization`, `X-API-Key`, `X-Ds2-Target-Account`, `X-Vercel-Protection-Bypass`) |
| CORS | Enabled (uniformly covers `/v1/*`, `/anthropic/*`, `/v1beta/models/*`, and `/admin/*`; echoes the browser `Origin` when present, otherwise `*`; the default allow-list includes `Content-Type`, `Authorization`, `X-API-Key`, `X-Ds2-Target-Account`, `X-Ds2-Source`, `X-Vercel-Protection-Bypass`, `X-Goog-Api-Key`, `Anthropic-Version`, `Anthropic-Beta`, and third-party headers declared in the preflight, such as `x-stainless-*`, are also allowed; `/v1/chat/completions` on the Vercel Node Runtime matches the same behavior; the internal-only header `X-Ds2-Internal-Token` remains blocked) |
### 3.0 Adapter-Layer Notes
- The OpenAI / Claude / Gemini protocols are mounted on one shared `chi` router tree, assembled by `internal/server/router.go`.
- Adapter responsibilities are narrowed to: **request normalization → DeepSeek invocation → protocol-shaped rendering**, removing the historical "same capability, multiple implementations" forks.
- Tool-calling parsing is kept consistent between the Go and Node runtimes: currently centered on the XML/Markup family (`<tool_call>` / `<function_call>` / `<invoke>` / `tool_use` / antml variants), with stream-time anti-leak filtering.
- Tool-calling parsing is kept consistent between the Go and Node runtimes: the only executable model-output syntax is the canonical XML tool block (`<tool_calls>` / `<invoke name="...">` / `<parameter name="...">`), with stream-time anti-leak filtering.
- The `Admin API` separates configuration from runtime policy: `/admin/config*` manages static configuration, `/admin/settings*` manages runtime behavior.
---
@@ -130,7 +130,8 @@ Gemini-compatible clients can also use `x-goog-api-key`, `?key=`, or `?api_key=`
| POST | `/admin/settings/password` | Admin | Update the admin password and invalidate old JWTs |
| POST | `/admin/config/import` | Admin | Import config (merge/replace) |
| GET | `/admin/config/export` | Admin | Export full config (`config`/`json`/`base64`) |
| POST | `/admin/keys` | Admin | Add API key |
| POST | `/admin/keys` | Admin | Add API key (optional name/remark) |
| PUT | `/admin/keys/{key}` | Admin | Update API key metadata |
| DELETE | `/admin/keys/{key}` | Admin | Delete API key |
| GET | `/admin/proxies` | Admin | List proxies |
| POST | `/admin/proxies` | Admin | Add proxy |
@@ -139,6 +140,7 @@ Gemini-compatible clients can also use `x-goog-api-key`, `?key=`, or `?api_key=`
| POST | `/admin/proxies/test` | Admin | Test proxy connectivity |
| GET | `/admin/accounts` | Admin | Paginated account list |
| POST | `/admin/accounts` | Admin | Add account |
| PUT | `/admin/accounts/{identifier}` | Admin | Update account name/remark |
| DELETE | `/admin/accounts/{identifier}` | Admin | Delete account |
| PUT | `/admin/accounts/{identifier}/proxy` | Admin | Bind/unbind a proxy for an account |
| GET | `/admin/queue/status` | Admin | Account queue status |
@@ -156,6 +158,11 @@ Gemini-compatible clients can also use `x-goog-api-key`, `?key=`, or `?api_key=`
| GET | `/admin/export` | Admin | Export config JSON/Base64 |
| GET | `/admin/dev/captures` | Admin | Read local packet-capture entries |
| DELETE | `/admin/dev/captures` | Admin | Clear local packet-capture entries |
| GET | `/admin/chat-history` | Admin | Read server-side conversation history |
| DELETE | `/admin/chat-history` | Admin | Clear server-side conversation history |
| GET | `/admin/chat-history/{id}` | Admin | Read one server-side conversation entry |
| DELETE | `/admin/chat-history/{id}` | Admin | Delete one server-side conversation entry |
| PUT | `/admin/chat-history/settings` | Admin | Update the conversation history retention limit |
| GET | `/admin/version` | Admin | Check the current version and the latest Release |
---
@@ -188,18 +195,12 @@ Gemini-compatible clients can also use `x-goog-api-key`, `?key=`, or `?api_key=`
{
"object": "list",
"data": [
{"id": "deepseek-chat", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-reasoner", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-chat-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-reasoner-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-expert-chat", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-expert-reasoner", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-expert-chat-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-expert-reasoner-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-vision-chat", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-vision-reasoner", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-vision-chat-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-vision-reasoner-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []}
{"id": "deepseek-v4-flash", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-pro", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-flash-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-pro-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-vision", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
{"id": "deepseek-v4-vision-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []}
]
}
```
@@ -215,12 +216,15 @@ Gemini 兼容客户端还可以使用 `x-goog-api-key`、`?key=` 或 `?api_key=`
3. 未命中时按模型家族规则回退(如 `o*`、`gpt-*`、`claude-*`)。
4. 仍未命中则返回 `invalid_request_error`
当前内置默认 alias(节选):
当前内置默认 alias 来自 `internal/config/models.go`,`config.model_aliases` 会在运行时覆盖或补充同名映射。节选:
- OpenAI:`gpt-4o`、`gpt-4.1`、`gpt-4.1-mini`、`gpt-4.1-nano`、`gpt-5`、`gpt-5-mini`、`gpt-5-codex`
- OpenAI Reasoning:`o1`、`o1-mini`、`o3`、`o3-mini`
- Claude:`claude-sonnet-4-5`、`claude-haiku-4-5`、`claude-opus-4-6`(及 `claude-3-5-sonnet` / `claude-3-5-haiku` / `claude-3-opus` 兼容别名)
- Gemini:`gemini-2.5-pro`、`gemini-2.5-flash`
- OpenAI / Codex:`gpt-4o`、`gpt-4.1`、`gpt-5`、`gpt-5.5`、`gpt-5-codex`、`gpt-5.3-codex`、`codex-mini-latest`
- OpenAI reasoning:`o1`、`o3`、`o3-deep-research`、`o4-mini`
- Claude:`claude-opus-4-6`、`claude-sonnet-4-6`、`claude-haiku-4-5`、`claude-3-5-sonnet-latest`
- Gemini:`gemini-2.5-pro`、`gemini-2.5-flash`、`gemini-pro-vision`
- 其他兼容族:`llama-*`、`qwen-*`、`mistral-*`、`command-*` 会按家族启发式回退
退役历史模型(如 `claude-1.*`、`claude-2.*`、`claude-instant-*`、`gpt-3.5*`)会被显式拒绝。
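上面的解析顺序可以用一个示意脚本表达(非项目代码;精确映射的右值与家族回退目标均为示例,实际以 `internal/config/models.go` 与 `config.model_aliases` 为准):

```shell
# 示意:按"精确命中 -> 退役拒绝 -> 家族回退 -> invalid_request_error"的顺序解析
resolve_model() {
  case "$1" in
    deepseek-v4-*) echo "$1" ;;                        # 原生模型直接透传
    gpt-5.3-codex) echo "deepseek-v4-pro" ;;           # 精确 alias 命中(示例映射)
    claude-1.*|claude-2.*|claude-instant-*|gpt-3.5*)   # 退役模型显式拒绝
      echo "retired_model_error"; return 1 ;;
    o*|gpt-*|claude-*|gemini-*|llama-*|qwen-*)         # 家族启发式回退(示例目标)
      echo "deepseek-v4-flash" ;;
    *) echo "invalid_request_error"; return 1 ;;       # 仍未命中
  esac
}

resolve_model "gpt-5.3-codex"   # deepseek-v4-pro
resolve_model "o4-mini"         # deepseek-v4-flash
```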
### `POST /v1/chat/completions`
@@ -235,7 +239,7 @@ Content-Type: application/json
| 字段 | 类型 | 必填 | 说明 |
| --- | --- | --- | --- |
| `model` | string | ✅ | 支持 DeepSeek 原生模型 + 常见 alias(`gpt-5`、`gpt-5-mini`、`gpt-5-codex`、`o3`、`claude-opus-4-6`、`gemini-2.5-pro`、`gemini-2.5-flash` 等) |
| `model` | string | ✅ | 支持 DeepSeek 原生模型 + 常见 alias(`gpt-5.5`、`gpt-5.4-mini`、`gpt-5.3-codex`、`o3`、`claude-opus-4-6`、`claude-sonnet-4-6`、`gemini-2.5-pro`、`gemini-2.5-flash` 等) |
| `messages` | array | ✅ | OpenAI 风格消息数组 |
| `stream` | boolean | ❌ | 默认 `false` |
| `tools` | array | ❌ | Function Calling 定义 |
@@ -248,14 +252,14 @@ Content-Type: application/json
"id": "<chat_session_id>",
"object": "chat.completion",
"created": 1738400000,
"model": "deepseek-reasoner",
"model": "deepseek-v4-pro",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "最终回复",
"reasoning_content": "思考内容(reasoner 模型)"
"reasoning_content": "思考内容(开启 thinking 时)"
},
"finish_reason": "stop"
}
@@ -290,7 +294,7 @@ data: [DONE]
**字段说明**
- 首个 delta 包含 `role: assistant`
- `deepseek-reasoner` / `deepseek-reasoner-search` 模型输出 `delta.reasoning_content`
- 开启 thinking 时会输出 `delta.reasoning_content`
- 普通文本输出 `delta.content`
- 最后一段包含 `finish_reason``usage`
- token 计数优先透传上游 DeepSeek SSE(`accumulated_token_usage` / `token_usage`);仅在上游缺失时回退本地估算
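按上述字段顺序,一次开启 thinking 的流式响应分片大致如下(模型内容与 token 数均为示意,字段名取自本文档):

```
data: {"choices":[{"index":0,"delta":{"role":"assistant"}}]}

data: {"choices":[{"index":0,"delta":{"reasoning_content":"思考片段"}}]}

data: {"choices":[{"index":0,"delta":{"content":"最终回复片段"}}]}

data: {"choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":12,"completion_tokens":34,"total_tokens":46}}

data: [DONE]
```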
@@ -331,7 +335,7 @@ data: [DONE]
补充说明:
- **非代码块上下文**下,工具负载即使与普通文本混合,也会按特征识别并产出可执行 tool call,前后普通文本仍可透传
- 解析器当前走 XML/Markup 家族(包含 `<tool_call>`、`<function_call>`、`<invoke>`、`tool_use`、antml 风格);纯 JSON `tool_calls` 片段默认不会直接作为可执行调用解析
- 解析器当前只把 canonical XML 工具块(`<tool_calls>` / `<invoke name="...">` / `<parameter name="...">`)作为可执行调用解析;旧式 `<tools>`、`<tool_call>`、`<tool_name>`、`<param>`、`<function_call>`、`tool_use`、antml 风格,以及纯 JSON `tool_calls` 片段默认都会按普通文本处理
- Markdown fenced code block(例如 ```json ... ```)中的 `tool_calls` 仅视为示例文本,不会被执行。
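按上面的说明,一个会被识别为可执行调用的 canonical XML 工具块形如(工具名与参数为示意):

```xml
<tool_calls>
<invoke name="get_weather">
<parameter name="city">北京</parameter>
</invoke>
</tool_calls>
```

同样内容若写成旧式 `<tool_call>` 包裹或纯 JSON `tool_calls` 片段,则只会作为普通文本透传。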
---
@@ -443,17 +447,17 @@ data: [DONE]
{
"object": "list",
"data": [
{"id": "claude-sonnet-4-5", "object": "model", "created": 1715635200, "owned_by": "anthropic"},
{"id": "claude-sonnet-4-6", "object": "model", "created": 1715635200, "owned_by": "anthropic"},
{"id": "claude-haiku-4-5", "object": "model", "created": 1715635200, "owned_by": "anthropic"},
{"id": "claude-opus-4-6", "object": "model", "created": 1715635200, "owned_by": "anthropic"}
],
"first_id": "claude-opus-4-6",
"last_id": "claude-instant-1.0",
"last_id": "claude-3-haiku-20240307",
"has_more": false
}
```
> 说明:示例仅展示部分模型;实际返回除当前主别名外,还包含 Claude 4.x snapshots,以及 3.x / 2.x / 1.x 历史模型 ID 与常见别名。
> 说明:示例仅展示部分模型;实际返回除当前主别名外,还包含 Claude 4.x snapshots,以及 3.x 历史模型 ID 与常见别名。
### `POST /anthropic/v1/messages`
@@ -471,7 +475,7 @@ anthropic-version: 2023-06-01
| 字段 | 类型 | 必填 | 说明 |
| --- | --- | --- | --- |
| `model` | string | ✅ | 例如 `claude-sonnet-4-5` / `claude-opus-4-6` / `claude-haiku-4-5`(兼容 `claude-3-5-haiku-latest`),并支持历史 Claude 模型 ID |
| `model` | string | ✅ | 例如 `claude-sonnet-4-6` / `claude-opus-4-6` / `claude-haiku-4-5`(兼容 `claude-sonnet-4-5`、`claude-3-5-haiku-latest`),并支持历史 Claude 模型 ID |
| `messages` | array | ✅ | Claude 风格消息数组 |
| `max_tokens` | number | ❌ | 缺省自动补 `8192`;当前实现不会硬性截断上游输出 |
| `stream` | boolean | ❌ | 默认 `false` |
@@ -485,7 +489,7 @@ anthropic-version: 2023-06-01
"id": "msg_1738400000000000000",
"type": "message",
"role": "assistant",
"model": "claude-sonnet-4-5",
"model": "claude-sonnet-4-6",
"content": [
{"type": "text", "text": "回复内容"}
],
@@ -539,7 +543,7 @@ data: {"type":"message_stop"}
```json
{
"model": "claude-sonnet-4-5",
"model": "claude-sonnet-4-6",
"messages": [
{"role": "user", "content": "你好"}
]
@@ -644,11 +648,15 @@ data: {"type":"message_stop"}
### `GET /admin/config`
返回脱敏后的配置。
返回脱敏后的配置,包含 `keys` 与 `api_keys`。
```json
{
"keys": ["k1", "k2"],
"api_keys": [
{"key": "k1", "name": "主 Key", "remark": "生产流量"},
{"key": "k2", "name": "备用 Key", "remark": "压测"}
],
"env_backed": false,
"env_source_present": true,
"env_writeback_enabled": true,
@@ -663,28 +671,33 @@ data: {"type":"message_stop"}
"token_preview": "abcde..."
}
],
"claude_mapping": {
"fast": "deepseek-chat",
"slow": "deepseek-reasoner"
"model_aliases": {
"claude-sonnet-4-6": "deepseek-v4-flash",
"claude-opus-4-6": "deepseek-v4-pro"
}
}
```
### `POST /admin/config`
只更新 `keys`、`accounts`、`claude_mapping`。
只更新 `keys`、`api_keys`、`accounts`、`model_aliases`。
如果同时发送 `api_keys` 与 `keys`,优先保留 `api_keys` 中的结构化 `name` / `remark`;`keys` 仅作为旧格式兼容回退。
**请求**
```json
{
"keys": ["k1", "k2"],
"api_keys": [
{"key": "k1", "name": "主 Key", "remark": "生产流量"},
{"key": "k2", "name": "备用 Key", "remark": "压测"}
],
"accounts": [
{"email": "user@example.com", "password": "pwd", "token": ""}
],
"claude_mapping": {
"fast": "deepseek-chat",
"slow": "deepseek-reasoner"
"model_aliases": {
"claude-sonnet-4-6": "deepseek-v4-flash",
"claude-opus-4-6": "deepseek-v4-pro"
}
}
```
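这里 "`api_keys` 优先、`keys` 回退" 的取舍逻辑可以示意如下(非项目代码,仅演示字段优先级):

```shell
# 示意:同时收到 api_keys 与 keys 时,保留结构化条目的 name/remark
api_keys='[{"key":"k1","name":"主 Key","remark":"生产流量"}]'
keys='["k1","k2"]'

# api_keys 非空则优先采用;否则回退旧格式的裸字符串数组 keys
if [ -n "$api_keys" ] && [ "$api_keys" != "[]" ]; then
  effective="$api_keys"
else
  effective="$keys"
fi
echo "$effective"
```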
@@ -699,7 +712,8 @@ data: {"type":"message_stop"}
- `compat`:`wide_input_strict_output`、`strip_reference_markers`
- `responses` / `embeddings`
- `auto_delete`:`mode`(`none` / `single` / `all`;旧配置 `sessions=true` 仍按 `all` 处理)
- `claude_mapping` / `model_aliases`
- `history_split`:`enabled` 固定返回 `true`,以及 `trigger_after_turns`
- `model_aliases`
- `env_backed`、`needs_vercel_sync`
- `toolcall` 策略已固定为 `feature_match + high`,不再通过 settings 返回或修改
@@ -713,7 +727,7 @@ data: {"type":"message_stop"}
- `responses.store_ttl_seconds`
- `embeddings.provider`
- `auto_delete.mode`
- `claude_mapping`
- `history_split.trigger_after_turns`(`history_split.enabled` 已全局强制开启;旧客户端传入时会被保存为 `true`)
- `model_aliases`
- `toolcall` 策略已固定,不再作为可写入字段
@@ -738,18 +752,33 @@ data: {"type":"message_stop"}
请求可直接传配置对象,或使用 `{"config": {...}, "mode":"merge"}` 包裹格式。
也支持在查询参数里传 `?mode=merge` / `?mode=replace`
导入时会接受 `keys`、`accounts`、`claude_mapping` / `claude_model_mapping`、`model_aliases`、`admin`、`runtime`、`responses`、`embeddings`、`auto_delete` 等字段;`toolcall` 相关字段会被忽略。
`replace` 模式会按完整配置结构替换(保留 Vercel 同步元信息);`merge` 模式会合并 `keys`、`api_keys`、`accounts`、`model_aliases`,并覆盖 `admin`、`runtime`、`responses`、`embeddings` 中的非空字段。`compat`、`auto_delete`、`history_split` 建议通过 `/admin/settings` 或配置文件管理;`toolcall` 相关字段会被忽略。
> `compat` 相关字段请通过 `/admin/settings` 或配置文件管理;该导入接口不会更新 `compat`。
> 注意:`merge` 模式不会更新 `compat`、`auto_delete`、`history_split`。
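merge 模式 "导入值非空才覆盖" 的标量合并规则可以示意为(非项目代码):

```shell
# 示意:merge 模式下标量字段的合并规则(非项目实现)
merge_field() {
  # $1 = 现有配置值,$2 = 导入配置值;导入值为空时保留现有值
  if [ -n "$2" ]; then echo "$2"; else echo "$1"; fi
}

jwt_expire=$(merge_field "24" "")   # 导入未提供该字段 -> 保留现有 24
inflight=$(merge_field "2" "4")     # 导入提供了新值 -> 覆盖为 4
echo "$jwt_expire $inflight"
```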
### `GET /admin/config/export`
导出完整配置,返回 `config`、`json`、`base64` 三种格式。
响应示例:
> 注:`_vercel_sync_hash` 和 `_vercel_sync_time` 为内部同步元数据字段,用于 Vercel 配置漂移检测。
### `POST /admin/keys`
```json
{"key": "new-api-key"}
{"key": "new-api-key", "name": "主 Key", "remark": "生产流量"}
```
**响应**`{"success": true, "total_keys": 3}`
### `PUT /admin/keys/{key}`
更新指定 API key 的 `name` / `remark`,路径参数中的 `key` 为只读标识,不可修改。
```json
{"name": "备用 Key", "remark": "压测"}
```
**响应**`{"success": true, "total_keys": 3}`
@@ -818,6 +847,16 @@ data: {"type":"message_stop"}
**响应**`{"success": true, "total_accounts": 6}`
### `PUT /admin/accounts/{identifier}`
更新指定账号的 `name` / `remark`。路径参数中的 `identifier` 可以是 email 或 mobile,且不可修改。
```json
{"name": "主账号", "remark": "团队共享"}
```
**响应**`{"success": true, "total_accounts": 6}`
### `DELETE /admin/accounts/{identifier}`
`identifier` 可为 email、mobile,或 token-only 账号的合成标识(`token:<hash>`)。
@@ -867,7 +906,7 @@ data: {"type":"message_stop"}
| 字段 | 必填 | 说明 |
| --- | --- | --- |
| `identifier` | ✅ | email / mobile / token-only 合成标识 |
| `model` | ❌ | 默认 `deepseek-chat` |
| `model` | ❌ | 默认 `deepseek-v4-flash` |
| `message` | ❌ | 空字符串时仅测试会话创建 |
**响应**
@@ -878,7 +917,7 @@ data: {"type":"message_stop"}
"success": true,
"response_time": 1240,
"message": "API 测试成功(仅会话创建)",
"model": "deepseek-chat",
"model": "deepseek-v4-flash",
"session_count": 0,
"config_writable": true
}
@@ -948,7 +987,7 @@ data: {"type":"message_stop"}
| 字段 | 必填 | 默认值 |
| --- | --- | --- |
| `model` | ❌ | `deepseek-chat` |
| `model` | ❌ | `deepseek-v4-flash` |
| `message` | ❌ | `你好` |
| `api_key` | ❌ | 配置中第一个 key |
@@ -972,7 +1011,7 @@ data: {"type":"message_stop"}
| --- | --- | --- | --- |
| `message` | 否 | `你好` | 便捷单轮用户消息 |
| `messages` | 否 | 自动由 `message` 生成 | OpenAI 风格消息数组 |
| `model` | 否 | `deepseek-chat` | 目标模型 |
| `model` | 否 | `deepseek-v4-flash` | 目标模型 |
| `stream` | 否 | `true` | 建议保留流式,以记录原始 SSE |
| `api_key` | 否 | 配置中第一个 key | 调用业务接口使用的 key |
| `sample_id` | 否 | 自动生成 | 样本目录名 |
@@ -1182,7 +1221,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat",
"model": "deepseek-v4-flash",
"messages": [{"role": "user", "content": "你好"}],
"stream": false
}'
@@ -1195,7 +1234,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-reasoner",
"model": "deepseek-v4-pro",
"messages": [{"role": "user", "content": "解释一下量子纠缠"}],
"stream": true
}'
@@ -1208,7 +1247,7 @@ curl http://localhost:5001/v1/responses \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-5-codex",
"model": "gpt-5.3-codex",
"input": "写一个 golang 的 hello world",
"stream": true
}'
@@ -1233,7 +1272,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat-search",
"model": "deepseek-v4-flash-search",
"messages": [{"role": "user", "content": "今天的新闻"}],
"stream": true
}'
@@ -1246,7 +1285,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat",
"model": "deepseek-v4-flash",
"messages": [{"role": "user", "content": "北京今天天气怎么样?"}],
"tools": [
{
@@ -1307,7 +1346,7 @@ curl http://localhost:5001/anthropic/v1/messages \
-H "Content-Type: application/json" \
-H "anthropic-version: 2023-06-01" \
-d '{
"model": "claude-sonnet-4-5",
"model": "claude-sonnet-4-6",
"max_tokens": 1024,
"messages": [{"role": "user", "content": "你好"}]
}'
@@ -1344,7 +1383,7 @@ curl http://localhost:5001/v1/chat/completions \
-H "X-Ds2-Target-Account: user@example.com" \
-H "Content-Type: application/json" \
-d '{
"model": "deepseek-chat",
"model": "deepseek-v4-flash",
"messages": [{"role": "user", "content": "你好"}]
}'
```

CODE_OF_CONDUCT.md Normal file

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
cjackhwang@qq.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

Dockerfile

@@ -3,6 +3,7 @@ FROM node:24 AS webui-builder
WORKDIR /app/webui
COPY webui/package.json webui/package-lock.json ./
RUN npm ci
COPY config.example.json /app/config.example.json
COPY webui ./
RUN npm run build

README.MD

@@ -35,25 +35,27 @@ flowchart LR
Client["🖥️ 客户端 / SDK\n(OpenAI / Claude / Gemini)"]
Upstream["☁️ DeepSeek API"]
subgraph DS2API["DS2API 3.x(统一 OpenAI 内核)"]
subgraph DS2API["DS2API 4.x(模块化 HTTP surface + PromptCompat 内核)"]
Router["chi Router + 中间件\n(RequestID / RealIP / Logger / Recoverer / CORS)"]
subgraph Adapters["协议适配层"]
OA["OpenAI\n/v1/*"]
subgraph HTTP["HTTP API surface"]
OA["OpenAI\nchat / responses / files / embeddings"]
CA["Claude\n/anthropic/* + /v1/messages"]
GA["Gemini\n/v1beta/models/* + /v1/models/*"]
Admin["Admin API\n/admin/*"]
Admin["Admin API\n资源子包"]
WebUI["WebUI\n/admin(静态托管)"]
Vercel["Vercel Node Stream\n/v1/chat/completions"]
end
subgraph Runtime["运行时核心能力"]
Bridge["CLIProxy 转换桥\n(多协议 <-> OpenAI)"]
OAEngine["OpenAI ChatCompletions\n(统一工具调用与流式语义)"]
Compat["PromptCompat\n(API -> 网页纯文本上下文)"]
Chat["Chat / Responses Runtime\n(统一工具调用与流式语义)"]
Auth["Auth Resolver\n(API key / bearer / x-goog-api-key)"]
Pool["Account Pool + Queue\n(并发槽位 + 等待队列)"]
DSClient["DeepSeek Client\n(Session / Auth / HTTP)"]
Pow["PoW 实现\n(纯 Go 毫秒级)"]
DSClient["DeepSeek Client\n(Session / Auth / Completion / Files)"]
Pow["PoW 实现\n(纯 Go)"]
Tool["Tool Sieve\n(Go/Node 语义对齐)"]
History["History Split\n(长历史文件化)"]
end
end
@@ -61,19 +63,23 @@ flowchart LR
Router --> OA & CA & GA
Router --> Admin
Router --> WebUI
Router --> Vercel
OA --> OAEngine
CA & GA --> Bridge
Bridge --> OAEngine
OAEngine --> Auth
OAEngine -.账号轮询.-> Pool
OAEngine -.工具调用解析.-> Tool
OAEngine -.PoW 计算.-> Pow
OA --> Compat
CA & GA --> Compat
Compat --> Chat
Compat -.长历史.-> History
Vercel -.Go prepare.-> Chat
Vercel -.Node SSE.-> Tool
Chat --> Auth
Chat -.账号轮询.-> Pool
Chat -.工具调用解析.-> Tool
Chat -.PoW 计算.-> Pow
Auth --> DSClient
DSClient --> Upstream
Upstream --> DSClient
OAEngine --> Bridge
Bridge --> Client
Chat --> Client
Vercel --> Client
```
详细架构拆分与目录职责见 [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md)。
@@ -89,12 +95,13 @@ flowchart LR
| OpenAI 兼容 | `GET /v1/models`、`GET /v1/models/{id}`、`POST /v1/chat/completions`、`POST /v1/responses`、`GET /v1/responses/{response_id}`、`POST /v1/embeddings`、`POST /v1/files` |
| Claude 兼容 | `GET /anthropic/v1/models`、`POST /anthropic/v1/messages`、`POST /anthropic/v1/messages/count_tokens`(及快捷路径 `/v1/messages`、`/messages`) |
| Gemini 兼容 | `POST /v1beta/models/{model}:generateContent`、`POST /v1beta/models/{model}:streamGenerateContent`(及 `/v1/models/{model}:*` 路径) |
| 统一 CORS 兼容 | `/v1/*`、`/anthropic/*`、`/v1beta/models/*`、`/admin/*` 统一走同一套 CORS 策略;Vercel 上 `/v1/chat/completions` 的 Node Runtime 也对齐相同放行规则,尽量减少第三方预检请求头限制 |
| 多账号轮询 | 自动 token 刷新、邮箱/手机号双登录方式 |
| 并发队列控制 | 每账号 in-flight 上限 + 等待队列,动态计算建议并发值 |
| DeepSeek PoW | 纯 Go 高性能实现(DeepSeekHashV1),毫秒级响应 |
| Tool Calling | 防泄漏处理:非代码块高置信特征识别、`delta.tool_calls` 早发、结构化增量输出 |
| Admin API | 配置管理、运行时设置热更新、代理管理、账号测试 / 批量测试、会话清理、导入导出、Vercel 同步、版本检查 |
| WebUI 管理台 | `/admin` 单页应用(中英文双语、深色模式) |
| WebUI 管理台 | `/admin` 单页应用(中英文双语、深色模式,支持查看服务器端对话记录) |
| 运维探针 | `GET /healthz`(存活)、`GET /readyz`(就绪) |
## 平台兼容矩阵
@@ -114,38 +121,32 @@ flowchart LR
| 模型类型 | 模型 ID | thinking | search |
| --- | --- | --- | --- |
| default | `deepseek-chat` | ❌ | ❌ |
| default | `deepseek-reasoner` | ✅ | ❌ |
| default | `deepseek-chat-search` | ❌ | ✅ |
| default | `deepseek-reasoner-search` | ✅ | ✅ |
| expert | `deepseek-expert-chat` | ❌ | ❌ |
| expert | `deepseek-expert-reasoner` | ✅ | ❌ |
| expert | `deepseek-expert-chat-search` | ❌ | ✅ |
| expert | `deepseek-expert-reasoner-search` | ✅ | ✅ |
| vision | `deepseek-vision-chat` | ❌ | ❌ |
| vision | `deepseek-vision-reasoner` | ✅ | ❌ |
| vision | `deepseek-vision-chat-search` | ❌ | ✅ |
| vision | `deepseek-vision-reasoner-search` | ✅ | ✅ |
| default | `deepseek-v4-flash` | 默认开启,可由请求参数控制 | ❌ |
| expert | `deepseek-v4-pro` | 默认开启,可由请求参数控制 | ❌ |
| default | `deepseek-v4-flash-search` | 默认开启,可由请求参数控制 | ✅ |
| expert | `deepseek-v4-pro-search` | 默认开启,可由请求参数控制 | ✅ |
| vision | `deepseek-v4-vision` | 默认开启,可由请求参数控制 | ❌ |
| vision | `deepseek-v4-vision-search` | 默认开启,可由请求参数控制 | ✅ |
除原生模型外,也支持常见 alias 输入(如 `gpt-5`、`gpt-5-mini`、`gpt-5-codex`、`gpt-4.1`、`o3`、`claude-opus-4-6`、`claude-sonnet-4-5`、`gemini-2.5-pro`、`gemini-2.5-flash` 等),但 `/v1/models` 返回的是规范化后的 DeepSeek 原生模型 ID
除原生模型外,也支持常见 alias 输入(如 `gpt-4.1`、`gpt-5`、`gpt-5-codex`、`o3`、`claude-*`、`gemini-*` 等),但 `/v1/models` 返回的是规范化后的 DeepSeek 原生模型 ID。完整 alias 行为以 [API.md](API.md#模型-alias-解析策略) 和 `config.example.json` 为准
### Claude 接口(`GET /anthropic/v1/models`)
| 当前常用模型 | 默认映射 |
| --- | --- |
| `claude-sonnet-4-5` | `deepseek-chat` |
| `claude-haiku-4-5`(兼容 `claude-3-5-haiku-latest` | `deepseek-chat` |
| `claude-opus-4-6` | `deepseek-reasoner` |
| `claude-sonnet-4-6` | `deepseek-v4-flash` |
| `claude-haiku-4-5`(兼容 `claude-3-5-haiku-latest` | `deepseek-v4-flash` |
| `claude-opus-4-6` | `deepseek-v4-pro` |
可通过配置中的 `claude_mapping` 或 `claude_model_mapping` 覆盖映射关系。
`/anthropic/v1/models` 除上述当前主别名外,还会返回 Claude 4.x snapshots,以及 3.x / 2.x / 1.x 历史模型 ID 与常见 alias,便于旧客户端直接兼容。
可通过配置中的 `model_aliases` 覆盖映射关系。
`/anthropic/v1/models` 除上述主别名外,还会返回 Claude 4.x snapshots、3.x 历史模型 ID 与常见 alias,便于旧客户端直接兼容。
#### Claude Code 接入避坑(实测)
- `ANTHROPIC_BASE_URL` 推荐直接指向 DS2API 根地址(例如 `http://127.0.0.1:5001`),Claude Code 会请求 `/v1/messages?beta=true`。
- `ANTHROPIC_API_KEY` 需要与 `config.json` 中 `keys` 一致;建议同时保留常规 key 与 `sk-ant-*` 形态 key,兼容不同客户端校验习惯。
- 若系统设置了代理,建议对 DS2API 地址配置 `NO_PROXY=127.0.0.1,localhost,<你的主机IP>`,避免本地回环请求被代理拦截。
- 如遇“工具调用输出成文本、未执行”问题,请优先检查模型输出是否为受支持的 XML/Markup 工具块(例如 `<tool_call>` / `<function_call>` / `<invoke>` / `tool_use`),而不是纯 JSON `tool_calls` 片段。
- 如遇“工具调用输出成文本、未执行”问题,请优先检查模型输出是否为当前唯一受支持的 XML 工具块:`<tool_calls><invoke name="..."><parameter name="...">...`,而不是旧式 `<tools>` / `<tool_call>` / `<tool_name>` / `<param>`、`<function_call>`、`tool_use`,或纯 JSON `tool_calls` 片段。
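按上述要点,一套最小的 Claude Code 环境变量配置大致如下(地址与 key 为占位值,按实际部署替换):

```shell
# Claude Code 接入 DS2API 的环境变量示例(占位值,非固定配置)
export ANTHROPIC_BASE_URL="http://127.0.0.1:5001"  # 指向 DS2API 根地址
export ANTHROPIC_API_KEY="your-api-key"            # 与 config.json 的 keys 保持一致
# 系统代理开启时,避免本地回环请求被代理拦截:
export NO_PROXY="127.0.0.1,localhost"
echo "ANTHROPIC_BASE_URL=$ANTHROPIC_BASE_URL"
```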
### Gemini 接口
@@ -175,6 +176,8 @@ cp config.example.json config.json
- 本地运行:直接读取 `config.json`
- Docker / Vercel:由 `config.json` 生成 `DS2API_CONFIG_JSON`(Base64)注入环境变量,也可以直接写原始 JSON
WebUI 管理台里的“全量配置模板”也直接复用同一份 `config.example.json`,所以更新示例文件后,前端模板会自动保持一致。
### 方式一:下载 Release 构建包
每次发布 Release 时GitHub Actions 会自动构建多平台二进制包:
@@ -237,7 +240,7 @@ cp config.example.json config.json
base64 < config.json | tr -d '\n'
```
> **流式说明**:`/v1/chat/completions` 在 Vercel 上默认走 `api/chat-stream.js`(Node Runtime),以保证实时 SSE。鉴权、账号选择、会话/PoW 准备仍由 Go 内部 prepare 接口完成;流式响应(含 `tools`)在 Node 侧执行与 Go 对齐的输出组装与防泄漏处理。
> **流式说明**:`/v1/chat/completions` 在 Vercel 上默认走 `api/chat-stream.js`(Node Runtime),以保证实时 SSE。鉴权、账号选择、会话/PoW 准备仍由 Go 内部 prepare 接口完成;流式响应(含 `tools`)在 Node 侧执行与 Go 对齐的输出组装与防泄漏处理。虽然这里只有 OpenAI chat 流式走 Node,但 CORS 放行策略仍与 Go 主路由保持一致,统一覆盖第三方客户端预检场景。
详细部署说明请参阅 [部署指南](docs/DEPLOY.md)。
@@ -266,101 +269,18 @@ go run ./cmd/ds2api
## 配置说明
### `config.json` 示例
`README` 只保留快速入口,完整字段请以 [config.example.json](config.example.json) 为模板,并参考 [部署指南](docs/DEPLOY.md#0-前置要求) 与 [API 配置最佳实践](API.md#配置最佳实践)。
```json
{
"keys": ["your-api-key-1", "your-api-key-2"],
"accounts": [
{
"email": "user@example.com",
"password": "your-password"
},
{
"mobile": "12345678901",
"password": "your-password"
}
],
"model_aliases": {
"gpt-4o": "deepseek-chat",
"gpt-5": "deepseek-chat",
"gpt-5-mini": "deepseek-chat",
"gpt-5-codex": "deepseek-reasoner",
"o3": "deepseek-reasoner",
"claude-opus-4-6": "deepseek-reasoner",
"gemini-2.5-flash": "deepseek-chat"
},
"compat": {
"wide_input_strict_output": true,
"strip_reference_markers": true
},
"responses": {
"store_ttl_seconds": 900
},
"embeddings": {
"provider": "deterministic"
},
"claude_mapping": {
"fast": "deepseek-chat",
"slow": "deepseek-reasoner"
},
"admin": {
"jwt_expire_hours": 24
},
"runtime": {
"account_max_inflight": 2,
"account_max_queue": 0,
"global_max_inflight": 0,
"token_refresh_interval_hours": 6
},
"auto_delete": {
"mode": "none"
}
}
```
常用字段:
- `keys`:API 访问密钥列表,客户端通过 `Authorization: Bearer <key>` 鉴权
- `accounts`:DeepSeek 账号列表,支持 `email` 或 `mobile` 登录
- `token`:配置文件中即使填写也会在加载时被清空(不会从 `config.json` 读取 token),实际 token 仅在运行时内存中维护并自动刷新
- `model_aliases`:常见模型名(如 GPT/Codex/Claude)到 DeepSeek 模型的映射
- `compat.wide_input_strict_output`:建议保持 `true`(当前实现默认宽进严出)
- `compat.strip_reference_markers`:建议保持 `true`,用于清理可见输出中的引用/标记
- `toolcall`:旧字段,当前实现已固定为特征匹配 + 高置信早发;即使保留在配置里也会被忽略
- `responses.store_ttl_seconds`:`/v1/responses/{id}` 的内存缓存 TTL
- `embeddings.provider`:embedding 提供方(当前内置 `deterministic/mock/builtin`)
- `claude_mapping`:字典中 `fast`/`slow` 后缀映射到对应 DeepSeek 模型(兼容读取 `claude_model_mapping`)
- `admin`:管理后台设置(JWT 过期时间、密码哈希等),可通过 Admin Settings API 热更新
- `runtime`:运行时参数(并发限制、队列大小、托管账号 token 刷新间隔),可通过 Admin Settings API 热更新;`account_max_queue=0`/`global_max_inflight=0` 表示按推荐值自动计算,`token_refresh_interval_hours=6` 为默认强制重登间隔
- `auto_delete.mode`:请求结束后如何清理 DeepSeek 远端聊天记录,支持 `none`(默认,不删除)、`single`(仅删除当前会话)、`all`(清空全部会话);旧配置里的 `auto_delete.sessions=true` 仍会被视为 `all`
- `keys` / `api_keys`:客户端访问密钥,`api_keys` 支持 `name` 与 `remark` 元信息,`keys` 继续兼容。
- `accounts`DeepSeek 托管账号,支持 `email` 或 `mobile` 登录,可配置代理、名称和备注。
- `model_aliases`OpenAI / Claude / Gemini 共用的模型 alias 映射。
- `runtime`:账号并发、队列与 token 刷新策略,可通过 Admin Settings 热更新。
- `auto_delete.mode`:请求结束后的远端会话清理策略,支持 `none` / `single` / `all`。
- `history_split`:多轮历史拆分策略,已全局强制开启;可调整触发阈值,避免长历史全部内联进 prompt。
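结合上面的字段,一个最小可用的 4.x `config.json` 大致如下(各项取值均为示例,完整字段与默认值请以 `config.example.json` 为准):

```json
{
  "api_keys": [
    {"key": "your-api-key", "name": "主 Key", "remark": "生产流量"}
  ],
  "accounts": [
    {"email": "user@example.com", "password": "your-password"}
  ],
  "model_aliases": {
    "claude-sonnet-4-6": "deepseek-v4-flash",
    "claude-opus-4-6": "deepseek-v4-pro"
  },
  "runtime": {"account_max_inflight": 2},
  "auto_delete": {"mode": "none"},
  "history_split": {"trigger_after_turns": 20}
}
```

其中 `history_split.trigger_after_turns` 的阈值 `20` 仅为示例;`enabled` 已全局强制开启,无需在配置中声明。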
### 环境变量
| 变量 | 用途 | 默认值 |
| --- | --- | --- |
| `PORT` | 服务端口 | `5001` |
| `LOG_LEVEL` | 日志级别 | `INFO`(可选:`DEBUG`/`WARN`/`ERROR`) |
| `DS2API_ADMIN_KEY` | Admin 登录密钥 | `admin` |
| `DS2API_JWT_SECRET` | Admin JWT 签名密钥 | 等同 `DS2API_ADMIN_KEY` |
| `DS2API_JWT_EXPIRE_HOURS` | Admin JWT 过期小时数 | `24` |
| `DS2API_CONFIG_PATH` | 配置文件路径 | `config.json` |
| `DS2API_CONFIG_JSON` | 直接注入配置JSON 或 Base64 | — |
| `DS2API_ENV_WRITEBACK` | 环境变量模式下自动写回配置文件并切换文件模式(`1/true/yes/on`) | 关闭 |
| `DS2API_STATIC_ADMIN_DIR` | 管理台静态文件目录 | `static/admin` |
| `DS2API_AUTO_BUILD_WEBUI` | 启动时自动构建 WebUI | 本地开启Vercel 关闭 |
| `DS2API_DEV_PACKET_CAPTURE` | 本地开发抓包开关(记录最近会话请求/响应体) | 本地非 Vercel 默认开启 |
| `DS2API_DEV_PACKET_CAPTURE_LIMIT` | 本地抓包保留条数(超出自动淘汰) | `20` |
| `DS2API_DEV_PACKET_CAPTURE_MAX_BODY_BYTES` | 单条响应体最大记录字节数 | `5242880` |
| `DS2API_ACCOUNT_MAX_INFLIGHT` | 每账号最大并发 in-flight 请求数 | `2` |
| `DS2API_ACCOUNT_MAX_QUEUE` | 等待队列上限 | `recommended_concurrency` |
| `DS2API_GLOBAL_MAX_INFLIGHT` | 全局最大 in-flight 请求数 | `recommended_concurrency` |
| `DS2API_VERCEL_INTERNAL_SECRET` | Vercel 混合流式内部鉴权密钥 | 回退用 `DS2API_ADMIN_KEY` |
| `DS2API_VERCEL_STREAM_LEASE_TTL_SECONDS` | 流式 lease 过期秒数 | `900` |
| `VERCEL_TOKEN` | Vercel 同步 token | — |
| `VERCEL_PROJECT_ID` | Vercel 项目 ID | — |
| `VERCEL_TEAM_ID` | Vercel 团队 ID | — |
| `DS2API_VERCEL_PROTECTION_BYPASS` | Vercel 部署保护绕过密钥(内部 Node→Go 调用) | — |
> 提示:当检测到 `DS2API_CONFIG_JSON` 时,管理台会显示当前模式风险与自动持久化状态(含 `DS2API_CONFIG_PATH` 路径与模式切换说明)。
环境变量完整列表见 [部署指南](docs/DEPLOY.md),接口鉴权规则见 [API.md](API.md#鉴权规则)。
## 鉴权模式
@@ -392,7 +312,7 @@ Gemini 路由还可以使用 `x-goog-api-key`,或在没有认证头时使用 `
当请求中带 `tools` 时,DS2API 会做防泄漏处理与结构化转译:
1. 只在**非代码块上下文**启用执行型 toolcall 识别(代码块示例默认不触发)
2. 解析层当前以 XML/Markup 家族为准(`<tool_call>` / `<function_call>` / `<invoke>` / `tool_use` / antml 变体);纯 JSON `tool_calls` 片段默认不作为可执行调用解析
2. 解析层当前只把 canonical XML 工具块视为可执行调用:`<tool_calls>` → `<invoke name="...">` → `<parameter name="...">`;旧式 `<tools>` / `<tool_call>` / `<tool_name>` / `<param>`、`<function_call>`、`tool_use` / antml 变体,以及纯 JSON `tool_calls` 片段都会按普通文本处理
3. `responses` 流式严格使用官方 item 生命周期事件(`response.output_item.*`、`response.content_part.*`、`response.function_call_arguments.*`)
4. `responses` 支持并执行 `tool_choice`(`auto`/`none`/`required`/强制函数);`required` 违规时非流式返回 `422`,流式返回 `response.failed`
5. 客户端请求哪种协议就按该协议返回工具调用(OpenAI/Claude/Gemini 各自原生结构);模型侧优先约束输出规范 XML,再由兼容层转译
@@ -443,44 +363,18 @@ go run ./cmd/ds2api
## 测试
```bash
# 单元测试Go + Node
./tests/scripts/run-unit-all.sh
# 一键端到端全链路测试(真实账号,生成完整请求/响应日志)
./tests/scripts/run-live.sh
# 或自定义参数
go run ./cmd/ds2api-tests \
--config config.json \
--admin-key admin \
--out artifacts/testsuite \
--timeout 120 \
--retries 2
```
```bash
# 发布前阻断门禁
./tests/scripts/check-stage6-manual-smoke.sh
./tests/scripts/check-refactor-line-gate.sh
./tests/scripts/run-unit-all.sh
npm ci --prefix webui && npm run build --prefix webui
```
## 测试
详细测试指南请参阅 [docs/TESTING.md](docs/TESTING.md)。
### 快速测试命令
```bash
# 运行所有单元测试
go test ./...
# 本地 PR 门禁
./scripts/lint.sh
./tests/scripts/check-refactor-line-gate.sh
./tests/scripts/run-unit-all.sh
npm run build --prefix webui
# 运行 tool calls 相关测试(调试工具调用问题)
go test -v -run 'TestParseToolCalls|TestRepair' ./internal/toolcall/
# 运行端到端测试
# 端到端全链路测试(真实账号,生成完整请求/响应日志)
./tests/scripts/run-live.sh
```
@@ -491,7 +385,7 @@ go test -v -run 'TestParseToolCalls|TestRepair' ./internal/toolcall/
- **触发条件**:仅在 GitHub Release `published` 时触发(普通 push 不会触发)
- **构建产物**:多平台二进制包(`linux/amd64`、`linux/arm64`、`darwin/amd64`、`darwin/arm64`、`windows/amd64`)+ `sha256sums.txt`
- **容器镜像发布**:仅推送到 GHCR(`ghcr.io/cjackhwang/ds2api`)
- **每个压缩包包含**:`ds2api` 可执行文件、`static/admin`、WASM 文件(同时支持内置 fallback)、配置示例、README、LICENSE
- **每个压缩包包含**:`ds2api` 可执行文件、`static/admin`、WASM 文件(同时支持内置 fallback)、`config.example.json` 配置示例、README、LICENSE
## 免责声明

README.en.md

@@ -33,25 +33,27 @@ flowchart LR
Client["🖥️ Clients / SDKs\n(OpenAI / Claude / Gemini)"]
Upstream["☁️ DeepSeek API"]
subgraph DS2API["DS2API 3.x (Unified OpenAI Core)"]
subgraph DS2API["DS2API 4.x (Modular HTTP Surface + PromptCompat Core)"]
Router["chi Router + Middleware\n(RequestID / RealIP / Logger / Recoverer / CORS)"]
subgraph Adapters["Protocol Adapters"]
OA["OpenAI\n/v1/*"]
subgraph HTTP["HTTP API Surface"]
OA["OpenAI\nchat / responses / files / embeddings"]
CA["Claude\n/anthropic/* + /v1/messages"]
GA["Gemini\n/v1beta/models/* + /v1/models/*"]
Admin["Admin API\n/admin/*"]
Admin["Admin API\nresource packages"]
WebUI["WebUI\n/admin (static hosting)"]
Vercel["Vercel Node Stream\n/v1/chat/completions"]
end
subgraph Runtime["Runtime + Core Capabilities"]
Bridge["CLIProxy Bridge\n(multi-protocol <-> OpenAI)"]
OAEngine["OpenAI ChatCompletions\n(unified tools + stream semantics)"]
Compat["PromptCompat\n(API -> web-chat plain text context)"]
Chat["Chat / Responses Runtime\n(unified tools + stream semantics)"]
Auth["Auth Resolver\n(API key / bearer / x-goog-api-key)"]
Pool["Account Pool + Queue\n(in-flight slots + wait queue)"]
DSClient["DeepSeek Client\n(session / auth / HTTP)"]
Pow["PoW Solver\n(Pure Go ms-level)"]
DSClient["DeepSeek Client\n(session / auth / completion / files)"]
Pow["PoW Solver\n(Pure Go)"]
Tool["Tool Sieve\n(Go/Node semantic parity)"]
History["History Split\n(long history as files)"]
end
end
@@ -59,19 +61,23 @@ flowchart LR
Router --> OA & CA & GA
Router --> Admin
Router --> WebUI
Router --> Vercel
OA --> OAEngine
CA & GA --> Bridge
Bridge --> OAEngine
OAEngine --> Auth
OAEngine -.account rotation.-> Pool
OAEngine -.tool-call parsing.-> Tool
OAEngine -.PoW solving.-> Pow
OA --> Compat
CA & GA --> Compat
Compat --> Chat
Compat -.long history.-> History
Vercel -.Go prepare.-> Chat
Vercel -.Node SSE.-> Tool
Chat --> Auth
Chat -.account rotation.-> Pool
Chat -.tool-call parsing.-> Tool
Chat -.PoW solving.-> Pow
Auth --> DSClient
DSClient --> Upstream
Upstream --> DSClient
OAEngine --> Bridge
Bridge --> Client
Chat --> Client
Vercel --> Client
```
For the full module-by-module architecture and directory responsibilities, see [docs/ARCHITECTURE.en.md](docs/ARCHITECTURE.en.md).
@@ -87,12 +93,13 @@ For the full module-by-module architecture and directory responsibilities, see [
| OpenAI compatible | `GET /v1/models`, `GET /v1/models/{id}`, `POST /v1/chat/completions`, `POST /v1/responses`, `GET /v1/responses/{response_id}`, `POST /v1/embeddings`, `POST /v1/files` |
| Claude compatible | `GET /anthropic/v1/models`, `POST /anthropic/v1/messages`, `POST /anthropic/v1/messages/count_tokens` (plus shortcut paths `/v1/messages`, `/messages`) |
| Gemini compatible | `POST /v1beta/models/{model}:generateContent`, `POST /v1beta/models/{model}:streamGenerateContent` (plus `/v1/models/{model}:*` paths) |
| Unified CORS compatibility | `/v1/*`, `/anthropic/*`, `/v1beta/models/*`, and `/admin/*` share one CORS policy; on Vercel, the Node Runtime for `/v1/chat/completions` mirrors the same relaxed preflight behavior for third-party clients |
| Multi-account rotation | Auto token refresh, email/mobile dual login |
| Concurrency control | Per-account in-flight limit + waiting queue, dynamic recommended concurrency |
| DeepSeek PoW | Pure Go high-performance solver (DeepSeekHashV1), ms-level response |
| Tool Calling | Anti-leak handling: non-code-block feature match, early `delta.tool_calls`, structured incremental output |
| Admin API | Config management, runtime settings hot-reload, proxy management, account testing/batch test, session cleanup, import/export, Vercel sync, version check |
| WebUI Admin Panel | SPA at `/admin` (bilingual Chinese/English, dark mode) |
| WebUI Admin Panel | SPA at `/admin` (bilingual Chinese/English, dark mode, with server-side conversation history) |
| Health Probes | `GET /healthz` (liveness), `GET /readyz` (readiness) |
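Both probes can be checked from the command line; the host and port below assume the default local deployment:

```bash
curl -fsS http://127.0.0.1:5001/healthz   # liveness: process is up
curl -fsS http://127.0.0.1:5001/readyz    # readiness: ready to serve traffic
```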
## Platform Compatibility Matrix
@@ -112,38 +119,32 @@ For the full module-by-module architecture and directory responsibilities, see [
| Family | Model ID | thinking | search |
| --- | --- | --- | --- |
| default | `deepseek-chat` | ❌ | ❌ |
| default | `deepseek-reasoner` | ✅ | ❌ |
| default | `deepseek-chat-search` | ❌ | ✅ |
| default | `deepseek-reasoner-search` | ✅ | ✅ |
| expert | `deepseek-expert-chat` | ❌ | ❌ |
| expert | `deepseek-expert-reasoner` | ✅ | ❌ |
| expert | `deepseek-expert-chat-search` | ❌ | ✅ |
| expert | `deepseek-expert-reasoner-search` | ✅ | ✅ |
| vision | `deepseek-vision-chat` | ❌ | ❌ |
| vision | `deepseek-vision-reasoner` | ✅ | ❌ |
| vision | `deepseek-vision-chat-search` | ❌ | ✅ |
| vision | `deepseek-vision-reasoner-search` | ✅ | ✅ |
| default | `deepseek-v4-flash` | enabled by default, request-controlled | ❌ |
| expert | `deepseek-v4-pro` | enabled by default, request-controlled | ❌ |
| default | `deepseek-v4-flash-search` | enabled by default, request-controlled | ✅ |
| expert | `deepseek-v4-pro-search` | enabled by default, request-controlled | ✅ |
| vision | `deepseek-v4-vision` | enabled by default, request-controlled | ❌ |
| vision | `deepseek-v4-vision-search` | enabled by default, request-controlled | ✅ |
Besides native IDs, DS2API also accepts common aliases as input (for example `gpt-5`, `gpt-5-mini`, `gpt-5-codex`, `gpt-4.1`, `o3`, `claude-opus-4-6`, `claude-sonnet-4-5`, `gemini-2.5-pro`, `gemini-2.5-flash`), but `/v1/models` returns normalized DeepSeek native model IDs.
Besides native IDs, DS2API also accepts common aliases as input (for example `gpt-4.1`, `gpt-5`, `gpt-5-codex`, `o3`, `claude-*`, `gemini-*`), but `/v1/models` returns normalized DeepSeek native model IDs. The complete alias behavior is documented in [API.en.md](API.en.md#model-alias-resolution) and `config.example.json`.
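As a quick sketch (base URL and key are placeholders), an aliased request and the normalized model list look like:

```bash
# An alias such as `gpt-5` is accepted on input...
curl -s http://127.0.0.1:5001/v1/chat/completions \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5","messages":[{"role":"user","content":"ping"}]}'

# ...but /v1/models only ever lists normalized DeepSeek native IDs.
curl -s http://127.0.0.1:5001/v1/models -H "Authorization: Bearer your-api-key"
```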
### Claude Endpoint (`GET /anthropic/v1/models`)
| Current common model | Default Mapping |
| --- | --- |
| `claude-sonnet-4-5` | `deepseek-chat` |
| `claude-haiku-4-5` (compatible with `claude-3-5-haiku-latest`) | `deepseek-chat` |
| `claude-opus-4-6` | `deepseek-reasoner` |
| `claude-sonnet-4-6` | `deepseek-v4-flash` |
| `claude-haiku-4-5` (compatible with `claude-3-5-haiku-latest`) | `deepseek-v4-flash` |
| `claude-opus-4-6` | `deepseek-v4-pro` |
Override mapping via `claude_mapping` or `claude_model_mapping` in config.
Besides the current primary aliases above, `/anthropic/v1/models` also returns Claude 4.x snapshots plus historical 3.x / 2.x / 1.x IDs and common aliases for legacy client compatibility.
Override mapping via the global `model_aliases` config.
Besides the primary aliases above, `/anthropic/v1/models` also returns Claude 4.x snapshots plus historical 3.x IDs and common aliases for legacy client compatibility.
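As a sketch, a `model_aliases` override for the Claude names above might look like this (target IDs taken from the mapping table; adjust as needed):

```json
{
  "model_aliases": {
    "claude-sonnet-4-6": "deepseek-v4-flash",
    "claude-opus-4-6": "deepseek-v4-pro"
  }
}
```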
#### Claude Code integration pitfalls (validated)
- Set `ANTHROPIC_BASE_URL` to the DS2API root URL (for example `http://127.0.0.1:5001`). Claude Code sends requests to `/v1/messages?beta=true`.
- `ANTHROPIC_API_KEY` must match an entry in `keys` from `config.json`. Keeping both a regular key and an `sk-ant-*` style key improves client compatibility.
- If your environment has proxy variables, set `NO_PROXY=127.0.0.1,localhost,<your_host_ip>` for DS2API to avoid proxy interception of local traffic.
- If tool calls are rendered as plain text and not executed, first verify the model output uses supported XML/Markup tool blocks (`<tool_call>` / `<function_call>` / `<invoke>` / `tool_use`) rather than standalone JSON `tool_calls`.
- If tool calls are rendered as plain text and not executed, first verify the model output uses the only supported XML block: `<tool_calls><invoke name="..."><parameter name="...">...`, not legacy `<tools>` / `<tool_call>` / `<tool_name>` / `<param>`, `<function_call>`, `tool_use`, or standalone JSON `tool_calls`.
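A minimal environment sketch for the points above (the key value is a placeholder and must exist in `keys`):

```bash
export ANTHROPIC_BASE_URL="http://127.0.0.1:5001"   # DS2API root URL
export ANTHROPIC_API_KEY="sk-ant-your-key"          # must match an entry in config.json `keys`
export NO_PROXY="127.0.0.1,localhost"               # keep local traffic off the proxy
claude                                              # Claude Code then calls /v1/messages?beta=true
```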
### Gemini Endpoint
@@ -173,6 +174,8 @@ Recommended per deployment mode:
- Local run: read `config.json` directly
- Docker / Vercel: generate Base64 from `config.json` and inject as `DS2API_CONFIG_JSON`, or paste raw JSON directly
The WebUI admin panel's “Full configuration template” is loaded from the same `config.example.json`, so updating that file keeps the frontend template in sync.
### Option 1: Download Release Binaries
GitHub Actions automatically builds multi-platform archives on each Release:
@@ -235,7 +238,7 @@ Recommended: convert `config.json` to Base64 locally, then paste into `DS2API_CO
base64 < config.json | tr -d '\n'
```
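The pipeline above can be sanity-checked with a round-trip before pasting the value into `DS2API_CONFIG_JSON` (the sample payload here is illustrative):

```bash
# Encode a config payload the same way as above, then decode and compare.
original='{"keys":["your-api-key-1"]}'
encoded=$(printf '%s' "$original" | base64 | tr -d '\n')
decoded=$(printf '%s' "$encoded" | base64 -d)
[ "$decoded" = "$original" ] && echo "round-trip OK"
```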
> **Streaming note**: `/v1/chat/completions` on Vercel is routed to `api/chat-stream.js` (Node Runtime) for real-time SSE. Auth, account selection, and session/PoW preparation are still handled by the Go internal prepare endpoint; streaming output (including `tools`) is assembled on Node with Go-aligned anti-leak handling.
> **Streaming note**: `/v1/chat/completions` on Vercel is routed to `api/chat-stream.js` (Node Runtime) for real-time SSE. Auth, account selection, and session/PoW preparation are still handled by the Go internal prepare endpoint; streaming output (including `tools`) is assembled on Node with Go-aligned anti-leak handling. This is the only interface family currently routed through Node, and its CORS allow behavior is kept aligned with the Go router so third-party preflight handling stays unified.
For detailed deployment instructions, see the [Deployment Guide](docs/DEPLOY.en.md).
@@ -264,101 +267,18 @@ The server actually binds to `0.0.0.0:5001`, so devices on the same LAN can usua
## Configuration
### `config.json` Example
`README` keeps only the onboarding path. Use [config.example.json](config.example.json) as the field template, and see the [deployment guide](docs/DEPLOY.en.md#0-prerequisites) plus [API configuration notes](API.en.md#configuration-best-practice) for full details.
```json
{
"keys": ["your-api-key-1", "your-api-key-2"],
"accounts": [
{
"email": "user@example.com",
"password": "your-password"
},
{
"mobile": "12345678901",
"password": "your-password"
}
],
"model_aliases": {
"gpt-4o": "deepseek-chat",
"gpt-5": "deepseek-chat",
"gpt-5-mini": "deepseek-chat",
"gpt-5-codex": "deepseek-reasoner",
"o3": "deepseek-reasoner",
"claude-opus-4-6": "deepseek-reasoner",
"gemini-2.5-flash": "deepseek-chat"
},
"compat": {
"wide_input_strict_output": true,
"strip_reference_markers": true
},
"responses": {
"store_ttl_seconds": 900
},
"embeddings": {
"provider": "deterministic"
},
"claude_mapping": {
"fast": "deepseek-chat",
"slow": "deepseek-reasoner"
},
"admin": {
"jwt_expire_hours": 24
},
"runtime": {
"account_max_inflight": 2,
"account_max_queue": 0,
"global_max_inflight": 0,
"token_refresh_interval_hours": 6
},
"auto_delete": {
"mode": "none"
}
}
```
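Given this config, a client authenticates with one of the `keys`; a hedged example against the embeddings endpoint (base URL and model are placeholders):

```bash
curl -s http://127.0.0.1:5001/v1/embeddings \
  -H "Authorization: Bearer your-api-key-1" \
  -H "Content-Type: application/json" \
  -d '{"model":"deepseek-chat","input":["hello world"]}'
```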
Common fields:
- `keys`: API access keys; clients authenticate via `Authorization: Bearer <key>`
- `accounts`: DeepSeek account list, supports `email` or `mobile` login
- `token`: Even if set in `config.json`, it is cleared during load (DS2API does not read persisted tokens from config); runtime tokens are maintained/refreshed in memory only
- `model_aliases`: Map common model names (GPT/Codex/Claude) to DeepSeek models
- `compat.wide_input_strict_output`: Keep `true` (current default policy)
- `compat.strip_reference_markers`: Keep `true`; it strips reference markers from visible output
- `toolcall`: Legacy field; the current behavior is fixed to feature matching + high-confidence early emit, and any config value is ignored
- `responses.store_ttl_seconds`: In-memory TTL for `/v1/responses/{id}`
- `embeddings.provider`: Embeddings provider (`deterministic/mock/builtin` built-in)
- `claude_mapping`: Maps `fast`/`slow` suffixes to corresponding DeepSeek models (still compatible with `claude_model_mapping`)
- `admin`: Admin panel settings (JWT expiry, password hash, etc.), hot-reloadable via Admin Settings API
- `runtime`: Runtime parameters (concurrency limits, queue sizes, managed token refresh interval), hot-reloadable via Admin Settings API; `account_max_queue=0`/`global_max_inflight=0` means auto-calculate from recommended values, `token_refresh_interval_hours=6` is the default forced re-login interval
- `auto_delete.mode`: How to clean up DeepSeek remote chat records after each request completes. Supported values: `none` (default, no deletion), `single` (delete only the current session), `all` (delete all sessions); legacy `auto_delete.sessions=true` is still treated as `all`
- `keys` / `api_keys`: client API keys; `api_keys` adds `name` and `remark` metadata while `keys` remains compatible.
- `accounts`: managed DeepSeek accounts, supporting `email` or `mobile` login plus proxy/name/remark metadata.
- `model_aliases`: one shared alias map for OpenAI / Claude / Gemini model names.
- `runtime`: account concurrency, queueing, and token refresh behavior, hot-reloadable via Admin Settings.
- `auto_delete.mode`: remote session cleanup after each request, supporting `none` / `single` / `all`.
- `history_split`: multi-turn history split policy, now forced on globally; tune its trigger threshold to avoid inlining all long history into the prompt.
### Environment Variables
| Variable | Purpose | Default |
| --- | --- | --- |
| `PORT` | Service port | `5001` |
| `LOG_LEVEL` | Log level | `INFO` (`DEBUG`/`WARN`/`ERROR`) |
| `DS2API_ADMIN_KEY` | Admin login key | `admin` |
| `DS2API_JWT_SECRET` | Admin JWT signing secret | Same as `DS2API_ADMIN_KEY` |
| `DS2API_JWT_EXPIRE_HOURS` | Admin JWT TTL in hours | `24` |
| `DS2API_CONFIG_PATH` | Config file path | `config.json` |
| `DS2API_CONFIG_JSON` | Inline config (JSON or Base64) | — |
| `DS2API_ENV_WRITEBACK` | Auto-write env-backed config to file and transition to file mode (`1/true/yes/on`) | Disabled |
| `DS2API_STATIC_ADMIN_DIR` | Admin static assets dir | `static/admin` |
| `DS2API_AUTO_BUILD_WEBUI` | Auto-build WebUI on startup | Enabled locally, disabled on Vercel |
| `DS2API_ACCOUNT_MAX_INFLIGHT` | Max in-flight requests per account | `2` |
| `DS2API_ACCOUNT_MAX_QUEUE` | Waiting queue limit | `recommended_concurrency` |
| `DS2API_GLOBAL_MAX_INFLIGHT` | Global max in-flight requests | `recommended_concurrency` |
| `DS2API_VERCEL_INTERNAL_SECRET` | Vercel hybrid streaming internal auth | Falls back to `DS2API_ADMIN_KEY` |
| `DS2API_VERCEL_STREAM_LEASE_TTL_SECONDS` | Stream lease TTL seconds | `900` |
| `DS2API_DEV_PACKET_CAPTURE` | Local dev packet capture switch (record recent request/response bodies) | Enabled by default on non-Vercel local runtime |
| `DS2API_DEV_PACKET_CAPTURE_LIMIT` | Number of captured sessions to retain (auto-evict overflow) | `20` |
| `DS2API_DEV_PACKET_CAPTURE_MAX_BODY_BYTES` | Max recorded bytes per captured response body | `5242880` |
| `VERCEL_TOKEN` | Vercel sync token | — |
| `VERCEL_PROJECT_ID` | Vercel project ID | — |
| `VERCEL_TEAM_ID` | Vercel team ID | — |
| `DS2API_VERCEL_PROTECTION_BYPASS` | Vercel deployment protection bypass for internal Node→Go calls | — |
> Note: when `DS2API_CONFIG_JSON` is detected, the Admin UI shows mode risk and auto-persistence status (including `DS2API_CONFIG_PATH` and mode-transition hints).
For the full environment variable list, see [docs/DEPLOY.en.md](docs/DEPLOY.en.md). For auth behavior, see [API.en.md](API.en.md#authentication).
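A minimal launch sketch combining the common variables (all values are placeholders):

```bash
PORT=5001 \
LOG_LEVEL=DEBUG \
DS2API_ADMIN_KEY=change-me \
DS2API_CONFIG_PATH=./config.json \
./ds2api
```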
## Authentication Modes
@@ -390,7 +310,7 @@ Queue limit = DS2API_ACCOUNT_MAX_QUEUE (default = recommended concurrency)
When `tools` is present in the request, DS2API performs anti-leak handling:
1. Toolcall feature matching is enabled only in **non-code-block context** (fenced examples are ignored)
2. The parser currently targets XML/Markup-family tool syntax (`<tool_call>` / `<function_call>` / `<invoke>` / `tool_use` / antml variants); standalone JSON `tool_calls` payloads are not treated as executable calls by default
2. The parser now treats only the canonical XML wrapper as executable tool-calling syntax: `<tool_calls>` `<invoke name="...">``<parameter name="...">`; legacy `<tools>` / `<tool_call>` / `<tool_name>` / `<param>`, `<function_call>`, `tool_use`, antml variants, and standalone JSON `tool_calls` payloads are treated as plain text
3. `responses` streaming strictly uses official item lifecycle events (`response.output_item.*`, `response.content_part.*`, `response.function_call_arguments.*`)
4. `responses` supports and enforces `tool_choice` (`auto`/`none`/`required`/forced function); `required` violations return `422` for non-stream and `response.failed` for stream
5. The output protocol follows the client request (OpenAI / Claude / Gemini native shapes); model-side prompting can prefer XML, and the compatibility layer handles the protocol-specific translation
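For reference, the canonical executable shape looks like the following sketch (tool and parameter names are made up for illustration):

```text
<tool_calls>
<invoke name="get_weather">
<parameter name="city">Shanghai</parameter>
</invoke>
</tool_calls>
```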
@@ -439,28 +359,19 @@ The save endpoint can target a chain by `query`, `chain_key`, or `capture_id`. E
## Testing
```bash
# Unit tests (Go + Node)
./tests/scripts/run-unit-all.sh
# For the full testing guide, see docs/TESTING.md
# One-command live end-to-end tests (real accounts, full request/response logs)
./tests/scripts/run-live.sh
# Or with custom flags
go run ./cmd/ds2api-tests \
--config config.json \
--admin-key admin \
--out artifacts/testsuite \
--timeout 120 \
--retries 2
```
Quick commands:
```bash
# Release-blocking gates
./tests/scripts/check-stage6-manual-smoke.sh
# Local PR gates
./scripts/lint.sh
./tests/scripts/check-refactor-line-gate.sh
./tests/scripts/run-unit-all.sh
npm ci --prefix webui && npm run build --prefix webui
npm run build --prefix webui
# Live end-to-end tests (real accounts, full request/response logs)
./tests/scripts/run-live.sh
```
## Release Artifact Automation (GitHub Actions)
@@ -470,7 +381,7 @@ Workflow: `.github/workflows/release-artifacts.yml`
- **Trigger**: only on GitHub Release `published` (normal pushes do not trigger builds)
- **Outputs**: multi-platform archives (`linux/amd64`, `linux/arm64`, `darwin/amd64`, `darwin/arm64`, `windows/amd64`) + `sha256sums.txt`
- **Container publishing**: GHCR only (`ghcr.io/cjackhwang/ds2api`)
- **Each archive includes**: `ds2api` executable, `static/admin`, WASM file (with embedded fallback support), config template, README, LICENSE
- **Each archive includes**: `ds2api` executable, `static/admin`, WASM file (with embedded fallback support), `config.example.json`-based config template, README, LICENSE
## Disclaimer

SECURITY.md Normal file

@@ -0,0 +1,65 @@
# Security Policy
## Supported Versions
**Only the latest version** receives security updates.
If you are using an older version, please upgrade to the latest release.
| Version | Supported |
| -------------- | ------------------ |
| latest | :white_check_mark: |
| < latest | :x: |
> **Why?** This project is maintained by a single developer. Keeping only one active version ensures fast response times and avoids legacy maintenance overhead.
## What is a Security Vulnerability?
A **security vulnerability** is a bug that can be exploited to compromise:
- Data confidentiality (e.g., leaking secrets, user data)
- Data integrity (e.g., unauthorized modification)
- System availability (e.g., remote crash, denial of service)
- Privilege escalation (e.g., normal user gains admin rights)
**Examples**: SQL injection, command injection, path traversal, authentication bypass, insecure deserialization, sensitive data exposure.
**What is NOT a security vulnerability?**
Regular bugs like crashes (without exploit potential), incorrect return values, performance issues, missing features, or documentation typos. Please report those via **GitHub Issues** publicly.
## Reporting a Vulnerability
If you believe you have found a security vulnerability, **please do NOT open a public issue**.
Instead, send an email to: **cjackhwang@qq.com**
Please include as much as possible:
- A clear description of the issue
- Steps to reproduce (code / input / environment)
- Potential impact (what could an attacker do?)
- Suggested fix (if any)
You can expect:
- **Initial response** within 3 business days (acknowledgment)
- **Confirmation or clarification** within 7 days
- **Fix or decision** within 14 days (depending on complexity)
## What to Expect After Reporting
| Outcome | What happens |
| ------------------ | ------------- |
| **Accepted** | I will develop a fix, release a patch version, and may credit you in the release notes (unless you prefer anonymity). |
| **Declined** | I will explain why (e.g., not a security issue, already fixed, out of scope, or requires a larger redesign). |
| **Need more info** | I will ask follow-up questions. If no response within 14 days, the report may be considered stale. |
## Disclosure Policy
- Vulnerabilities will be **fixed privately** and then released as a new version.
- After the fix is released, I will typically publish a short security advisory (via GitHub Security Advisories) without revealing exploit details.
- Public disclosure can be coordinated if you request it.
## Recognition
I appreciate security researchers who follow responsible disclosure. Contributors who report valid, previously unknown vulnerabilities may be acknowledged in the project's README or release notes (unless they prefer to stay anonymous).
---
*Thank you for helping keep this project safe!*


@@ -1 +1 @@
3.5.2
4.0.0


@@ -5,14 +5,29 @@
"your-api-key-1",
"your-api-key-2"
],
"api_keys": [
{
"key": "your-api-key-1",
"name": "主 API Key",
"remark": "给 OpenAI 客户端使用"
},
{
"key": "your-api-key-2",
"name": "备用 API Key",
"remark": "压测或临时调试"
}
],
"accounts": [
{
"_comment": "邮箱登录方式",
"name": "主账号",
"remark": "优先用于生产流量",
"email": "example1@example.com",
"password": "your-password-1"
},
{
"_comment": "邮箱登录方式 - 账号2",
"name": "备用账号",
"email": "example2@example.com",
"password": "your-password-2"
},
@@ -23,9 +38,10 @@
}
],
"model_aliases": {
"gpt-4o": "deepseek-chat",
"gpt-5-codex": "deepseek-reasoner",
"o3": "deepseek-reasoner"
"gpt-4o": "deepseek-v4-flash",
"gpt-5.5": "deepseek-v4-flash",
"gpt-5.3-codex": "deepseek-v4-pro",
"o3": "deepseek-v4-pro"
},
"compat": {
"wide_input_strict_output": true,
@@ -34,13 +50,13 @@
"responses": {
"store_ttl_seconds": 900
},
"history_split": {
"enabled": true,
"trigger_after_turns": 1
},
"embeddings": {
"provider": "deterministic"
},
"claude_mapping": {
"fast": "deepseek-chat",
"slow": "deepseek-reasoner"
},
"admin": {
"jwt_expire_hours": 24
},


@@ -4,9 +4,9 @@ Language: [中文](ARCHITECTURE.md) | [English](ARCHITECTURE.en.md)
> This file is the single architecture source for directory layout, module boundaries, and execution flow.
## 1. Top-level Layout (expanded)
## 1. Top-level Layout (core directories)
> Notes: this is the **fully expanded** project directory list (excluding metadata/dependency dirs such as `.git/` and `webui/node_modules/`), with each folder annotated by purpose.
> Notes: this lists the main business directories (excluding metadata/dependency dirs such as `.git/` and `webui/node_modules/`), with each folder annotated by purpose. Newly added directories should be verified from the code tree rather than treated as a per-file inventory here.
```text
ds2api/
@@ -21,34 +21,46 @@ ds2api/
├── docs/ # Project documentation
├── internal/ # Core implementation (non-public packages)
│ ├── account/ # Account pool, inflight slots, waiting queue
│ ├── adapter/ # Multi-protocol adapters
│ │ ├── claude/ # Claude protocol adapter
│ │ ├── gemini/ # Gemini protocol adapter
│ │ └── openai/ # OpenAI adapter and shared execution core
│ ├── admin/ # Admin API (config/accounts/ops)
│ ├── auth/ # Auth/JWT/credential resolution
│ ├── chathistory/ # Server-side conversation history storage/query
│ ├── claudeconv/ # Claude message conversion helpers
│ ├── compat/ # Compatibility and regression helpers
│ ├── config/ # Config loading/validation/hot reload
│ ├── deepseek/ # DeepSeek upstream client capabilities
│ ├── deepseek/ # DeepSeek upstream client/protocol/transport
│ │ ├── client/ # Login/session/completion/upload/delete calls
│ │ ├── protocol/ # DeepSeek URLs, constants, skip path/pattern
│ │ └── transport/ # DeepSeek transport details
│ ├── devcapture/ # Dev capture and troubleshooting
│ ├── format/ # Response formatting layer
│ │ ├── claude/ # Claude output formatting
│ │ └── openai/ # OpenAI output formatting
│ ├── httpapi/ # HTTP surfaces: OpenAI/Claude/Gemini/Admin
│ │ ├── admin/ # Admin API root assembly and resource packages
│ │ ├── claude/ # Claude HTTP protocol adapter
│ │ ├── gemini/ # Gemini HTTP protocol adapter
│ │ └── openai/ # OpenAI HTTP surface
│ │ ├── chat/ # Chat Completions execution entrypoint
│ │ ├── responses/ # Responses API and response store
│ │ ├── files/ # Files API and inline-file preprocessing
│ │ ├── embeddings/ # Embeddings API
│ │ ├── history/ # OpenAI history split
│ │ └── shared/ # OpenAI HTTP errors/models/tool formatting
│ ├── js/ # Node runtime related logic
│ │ ├── chat-stream/ # Node streaming bridge
│ │ ├── helpers/ # JS helper modules
│ │ │ └── stream-tool-sieve/ # JS implementation of tool sieve
│ │ └── shared/ # Shared semantics between Go/Node
│ ├── prompt/ # Prompt composition
│ ├── promptcompat/ # API request -> DeepSeek web-chat plain-text compatibility
│ ├── rawsample/ # Raw sample read/write and management
│ ├── server/ # Router and middleware assembly
│ │ └── data/ # Router/runtime helper data
│ ├── sse/ # SSE parsing utilities
│ ├── stream/ # Unified stream consumption engine
│ ├── testsuite/ # Testsuite execution framework
│ ├── textclean/ # Text cleanup
│ ├── toolcall/ # Tool-call parsing and repair
│ ├── toolstream/ # Go streaming tool-call anti-leak and delta detection
│ ├── translatorcliproxy/ # Cross-protocol translation bridge
│ ├── util/ # Shared utility helpers
│ ├── version/ # Version query/compare
@@ -90,34 +102,82 @@ ds2api/
```mermaid
flowchart LR
C[Client/SDK] --> R[internal/server/router.go]
R --> OA[OpenAI Adapter]
R --> CA[Claude Adapter]
R --> GA[Gemini Adapter]
R --> AD[Admin API]
C[Client / SDK] --> R[internal/server/router.go]
CA --> BR[translatorcliproxy]
GA --> BR
BR --> CORE[internal/adapter/openai ChatCompletions]
OA --> CORE
subgraph HTTP[HTTP API surface]
OA[internal/httpapi/openai]
CHAT[openai/chat]
RESP[openai/responses]
FILES[openai/files + embeddings]
CA[internal/httpapi/claude]
GA[internal/httpapi/gemini]
AD[internal/httpapi/admin/*]
WEB[internal/webui static admin]
end
CORE --> AUTH[internal/auth + config key/account resolver]
CORE --> POOL[internal/account queue + concurrency]
CORE --> TOOL[internal/toolcall parser + sieve]
CORE --> DS[internal/deepseek client]
subgraph COMPAT[Prompt compatibility core]
PC[internal/promptcompat]
PROMPT[internal/prompt]
HIST[internal/httpapi/openai/history]
end
subgraph RUNTIME[Shared runtime]
AUTH[internal/auth]
POOL[internal/account queue + concurrency]
STREAM[internal/stream + internal/sse]
TOOL[internal/toolcall + internal/toolstream]
DS[internal/deepseek/client]
POW[pow + internal/deepseek/protocol]
end
subgraph NODE[Vercel Node Runtime]
NCS[api/chat-stream.js]
JS[internal/js/chat-stream + stream-tool-sieve]
end
R --> OA --> CHAT
OA --> RESP
OA --> FILES
R --> CA
R --> GA
R --> AD
R --> WEB
R -.Vercel stream.-> NCS
CA --> PC
GA --> PC
CHAT --> PC
RESP --> PC
PC --> PROMPT
PC -.long history.-> HIST
PC --> AUTH
NCS -.Go prepare/release.-> CHAT
NCS --> JS
JS --> TOOL
AUTH --> POOL
CHAT --> STREAM
RESP --> STREAM
STREAM --> TOOL
POOL --> DS
DS --> POW
DS --> U[DeepSeek upstream]
```
## 3. Responsibilities in `internal/`
- `internal/server`: router tree + middlewares (health, protocol routes, Admin/WebUI).
- `internal/adapter/openai`: shared execution core (chat/responses/embeddings + tool semantics).
- `internal/adapter/{claude,gemini}`: protocol wrappers only (no duplicated upstream execution).
- `internal/httpapi/openai/*`: OpenAI HTTP surface split into chat, responses, files, embeddings, history, and shared packages; chat/responses share the promptcompat, stream, and toolcall semantics.
- `internal/httpapi/{claude,gemini}`: protocol wrappers that normalize into the same prompt compatibility semantics without duplicating upstream execution.
- `internal/promptcompat`: compatibility core for turning OpenAI/Claude/Gemini requests into DeepSeek web-chat plain-text context.
- `internal/translatorcliproxy`: structure translation between Claude/Gemini and OpenAI.
- `internal/deepseek`: upstream request/session/PoW/SSE handling.
- `internal/stream` + `internal/sse`: stream parsing and incremental assembly.
- `internal/toolcall`: XML/Markup-family tool-call parsing + anti-leak sieve (`<tool_call>` / `<function_call>` / `<invoke>` / `tool_use` / antml variants).
- `internal/admin`: config/accounts/vercel sync/version/dev-capture endpoints.
- `internal/deepseek/{client,protocol,transport}`: upstream requests, sessions, PoW adaptation, protocol constants, and transport details.
- `internal/js/chat-stream` + `api/chat-stream.js`: Vercel Node streaming bridge; Go prepare/release owns auth, account lease, and completion payload assembly, while Node relays real-time SSE with Go-aligned finalization and tool sieve semantics.
- `internal/stream` + `internal/sse`: Go stream parsing and incremental assembly.
- `internal/toolcall` + `internal/toolstream`: canonical XML tool-call parsing + anti-leak sieve (the only executable format is `<tool_calls>` / `<invoke name="...">` / `<parameter name="...">`).
- `internal/httpapi/admin/*`: Admin API root assembly plus auth/accounts/config/settings/proxies/rawsamples/vercel/history/devcapture/version resource packages.
- `internal/chathistory`: server-side conversation history persistence, pagination, detail lookup, and retention policy.
- `internal/config`: config loading/validation + runtime settings hot-reload.
- `internal/account`: managed account pool, inflight slots, waiting queue.


@@ -4,9 +4,9 @@
> 本文档用于集中维护“代码目录结构 + 模块边界 + 主链路调用关系”。
## 1. 顶层目录结构(展开)
## 1. 顶层目录结构(核心目录)
> 说明:以下为仓库内业务相关目录的**完整展开**(排除 `.git/` 与 `webui/node_modules/` 这类依赖/元数据目录),并标注每个文件夹作用。
> 说明:以下为仓库内主要业务目录(排除 `.git/` 与 `webui/node_modules/` 这类依赖/元数据目录),并标注每个文件夹作用。新增目录以代码为准,不要求在本文做逐文件展开。
```text
ds2api/
@@ -21,34 +21,46 @@ ds2api/
├── docs/ # 项目文档目录
├── internal/ # 核心业务实现(不对外暴露)
│ ├── account/ # 账号池、并发槽位、等待队列
│ ├── adapter/ # 多协议适配层
│ │ ├── claude/ # Claude 协议适配
│ │ ├── gemini/ # Gemini 协议适配
│ │ └── openai/ # OpenAI 协议与统一执行核心
│ ├── admin/ # Admin API(配置/账号/运维)
│ ├── auth/ # 鉴权/JWT/凭证解析
│ ├── chathistory/ # 服务器端对话记录存储与查询
│ ├── claudeconv/ # Claude 消息格式转换工具
│ ├── compat/ # 兼容性辅助与回归支持
│ ├── config/ # 配置加载、校验、热更新
│ ├── deepseek/ # DeepSeek 上游客户端能力
│ ├── deepseek/ # DeepSeek 上游 client/protocol/transport
│ │ ├── client/ # 登录、会话、completion、上传/删除等上游调用
│ │ ├── protocol/ # DeepSeek URL、常量、skip path/pattern
│ │ └── transport/ # DeepSeek 传输层细节
│ ├── devcapture/ # 开发抓包与调试采集
│ ├── format/ # 响应格式化层
│ │ ├── claude/ # Claude 输出格式化
│ │ └── openai/ # OpenAI 输出格式化
│ ├── httpapi/ # HTTP surface(OpenAI/Claude/Gemini/Admin)
│ │ ├── admin/ # Admin API 根装配与资源子包
│ │ ├── claude/ # Claude HTTP 协议适配
│ │ ├── gemini/ # Gemini HTTP 协议适配
│ │ └── openai/ # OpenAI HTTP surface
│ │ ├── chat/ # Chat Completions 执行入口
│ │ ├── responses/ # Responses API 与 response store
│ │ ├── files/ # Files API 与 inline file 预处理
│ │ ├── embeddings/ # Embeddings API
│ │ ├── history/ # OpenAI history split
│ │ └── shared/ # OpenAI HTTP 公共错误/模型/工具格式
│ ├── js/ # Node Runtime 相关逻辑
│ │ ├── chat-stream/ # Node 流式输出桥接
│ │ ├── helpers/ # JS 辅助函数
│ │ │ └── stream-tool-sieve/ # Tool sieve JS 实现
│ │ └── shared/ # Go/Node 共用语义片段
│ ├── prompt/ # Prompt 组装
│ ├── promptcompat/ # API 请求到 DeepSeek 网页纯文本上下文兼容层
│ ├── rawsample/ # raw sample 读写与管理
│ ├── server/ # 路由与中间件装配
│ │ └── data/ # 路由/运行时辅助数据
│ ├── sse/ # SSE 解析工具
│ ├── stream/ # 统一流式消费引擎
│ ├── testsuite/ # 测试集执行框架
│ ├── textclean/ # 文本清洗
│ ├── toolcall/ # 工具调用解析与修复
│ ├── toolstream/ # Go 流式 tool call 防泄漏与增量检测
│ ├── translatorcliproxy/ # 多协议互转桥
│ ├── util/ # 通用工具函数
│ ├── version/ # 版本查询/比较
@@ -90,34 +102,82 @@ ds2api/
```mermaid
flowchart LR
C[Client/SDK] --> R[internal/server/router.go]
R --> OA[OpenAI Adapter]
R --> CA[Claude Adapter]
R --> GA[Gemini Adapter]
R --> AD[Admin API]
C[Client / SDK] --> R[internal/server/router.go]
CA --> BR[translatorcliproxy]
GA --> BR
BR --> CORE[internal/adapter/openai ChatCompletions]
OA --> CORE
subgraph HTTP[HTTP API surface]
OA[internal/httpapi/openai]
CHAT[openai/chat]
RESP[openai/responses]
FILES[openai/files + embeddings]
CA[internal/httpapi/claude]
GA[internal/httpapi/gemini]
AD[internal/httpapi/admin/*]
WEB[internal/webui static admin]
end
CORE --> AUTH[internal/auth + config key/account resolver]
CORE --> POOL[internal/account queue + concurrency]
CORE --> TOOL[internal/toolcall parser + sieve]
CORE --> DS[internal/deepseek client]
subgraph COMPAT[Prompt compatibility core]
PC[internal/promptcompat]
PROMPT[internal/prompt]
HIST[internal/httpapi/openai/history]
end
subgraph RUNTIME[Shared runtime]
AUTH[internal/auth]
POOL[internal/account queue + concurrency]
STREAM[internal/stream + internal/sse]
TOOL[internal/toolcall + internal/toolstream]
DS[internal/deepseek/client]
POW[pow + internal/deepseek/protocol]
end
subgraph NODE[Vercel Node Runtime]
NCS[api/chat-stream.js]
JS[internal/js/chat-stream + stream-tool-sieve]
end
R --> OA --> CHAT
OA --> RESP
OA --> FILES
R --> CA
R --> GA
R --> AD
R --> WEB
R -.Vercel stream.-> NCS
CA --> PC
GA --> PC
CHAT --> PC
RESP --> PC
PC --> PROMPT
PC -.长历史.-> HIST
PC --> AUTH
NCS -.Go prepare/release.-> CHAT
NCS --> JS
JS --> TOOL
AUTH --> POOL
CHAT --> STREAM
RESP --> STREAM
STREAM --> TOOL
POOL --> DS
DS --> POW
DS --> U[DeepSeek upstream]
```
## 3. internal/ 子模块职责
- `internal/server`:路由树和中间件挂载(健康检查、协议入口、Admin/WebUI)。
- `internal/adapter/openai`:统一执行内核(chat/responses/embeddings + tool calling 语义)。
- `internal/adapter/{claude,gemini}`:协议输入输出适配,不重复实现上游调用逻辑。
- `internal/httpapi/openai/*`:OpenAI HTTP surface 按 chat、responses、files、embeddings、history、shared 拆分;chat/responses 共享 promptcompat、stream、toolcall 等核心语义。
- `internal/httpapi/{claude,gemini}`:协议输入输出适配,归一到同一套 prompt compatibility 语义,不重复实现上游调用逻辑。
- `internal/promptcompat`:OpenAI/Claude/Gemini 请求到 DeepSeek 网页纯文本上下文的兼容内核。
- `internal/translatorcliproxy`:Claude/Gemini 与 OpenAI 结构互转。
- `internal/deepseek`:上游请求、会话、PoW、SSE 消费。
- `internal/stream` + `internal/sse`:流式解析与增量处理。
- `internal/toolcall`:以 XML/Markup 家族为核心的工具调用解析与防泄漏筛分(`<tool_call>` / `<function_call>` / `<invoke>` / `tool_use` / antml 变体)。
- `internal/admin`:配置管理、账号管理、Vercel 同步、版本检查、开发抓包。
- `internal/deepseek/{client,protocol,transport}`:上游请求、会话、PoW 适配、协议常量与传输层。
- `internal/js/chat-stream` + `api/chat-stream.js`:Vercel Node 流式桥;Go prepare/release 管理鉴权、账号租约和 completion payload,Node 侧负责实时 SSE 转发并保持与 Go 对齐的终结态和 tool sieve 语义。
- `internal/stream` + `internal/sse`:Go 流式解析与增量处理。
- `internal/toolcall` + `internal/toolstream`:canonical XML 工具调用解析与防泄漏筛分(唯一可执行格式:`<tool_calls>` / `<invoke name="...">` / `<parameter name="...">`)。
- `internal/httpapi/admin/*`:Admin API 根装配与 auth/accounts/config/settings/proxies/rawsamples/vercel/history/devcapture/version 等资源子包。
- `internal/chathistory`:服务器端对话记录持久化、分页、单条详情和保留策略。
- `internal/config`:配置加载、校验、运行时 settings 热更新。
- `internal/account`:托管账号池、并发槽位、等待队列。

View File

@@ -59,10 +59,12 @@ docker-compose -f docker-compose.dev.yml up
| Language | Standards |
| --- | --- |
| **Go** | Run `gofmt -w` after editing Go files; before committing, run `./scripts/lint.sh` (format check + golangci-lint) |
| **JavaScript/React** | Follow existing project style (functional components) |
| **Commit messages** | Use semantic prefixes: `feat:`, `fix:`, `docs:`, `refactor:`, `style:`, `perf:`, `chore:` |
Do not silently ignore cleanup errors from I/O-style calls such as `Close`, `Flush`, or `Sync`; return them when possible, otherwise log them explicitly.
## Submitting a PR
1. Fork the repo
@@ -85,10 +87,13 @@ Manually build WebUI to `static/admin/`:
## Running Tests
```bash
# Local PR gates (kept aligned with the quality-gates workflow)
./scripts/lint.sh
./tests/scripts/check-refactor-line-gate.sh
./tests/scripts/run-unit-all.sh
npm run build --prefix webui
# End-to-end live tests (real accounts; recommended for releases or high-risk changes)
./tests/scripts/run-live.sh
```

View File

@@ -59,10 +59,12 @@ docker-compose -f docker-compose.dev.yml up
| Language | Standards |
| --- | --- |
| **Go** | Run `gofmt -w` after editing Go files; before committing, run `./scripts/lint.sh` (format check + golangci-lint) |
| **JavaScript/React** | Follow the existing code style (functional components) |
| **Commit messages** | Use semantic prefixes: `feat:`, `fix:`, `docs:`, `refactor:`, `style:`, `perf:`, `chore:` |
Do not silently ignore errors from I/O-style cleanup calls such as `Close`, `Flush`, or `Sync`; return them when possible, otherwise log them explicitly.
## Submitting a PR
1. Fork the repo
@@ -85,10 +87,13 @@ docker-compose -f docker-compose.dev.yml up
## Running Tests
```bash
# Local PR gates (kept aligned with the quality-gates workflow)
./scripts/lint.sh
./tests/scripts/check-refactor-line-gate.sh
./tests/scripts/run-unit-all.sh
npm run build --prefix webui
# End-to-end live tests (real accounts; recommended for releases or high-risk changes)
./tests/scripts/run-live.sh
```

View File

@@ -259,12 +259,13 @@ VERCEL_TEAM_ID=team_xxxxxxxxxxxx # optional for personal accounts
| `DS2API_ENV_WRITEBACK` | When `DS2API_CONFIG_JSON` is present, auto-write to `DS2API_CONFIG_PATH` and switch to file-backed mode after success (`1/true/yes/on`) | Disabled |
| `DS2API_VERCEL_INTERNAL_SECRET` | Hybrid streaming internal auth | Falls back to `DS2API_ADMIN_KEY` |
| `DS2API_VERCEL_STREAM_LEASE_TTL_SECONDS` | Stream lease TTL | `900` |
| `DS2API_RAW_STREAM_SAMPLE_ROOT` | Raw stream sample root for saving/reading samples | `tests/raw_stream_samples` |
| `VERCEL_TOKEN` | Vercel sync token | — |
| `VERCEL_PROJECT_ID` | Vercel project ID | — |
| `VERCEL_TEAM_ID` | Vercel team ID | — |
| `DS2API_VERCEL_PROTECTION_BYPASS` | Deployment protection bypass for internal Node→Go calls | — |
### 3.4 Vercel Architecture
```text
Request ──────┐
@@ -300,13 +301,14 @@ Vercel Go Runtime applies platform-level response buffering, so this project use
- `api/chat-stream.js` falls back to Go entry (`?__go=1`) for non-stream requests only
- Streaming requests (including requests with `tools`) stay on the Node path and use Go-aligned tool-call anti-leak handling
- The Node stream path also mirrors Go finalization semantics: empty visible output returns the same shaped error SSE, and empty `content_filter` returns a `content_filter` error
- WebUI non-stream test calls `?__go=1` directly to avoid Node hop timeout on long requests
#### Function Duration
`vercel.json` sets `maxDuration: 300` for both `api/chat-stream.js` and `api/index.go` (subject to your Vercel plan limits).
### 3.5 Vercel Troubleshooting
#### Go Build Failure
@@ -350,7 +352,7 @@ If API responses return Vercel HTML `Authentication Required`:
- **Option B**: Add `x-vercel-protection-bypass` header to requests
- **Option C**: Set `VERCEL_AUTOMATION_BYPASS_SECRET` (or `DS2API_VERCEL_PROTECTION_BYPASS`) for internal Node→Go calls
### 3.6 Build Artifacts Not Committed
- `static/admin` directory is not in Git
- Vercel / Docker automatically generate WebUI assets during build
@@ -546,7 +548,7 @@ curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:5001/admin
curl http://127.0.0.1:5001/v1/chat/completions \
-H "Authorization: Bearer your-api-key" \
-H "Content-Type: application/json" \
-d '{"model":"deepseek-v4-flash","messages":[{"role":"user","content":"hello"}]}'
```
---
@@ -577,4 +579,4 @@ The testsuite automatically performs:
- ✅ Live scenario verification (OpenAI/Claude/Admin/concurrency/toolcall/streaming)
- ✅ Full request/response artifact logging for debugging
For detailed testsuite documentation, see [TESTING.md](TESTING.md). The fixed local PR gates are listed in [TESTING.md](TESTING.md#pr-门禁--pr-gates).

View File

@@ -259,12 +259,23 @@ VERCEL_TEAM_ID=team_xxxxxxxxxxxx # optional for personal accounts
| `DS2API_ENV_WRITEBACK` | When `DS2API_CONFIG_JSON` is detected, auto-write it to `DS2API_CONFIG_PATH` and switch to file-backed mode after success (`1/true/yes/on`) | Disabled |
| `DS2API_VERCEL_INTERNAL_SECRET` | Hybrid streaming internal auth | Falls back to `DS2API_ADMIN_KEY` |
| `DS2API_VERCEL_STREAM_LEASE_TTL_SECONDS` | Stream lease TTL | `900` |
| `DS2API_RAW_STREAM_SAMPLE_ROOT` | Root directory for saving/reading raw stream samples | `tests/raw_stream_samples` |
| `VERCEL_TOKEN` | Vercel sync token | — |
| `VERCEL_PROJECT_ID` | Vercel project ID | — |
| `VERCEL_TEAM_ID` | Vercel team ID | — |
| `DS2API_VERCEL_PROTECTION_BYPASS` | Deployment-protection bypass secret (internal Node→Go calls) | — |
### 3.3 Runtime Behavior Settings (via Admin API)
Some runtime behavior cannot be configured through environment variables and must be set via the Admin API after deployment, for example:
- **Auto-delete session mode** (`auto_delete.mode`): supports `none` / `single` / `all`, default `none`; update via `PUT /admin/settings`.
- **Per-account concurrency cap** (`account_max_inflight`): supported as an environment variable, but can also be hot-updated through the Admin API.
- **Global concurrency cap** (`global_max_inflight`): same as above.
See the `/admin/settings` section in [API.md](../API.md#admin-接口) for details.
### 3.4 Vercel Architecture
```text
Request ──────┐
@@ -300,13 +311,14 @@ api/index.go api/chat-stream.js
- `api/chat-stream.js` falls back to the Go entry (`?__go=1`) for non-stream requests only
- Streaming requests (including requests with `tools`) stay on the Node path and apply Go-aligned tool-call anti-leak handling
- The Node stream path also mirrors Go finalization semantics: empty visible output returns the same shaped error SSE, and an empty `content_filter` returns a `content_filter` error
- The WebUI "non-stream test" calls `?__go=1` directly to avoid Node-hop timeouts on long requests
#### Function Duration
`vercel.json` sets `maxDuration: 300` for both `api/chat-stream.js` and `api/index.go` (subject to your Vercel plan limits).
### 3.5 Vercel Troubleshooting
#### Go Build Failure
@@ -350,7 +362,7 @@ No Output Directory named "public" found after the Build completed.
- **Option B**: add the `x-vercel-protection-bypass` header to requests
- **Option C**: set `VERCEL_AUTOMATION_BYPASS_SECRET` (or `DS2API_VERCEL_PROTECTION_BYPASS`); affects internal Node→Go calls only
### 3.6 Build Artifacts Not Committed
- The `static/admin` directory is not in Git
- Vercel / Docker generates the WebUI static assets automatically at build time
@@ -546,7 +558,7 @@ curl -s -o /dev/null -w "%{http_code}" http://127.0.0.1:5001/admin
curl http://127.0.0.1:5001/v1/chat/completions \
  -H "Authorization: Bearer your-api-key" \
  -H "Content-Type: application/json" \
  -d '{"model":"deepseek-v4-flash","messages":[{"role":"user","content":"hello"}]}'
```
---
@@ -577,4 +589,4 @@ go run ./cmd/ds2api-tests \
- ✅ Live scenario verification (OpenAI/Claude/Admin/concurrency/toolcall/streaming)
- ✅ Full request/response artifact logging (for post-incident analysis)
For detailed testsuite documentation, see [TESTING.md](TESTING.md). The fixed local PR gates are defined in [TESTING.md](TESTING.md#pr-门禁--pr-gates).

View File

@@ -15,14 +15,17 @@
### Topical docs
- [API -> pure-text web-chat compatibility pipeline](./prompt-compatibility.md)
- [Tool-calling unified semantics](./toolcall-semantics.md)
- [DeepSeek SSE behavior notes (reverse-engineered)](./DeepSeekSSE行为结构说明-2026-04-05.md)
### Maintenance conventions
- Documentation updates must be grounded in the actual implementation: root routing lives in `internal/server/router.go`, protocol/resource routes in `internal/httpapi/*/**/routes.go` and `internal/httpapi/admin/handler.go`, config defaults in `internal/config/*`, models/aliases in `internal/config/models.go`, and the prompt-compatibility pipeline in the code entrypoints listed by `docs/prompt-compatibility.md`.
- `README.MD` / `README.en.md`: for first-time users; keeps "what it is + how to run it quickly".
- `docs/ARCHITECTURE*.md`: for developers; the central place for project structure, module responsibilities, and call chains.
- `API*.md`: for client integrators; focused on endpoint behavior, auth, and examples.
- `docs/prompt-compatibility.md`: for maintainers; the central place for the unified "API -> pure-text web-chat context" compatibility semantics, which must be updated whenever related behavior changes.
- Other `docs/*.md`: topical notes; avoid pasting the same section into multiple documents.
---
@@ -42,12 +45,15 @@ Recommended reading order:
### Topical docs
- [API -> pure-text web-chat compatibility pipeline](./prompt-compatibility.md)
- [Tool-calling unified semantics](./toolcall-semantics.md)
- [DeepSeek SSE behavior notes (reverse-engineered)](./DeepSeekSSE行为结构说明-2026-04-05.md)
### Maintenance conventions
- Documentation updates must be grounded in the actual implementation: root routing lives in `internal/server/router.go`, protocol/resource routes live in `internal/httpapi/*/**/routes.go` and `internal/httpapi/admin/handler.go`, config defaults in `internal/config/*`, models/aliases in `internal/config/models.go`, and the prompt compatibility pipeline in the code entrypoints listed by `docs/prompt-compatibility.md`.
- `README.MD` / `README.en.md`: onboarding-oriented (“what + quick start”).
- `docs/ARCHITECTURE*.md`: developer-oriented source of truth for module boundaries and execution flow.
- `API*.md`: integration-oriented behavior/contracts.
- `docs/prompt-compatibility.md`: maintainer-oriented source of truth for the “API -> pure-text web-chat context” compatibility flow; update it whenever related behavior changes.
- Other `docs/*.md`: focused topics, avoid copy-pasting the same section into multiple files.

View File

@@ -20,6 +20,25 @@ The Node unit-test script first runs a `node --check` syntax gate, then uses `--test-co
---
## PR 门禁 | PR Gates
Before opening or updating a PR, run the local gates equivalent to `.github/workflows/quality-gates.yml`:
```bash
./scripts/lint.sh
./tests/scripts/check-refactor-line-gate.sh
./tests/scripts/run-unit-all.sh
npm run build --prefix webui
```
Notes:
- `./scripts/lint.sh` runs the Go format check and `golangci-lint`; after editing Go files it is still recommended to run `gofmt -w <files>` first.
- `run-unit-all.sh` invokes the Go and Node unit-test entrypoints in sequence.
- `run-live.sh` is the real-account end-to-end test; it is good extra verification for releases or high-risk changes, but it is not part of the fixed per-PR local gates.
---
## 快速开始 | Quick Start
### 单元测试 | Unit Tests
@@ -39,7 +58,7 @@ The Node unit-test script first runs a `node --check` syntax gate, then uses `--test-co
./tests/scripts/check-refactor-line-gate.sh
./tests/scripts/check-node-split-syntax.sh
# Historical stage gate: stage 6 manual smoke-test sign-off check (reads plans/stage6-manual-smoke.md by default)
./tests/scripts/check-stage6-manual-smoke.sh
```
@@ -190,8 +209,8 @@ go test -v -run TestParseToolCallsWithDeepSeekHallucination ./internal/toolcall/
# Run format-related tests
go test -v ./internal/format/...
# Run HTTP API tests
go test -v ./internal/httpapi/openai/...
```
### 调试 Tool Call 问题 | Debugging Tool Call Issues

View File

@@ -0,0 +1,402 @@
# API -> Pure-Text Web-Chat Compatibility Pipeline
Doc navigation: [Overview](../README.MD) / [Architecture](./ARCHITECTURE.md) / [API reference](../API.md) / [Testing guide](./TESTING.md)
> This document is DS2API's dedicated explanation of how OpenAI / Claude / Gemini-style API requests are adapted into DeepSeek web-chat pure-text context.
> It is one of the project's most important compatibility artifacts. Any change to message normalization, tool-prompt injection, tool-history retention, file references, history split, or downstream completion-payload assembly must update this document in step.
## 1. Core Conclusion
DS2API does not forward the client's `messages`, `tools`, and `attachments` to the downstream as-is.
Instead, it compresses these high-level API semantics into three kinds of input that DeepSeek web chat understands more easily:
1. `prompt`
A single string carrying role markers, system instructions, message history, assistant reasoning tags, historical tool-call XML, and so on.
2. `ref_file_ids`
An array of file references carrying attachments, inline-uploaded files, and, when necessary, history split out into a file.
3. Control flags
For example `thinking_enabled`, `search_enabled`, and some passthrough parameters.
In other words, the project's most important compatibility move is translating a "structured API conversation" into "web-chat pure-text context + file references".
## 2. Why This Is the Core Artifact
Because for the downstream, the truly stable input surface is not the native OpenAI/Claude/Gemini schema, but:
- one continuous conversation prompt
- one set of referenceable files
- a few switches
This is also why much of the code that looks like "protocol compatibility" on the surface eventually converges to the same kind of logic:
- first unify messages from different protocols into an internal message sequence
- then rewrite tool declarations into system-prompt text
- then rewrite historical tool calls / tool results into prompt-visible content
- finally emit a DeepSeek completion payload
## 3. Unified Mental Model
The current main pipeline can be read as:
```text
client request
-> HTTP API surface (OpenAI / Claude / Gemini)
-> promptcompat unified message normalization
-> tool-prompt injection
-> DeepSeek-style prompt assembly
-> file collection / inline upload / history split (OpenAI path)
-> completion payload
-> downstream web-chat endpoint
```
The key code entrypoints:
- OpenAI Chat / Responses:
[internal/promptcompat/request_normalize.go](../internal/promptcompat/request_normalize.go)
- OpenAI prompt assembly:
[internal/promptcompat/prompt_build.go](../internal/promptcompat/prompt_build.go)
- OpenAI message normalization:
[internal/promptcompat/message_normalize.go](../internal/promptcompat/message_normalize.go)
- Claude normalization:
[internal/httpapi/claude/standard_request.go](../internal/httpapi/claude/standard_request.go)
- Claude message and tool_use/tool_result normalization:
[internal/httpapi/claude/handler_utils.go](../internal/httpapi/claude/handler_utils.go)
- Gemini reuse of the OpenAI prompt builder:
[internal/httpapi/gemini/convert_request.go](../internal/httpapi/gemini/convert_request.go)
- DeepSeek prompt role-marker assembly:
[internal/prompt/messages.go](../internal/prompt/messages.go)
- prompt-visible tool-history XML:
[internal/prompt/tool_calls.go](../internal/prompt/tool_calls.go)
- completion payload:
[internal/promptcompat/standard_request.go](../internal/promptcompat/standard_request.go)
## 4. What the Downstream Actually Receives
After normalization completes, the core shape of the downstream completion payload is:
```json
{
"chat_session_id": "session-id",
"model_type": "default",
"parent_message_id": null,
"prompt": "<begin▁of▁sentence>...",
"ref_file_ids": [
"file-history",
"file-systemprompt",
"file-other-attachment"
],
"thinking_enabled": true,
"search_enabled": false
}
```
The key points:
- `prompt` is the primary carrier of conversation context.
- `ref_file_ids` carries only file references, never ordinary text messages.
- `tools` is not sent downstream as a native tool schema; it is rewritten into `prompt`.
- OpenAI Chat / Responses natively go through the unified OpenAI normalization and DeepSeek payload assembly; Claude / Gemini reuse the OpenAI prompt/tool semantics wherever possible: Gemini directly reuses `promptcompat.BuildOpenAIPromptForAdapter`, and the Claude Messages endpoint is converted to OpenAI chat form before execution in proxyable scenarios.
- Client thinking / reasoning switches are normalized into the downstream `thinking_enabled`. A Claude surface without a `thinking` field is treated as off per Anthropic semantics; Gemini's `generationConfig.thinkingConfig.thinkingBudget` is translated into the same thinking switch. When thinking is off, even if the upstream returns `response/thinking_content`, the compatibility layer does not emit it as visible body text.
## 5. How the Prompt Is Assembled
### 5.1 Role Markers
The final prompt uses DeepSeek-style role markers:
- `<begin▁of▁sentence>`
- `<System>`
- `<User>`
- `<Assistant>`
- `<Tool>`
- `<end▁of▁instructions>`
- `<end▁of▁sentence>`
- `<end▁of▁toolresults>`
Implementation:
[internal/prompt/messages.go](../internal/prompt/messages.go)
### 5.2 Thinking Continuity
When thinking is enabled, an extra system block is inserted at the very front, reminding the model to:
- continue the existing conversation rather than starting over
- treat earlier messages as binding context
- not leave the final answer only inside the reasoning
This block is not part of the client's original messages; it is a continuity contract injected by the compatibility layer.
### 5.3 Adjacent Same-Role Messages Are Merged
In the final `MessagesPrepareWithThinking`, adjacent messages with the same role are merged into a single block, separated by a blank line.
This means:
- the prompt shows merged role blocks
- not the client's original message-by-message ordering
## 6. Why Tools Are "Text Injection", Not Native Transport
The project treats tool capability as part of the prompt constraints.
Concretely:
1. Serialize each tool's name, description, and parameter schema into text.
2. Assemble them into a large `You have access to these tools:` section.
3. Append the unified XML tool-call format constraints.
4. Merge the whole section into the system prompt.
Positive examples still demonstrate only the canonical XML (`<tool_calls>`, `<invoke name="...">`, `<parameter name="...">`).
The prompt additionally stresses that when calling a tool, the first non-whitespace character of the tool block must be `<tool_calls>`; the model must not emit only `</tool_calls>` while dropping the opening tag.
Example tool names come only from the tools actually declared in the current request; when the request lacks enough known tool shapes, the corresponding single-tool, multi-tool, or nested example sections are omitted, so unavailable tool names never end up in the prompt.
For execution-style tools, the script content must go into the execution parameter itself: `Bash` / `execute_command` use `command`, `exec_command` uses `cmd`; examples must not present the script via `path` / `content` file-write parameters.
OpenAI path implementation:
[internal/promptcompat/tool_prompt.go](../internal/promptcompat/tool_prompt.go)
Claude path implementation:
[internal/httpapi/claude/handler_utils.go](../internal/httpapi/claude/handler_utils.go)
Unified tool-call format template:
[internal/toolcall/tool_prompt.go](../internal/toolcall/tool_prompt.go)
This is also a key design of the project's pure-text web-chat compatibility:
- to the downstream, tools are essentially in-prompt rules
- not a native tool-schema transport
## 7. How Assistant `tool_calls` / Reasoning Are Preserved
### 7.1 Reasoning Preservation
Assistant reasoning becomes an explicit tag block:
```text
[reasoning_content]
...
[/reasoning_content]
```
followed by the visible answer body.
### 7.2 Historical `tool_calls` Preservation
Historical assistant `tool_calls` are not kept as native OpenAI JSON; they are converted into prompt-visible XML:
```xml
<tool_calls>
<invoke name="read_file">
<parameter name="path"><![CDATA[src/main.go]]></parameter>
</invoke>
</tool_calls>
```
This is the only supported canonical tool-calling shape in the current project; every other shape is preserved as plain text and never treated as executable call syntax.
The exception is one very narrow model-mistake repair in the parser: if the assistant emitted `<invoke ...>` ... `</tool_calls>` but dropped the leading opening `<tool_calls>`, the parsing stage restores the wrapper and retries recognition.
This matters because it determines that:
- historical tool calls are visible text history in the prompt
- not hidden structured metadata
Implementation:
[internal/prompt/tool_calls.go](../internal/prompt/tool_calls.go)
### 7.3 Tool Result Preservation
Results from the tool / function roles enter the prompt as `<Tool>...<end▁of▁toolresults>`.
If the tool content is empty, it is currently filled with the string `"null"` so the whole tool turn is not lost.
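The empty-content fallback can be sketched as a one-line rule; the function name here is illustrative, not the project's actual helper:

```go
package main

import (
	"fmt"
	"strings"
)

// toolContentForPrompt keeps a tool turn alive even when its result is
// empty, by substituting the literal string "null".
func toolContentForPrompt(content string) string {
	if strings.TrimSpace(content) == "" {
		return "null"
	}
	return content
}

func main() {
	fmt.Println(toolContentForPrompt(""))         // empty result becomes "null"
	fmt.Println(toolContentForPrompt(`{"ok":1}`)) // non-empty passes through
}
```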
## 8. The Actual Semantics of Files, Attachments, and systemprompt Files
Two kinds of things must be clearly distinguished:
1. Textual system prompts
e.g. OpenAI `developer` / `system`, Responses `instructions`, Claude top-level `system`.
These go into `prompt`.
2. File-based system prompts
e.g. files uploaded via attachments, `input_file`, base64, or data URLs.
These are not inlined into `prompt`; they go into `ref_file_ids`.
OpenAI file-related implementation:
- inline/base64/data URL upload:
[internal/httpapi/openai/files/file_inline_upload.go](../internal/httpapi/openai/files/file_inline_upload.go)
- file-ID collection:
[internal/promptcompat/file_refs.go](../internal/promptcompat/file_refs.go)
Conclusion:
- systemprompt text lives in the prompt
- systemprompt files usually live only in `ref_file_ids`
Unless the caller expands the file content and places it into the system/developer text themselves, file content does not automatically appear in the prompt body.
## 9. Why Multi-Turn History Is Not Always Fully Inlined in the Prompt
History split is now forced on globally; `history_split.enabled=false` in old configs is ignored. By default it can trigger from the 2nd user turn onward, and `history_split.trigger_after_turns` still adjusts the trigger threshold.
Related implementation:
- config accessors:
[internal/config/store_accessors.go](../internal/config/store_accessors.go)
- history split:
[internal/httpapi/openai/history/history_split.go](../internal/httpapi/openai/history/history_split.go)
Behavior once triggered:
1. Older history messages are cut out.
2. The old history is re-serialized into a text file.
3. The uploaded file name is fixed as `HISTORY.txt`.
4. Inside the file content, the wrapper name `IGNORE` is used to close DeepSeek's native file markers.
5. After upload, the file's `file_id` is placed first in `ref_file_ids`.
6. The live prompt keeps only:
- system / developer
- context from the latest user turn onward
The history file content is not free-form text; it is a transcript re-serialized with the same role markers:
```text
[uploaded filename]: HISTORY.txt
[file content end]
<begin▁of▁sentence><User>...<Assistant>...<Tool>...
[file name]: IGNORE
[file content begin]
```
So in the current implementation, the "full context" is usually spread across two places:
- the live context inside `prompt`
- the history transcript file referenced from `ref_file_ids`
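A rough sketch of the `HISTORY.txt` payload shape shown above: the transcript is bracketed so that DeepSeek's native file markers are closed before it and re-opened under the `IGNORE` name after it. The marker strings are taken from the example in this section, and the function is illustrative only; the real serialization lives in `internal/httpapi/openai/history/history_split.go`:

```go
package main

import "fmt"

// buildHistoryFile wraps a role-marker transcript so the platform's
// surrounding file markers are neutralized: "[file content end]" closes the
// real HISTORY.txt marker, and the trailing "[file name]: IGNORE" block
// re-opens a dummy file entry that absorbs the platform's closing marker.
func buildHistoryFile(transcript string) string {
	return "[file content end]\n" +
		transcript + "\n" +
		"[file name]: IGNORE\n" +
		"[file content begin]"
}

func main() {
	fmt.Println(buildHistoryFile("<begin▁of▁sentence><User>hi<Assistant>hello"))
}
```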
## 10. Differences Between Protocol Entrypoints
### 10.1 OpenAI Chat / Responses
Characteristics:
- `developer` maps to `system`
- Responses `instructions` is prepended as a system message
- `tools` is injected into the system prompt
- `attachments` / `input_file` / inline files go into `ref_file_ids`
- history split mainly takes effect on this path
### 10.2 Claude Messages
Characteristics:
- the top-level `system` takes priority as the system prompt
- `tool_use` / `tool_result` are converted into the unified assistant/tool history semantics
- `tools` is likewise merged into the system prompt
- normal execution goes through `internal/httpapi/claude/handler_messages.go` onto the OpenAI chat path; model aliases are first resolved to native DeepSeek models
- the current code has no `ref_file_ids` attachment pipeline as complete as OpenAI's
### 10.3 Gemini
Characteristics:
- `systemInstruction`, `contents.parts`, `functionCall`, and `functionResponse` are normalized first
- tools are converted into OpenAI-style function schemas
- prompt building reuses OpenAI's `promptcompat.BuildOpenAIPromptForAdapter`
- unrecognized non-text parts are safely serialized into the prompt, with binary / suspected-base64 content elided or truncated
In other words, Gemini stays as close to OpenAI as possible in final prompt semantics.
## 11. A Realistic Final-Context Example
Suppose the user sends a multi-turn request with:
- system/developer text
- tools
- one file-based systemprompt attachment
- historical assistant tool calls / tool results
- history split already triggered
Then the final context looks closer to:
```json
{
"prompt": "<begin▁of▁sentence><System>continuity instructions...\\n\\noriginal system / developer\\n\\nYou have access to these tools: ...<end▁of▁instructions><User>latest question<Assistant>",
"ref_file_ids": [
"file-history-ignore",
"file-systemprompt",
"file-other-attachment"
],
"thinking_enabled": true,
"search_enabled": false
}
```
This is exactly the core outcome of "API to pure-text web chat":
- most structured semantics are compressed into `prompt`
- files stay files
- history is split out into a file when necessary
## 12. Changes That Must Update This Document
Touching any of the following behaviors requires updating this document in the same commit or PR:
- role-mapping changes
- system / developer / instructions merge-rule changes
- assistant reasoning preservation-format changes
- changes to the XML rendering of historical assistant `tool_calls`
- tool-result injection changes
- tool-prompt template or tool_choice constraint changes
- inline file upload / file-reference collection rule changes
- history-split trigger conditions, upload format, or `IGNORE` wrapper format changes
- completion-payload field-semantics changes
- changes in how Claude / Gemini reuse this unified semantics
Check these files first:
- `internal/promptcompat/request_normalize.go`
- `internal/promptcompat/prompt_build.go`
- `internal/promptcompat/message_normalize.go`
- `internal/promptcompat/tool_prompt.go`
- `internal/httpapi/openai/files/file_inline_upload.go`
- `internal/promptcompat/file_refs.go`
- `internal/httpapi/openai/history/history_split.go`
- `internal/promptcompat/responses_input_normalize.go`
- `internal/httpapi/claude/standard_request.go`
- `internal/httpapi/claude/handler_utils.go`
- `internal/httpapi/gemini/convert_request.go`
- `internal/httpapi/gemini/convert_messages.go`
- `internal/httpapi/gemini/convert_tools.go`
- `internal/prompt/messages.go`
- `internal/prompt/tool_calls.go`
- `internal/promptcompat/standard_request.go`
## 13. Suggested Minimal Verification
After changing this pipeline, at least add or check these tests:
- `go test ./internal/prompt/...`
- `go test ./internal/httpapi/openai/...`
- `go test ./internal/httpapi/claude/...`
- `go test ./internal/httpapi/gemini/...`
- `go test ./internal/util/...`
If the change touches tool-call compatibility semantics, also check:
- `go test ./internal/toolcall/...`
- `node --test tests/node/stream-tool-sieve.test.js`
## 14. Documentation Sync Conventions
This document is the dedicated explanation of this compatibility pipeline.
If externally visible API behavior also changes, additionally check:
- [API.md](../API.md)
- [API.en.md](../API.en.md)
- [docs/toolcall-semantics.md](./toolcall-semantics.md)
The principle:
- an internal pipeline change updates at least this document
- an externally visible contract change also updates the API docs

View File

@@ -1,74 +1,75 @@
# Tool call parsing semantics (unified Go/Node semantics)
This document describes the **actual behavior** in the current code, with `internal/toolcall`, `internal/toolstream`, and `internal/js/helpers/stream-tool-sieve` as the source of truth.
Doc navigation: [Overview](../README.MD) / [Architecture](./ARCHITECTURE.md) / [Testing guide](./TESTING.md)
## 1) The Only Executable Format
The current version treats only the following canonical XML as an executable tool call:
```xml
<tool_calls>
<invoke name="read_file">
<parameter name="path"><![CDATA[README.MD]]></parameter>
</invoke>
</tool_calls>
```
Constraints:
- there must be a `<tool_calls>...</tool_calls>` wrapper
- each call must sit inside `<invoke name="...">...</invoke>`
- the tool name must go in the `name` attribute of `invoke`
- parameters must use `<parameter name="...">...</parameter>`
Compatibility repair:
- If the model omits the opening `<tool_calls>` but still emits one or more `<invoke ...>` blocks ending with `</tool_calls>`, the Go parsing path restores the missing opening wrapper before parsing.
- This is a narrow repair for a common model mistake and does not change the recommended output format: the prompt still asks the model to emit the complete canonical XML directly.
## 2) Non-Canonical Content
Any content that does not match the canonical XML shape above is preserved as plain text and is never executed. The one exception is the narrow repair scenario from the previous section (opening `<tool_calls>` missing, closing `</tool_calls>` present).
The current parser does not treat the allow-list as a hard security boundary: even when a list of declared tool names is provided, XML containing undeclared tool names is still parsed on a best-effort basis and handed to the protocol output layer; the execution side must still validate tool names and parameters itself.
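Because the parser does not enforce the allow-list, the executor should perform its own check before running anything; a minimal sketch (the tool names and the `validateCall` helper here are illustrative):

```go
package main

import "fmt"

// validateCall rejects tool calls whose name was not declared in the
// request, and requires a non-nil parameter map; the parser alone does not
// guarantee either property.
func validateCall(allowed map[string]bool, name string, input map[string]any) error {
	if !allowed[name] {
		return fmt.Errorf("tool %q is not declared in this request", name)
	}
	if input == nil {
		return fmt.Errorf("tool %q called without parameters", name)
	}
	return nil
}

func main() {
	allowed := map[string]bool{"read_file": true}
	fmt.Println(validateCall(allowed, "read_file", map[string]any{"path": "README.MD"}))
	fmt.Println(validateCall(allowed, "rm_rf", map[string]any{}))
}
```

Parameter schema validation (types, required fields) belongs in the same executor-side layer.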
## 3) Streaming and Anti-Leak Behavior
On the streaming path (Go / Node aligned):
- the canonical `<tool_calls>` wrapper enters structured capture
- if the stream starts directly at `<invoke ...>` but later supplies `</tool_calls>`, the Go streaming sieve also attempts recovery via the missing-opening-wrapper repair path
- successfully recognized tool calls are never echoed back into plain text
- blocks that do not match the new format are not executed and pass through verbatim as text
- XML examples inside fenced code blocks are always treated as plain text
## 4) Output Structure
`ParseToolCallsDetailed` / `parseToolCallsDetailed` return:
- `calls`: the parsed tool-call list (`name` + `input`).
- `sawToolCallSyntax`: `true` when the canonical wrapper is detected, or when the repairable missing-opening-wrapper shape is matched.
- `rejectedByPolicy`: currently fixed to `false` (reserved field).
- `rejectedToolNames`: currently fixed to an empty array (reserved field).
> The current `filterToolCallsDetailed` only performs structural cleaning; it does not hard-reject tool names against an allow-list.
## 5) Practical Guidance
1. Demonstrate only the canonical XML syntax in prompts.
2. Upstream clients should still emit canonical XML directly; DS2API only narrowly repairs the common "closing tag present, opening tag missing" mistake and does not generalize to accepting other legacy formats.
3. Do not rely on the parser for security control; the executor side must still validate tool names and parameters.
## 6) Regression Verification
Run directly:
```bash
go test -v -run 'TestParseToolCalls|TestProcessToolSieve' ./internal/toolcall ./internal/toolstream ./internal/httpapi/openai/...
node --test tests/node/stream-tool-sieve.test.js
```
Key coverage:
- the canonical `<tool_calls>` wrapper parses correctly
- non-canonical content passes through as plain text
- code-block examples are not executed

View File

@@ -1,34 +0,0 @@
package claude
import "testing"
type mockClaudeConfig struct {
m map[string]string
}
func (m mockClaudeConfig) ClaudeMapping() map[string]string { return m.m }
func (mockClaudeConfig) CompatStripReferenceMarkers() bool { return true }
func TestNormalizeClaudeRequestUsesConfigInterfaceMapping(t *testing.T) {
req := map[string]any{
"model": "claude-opus-4-6",
"messages": []any{
map[string]any{"role": "user", "content": "hello"},
},
}
out, err := normalizeClaudeRequest(mockClaudeConfig{
m: map[string]string{
"fast": "deepseek-chat",
"slow": "deepseek-reasoner-search",
},
}, req)
if err != nil {
t.Fatalf("normalizeClaudeRequest error: %v", err)
}
if out.Standard.ResolvedModel != "deepseek-reasoner-search" {
t.Fatalf("resolved model mismatch: got=%q", out.Standard.ResolvedModel)
}
if !out.Standard.Thinking || !out.Standard.Search {
t.Fatalf("unexpected flags: thinking=%v search=%v", out.Standard.Thinking, out.Standard.Search)
}
}

View File

@@ -1,72 +0,0 @@
package openai
import (
"net/http"
"strings"
"sync"
"time"
"github.com/go-chi/chi/v5"
"ds2api/internal/auth"
"ds2api/internal/config"
"ds2api/internal/util"
)
const (
// openAIUploadMaxSize limits total multipart request body size (100 MiB).
openAIUploadMaxSize = 100 << 20
// openAIGeneralMaxSize limits total JSON request body size (100 MiB).
openAIGeneralMaxSize = 100 << 20
)
// writeJSON is a package-internal alias kept to avoid mass-renaming across
// every call-site in this package.
var writeJSON = util.WriteJSON
type Handler struct {
Store ConfigReader
Auth AuthResolver
DS DeepSeekCaller
leaseMu sync.Mutex
streamLeases map[string]streamLease
responsesMu sync.Mutex
responses *responseStore
}
func (h *Handler) compatStripReferenceMarkers() bool {
if h == nil || h.Store == nil {
return true
}
return h.Store.CompatStripReferenceMarkers()
}
type streamLease struct {
Auth *auth.RequestAuth
ExpiresAt time.Time
}
func RegisterRoutes(r chi.Router, h *Handler) {
r.Get("/v1/models", h.ListModels)
r.Get("/v1/models/{model_id}", h.GetModel)
r.Post("/v1/chat/completions", h.ChatCompletions)
r.Post("/v1/responses", h.Responses)
r.Get("/v1/responses/{response_id}", h.GetResponseByID)
r.Post("/v1/files", h.UploadFile)
r.Post("/v1/embeddings", h.Embeddings)
}
func (h *Handler) ListModels(w http.ResponseWriter, _ *http.Request) {
writeJSON(w, http.StatusOK, config.OpenAIModelsResponse())
}
func (h *Handler) GetModel(w http.ResponseWriter, r *http.Request) {
modelID := strings.TrimSpace(chi.URLParam(r, "model_id"))
model, ok := config.OpenAIModelByID(h.Store, modelID)
if !ok {
writeOpenAIError(w, http.StatusNotFound, "Model not found.")
return
}
writeJSON(w, http.StatusOK, model)
}

View File

@@ -1,170 +0,0 @@
package openai
import (
"ds2api/internal/toolcall"
"encoding/json"
"fmt"
"strings"
"github.com/google/uuid"
"ds2api/internal/util"
)
func injectToolPrompt(messages []map[string]any, tools []any, policy util.ToolChoicePolicy) ([]map[string]any, []string) {
if policy.IsNone() {
return messages, nil
}
toolSchemas := make([]string, 0, len(tools))
names := make([]string, 0, len(tools))
isAllowed := func(name string) bool {
if strings.TrimSpace(name) == "" {
return false
}
if len(policy.Allowed) == 0 {
return true
}
_, ok := policy.Allowed[name]
return ok
}
for _, t := range tools {
tool, ok := t.(map[string]any)
if !ok {
continue
}
fn, _ := tool["function"].(map[string]any)
if len(fn) == 0 {
fn = tool
}
name, _ := fn["name"].(string)
desc, _ := fn["description"].(string)
schema, _ := fn["parameters"].(map[string]any)
name = strings.TrimSpace(name)
if !isAllowed(name) {
continue
}
names = append(names, name)
if desc == "" {
desc = "No description available"
}
b, _ := json.Marshal(schema)
toolSchemas = append(toolSchemas, fmt.Sprintf("Tool: %s\nDescription: %s\nParameters: %s", name, desc, string(b)))
}
if len(toolSchemas) == 0 {
return messages, names
}
toolPrompt := "You have access to these tools:\n\n" + strings.Join(toolSchemas, "\n\n") + "\n\n" + buildToolCallInstructions(names)
if policy.Mode == util.ToolChoiceRequired {
toolPrompt += "\n7) For this response, you MUST call at least one tool from the allowed list."
}
if policy.Mode == util.ToolChoiceForced && strings.TrimSpace(policy.ForcedName) != "" {
toolPrompt += "\n7) For this response, you MUST call exactly this tool name: " + strings.TrimSpace(policy.ForcedName)
toolPrompt += "\n8) Do not call any other tool."
}
for i := range messages {
if messages[i]["role"] == "system" {
old, _ := messages[i]["content"].(string)
messages[i]["content"] = strings.TrimSpace(old + "\n\n" + toolPrompt)
return messages, names
}
}
messages = append([]map[string]any{{"role": "system", "content": toolPrompt}}, messages...)
return messages, names
}
// buildToolCallInstructions delegates to the shared util implementation.
func buildToolCallInstructions(toolNames []string) string {
return toolcall.BuildToolCallInstructions(toolNames)
}
func formatIncrementalStreamToolCallDeltas(deltas []toolCallDelta, ids map[int]string) []map[string]any {
if len(deltas) == 0 {
return nil
}
out := make([]map[string]any, 0, len(deltas))
for _, d := range deltas {
if d.Name == "" && d.Arguments == "" {
continue
}
callID, ok := ids[d.Index]
if !ok || callID == "" {
callID = "call_" + strings.ReplaceAll(uuid.NewString(), "-", "")
ids[d.Index] = callID
}
item := map[string]any{
"index": d.Index,
"id": callID,
"type": "function",
}
fn := map[string]any{}
if d.Name != "" {
fn["name"] = d.Name
}
if d.Arguments != "" {
fn["arguments"] = d.Arguments
}
if len(fn) > 0 {
item["function"] = fn
}
out = append(out, item)
}
return out
}
func filterIncrementalToolCallDeltasByAllowed(deltas []toolCallDelta, seenNames map[int]string) []toolCallDelta {
if len(deltas) == 0 {
return nil
}
out := make([]toolCallDelta, 0, len(deltas))
for _, d := range deltas {
if d.Name != "" {
if seenNames != nil {
seenNames[d.Index] = d.Name
}
out = append(out, d)
continue
}
if seenNames == nil {
out = append(out, d)
continue
}
name := strings.TrimSpace(seenNames[d.Index])
if name == "" {
continue
}
out = append(out, d)
}
return out
}
func formatFinalStreamToolCallsWithStableIDs(calls []toolcall.ParsedToolCall, ids map[int]string) []map[string]any {
if len(calls) == 0 {
return nil
}
out := make([]map[string]any, 0, len(calls))
for i, c := range calls {
callID := ""
if ids != nil {
callID = strings.TrimSpace(ids[i])
}
if callID == "" {
callID = "call_" + strings.ReplaceAll(uuid.NewString(), "-", "")
if ids != nil {
ids[i] = callID
}
}
args, _ := json.Marshal(c.Input)
out = append(out, map[string]any{
"index": i,
"id": callID,
"type": "function",
"function": map[string]any{
"name": c.Name,
"arguments": string(args),
},
})
}
return out
}


@@ -1,9 +0,0 @@
package openai
func (h *Handler) toolcallFeatureMatchEnabled() bool {
return true
}
func (h *Handler) toolcallEarlyEmitHighConfidence() bool {
return true
}


@@ -1,96 +0,0 @@
package openai
import (
"strings"
"ds2api/internal/prompt"
)
func normalizeOpenAIMessagesForPrompt(raw []any, traceID string) []map[string]any {
_ = traceID
out := make([]map[string]any, 0, len(raw))
for _, item := range raw {
msg, ok := item.(map[string]any)
if !ok {
continue
}
role := strings.ToLower(strings.TrimSpace(asString(msg["role"])))
switch role {
case "assistant":
content := buildAssistantContentForPrompt(msg)
if content == "" {
continue
}
out = append(out, map[string]any{
"role": "assistant",
"content": content,
})
case "tool", "function":
content := buildToolContentForPrompt(msg)
out = append(out, map[string]any{
"role": "tool",
"content": content,
})
case "user", "system", "developer":
out = append(out, map[string]any{
"role": normalizeOpenAIRoleForPrompt(role),
"content": normalizeOpenAIContentForPrompt(msg["content"]),
})
default:
content := normalizeOpenAIContentForPrompt(msg["content"])
if content == "" {
continue
}
if role == "" {
role = "user"
}
out = append(out, map[string]any{
"role": normalizeOpenAIRoleForPrompt(role),
"content": content,
})
}
}
return out
}
func buildAssistantContentForPrompt(msg map[string]any) string {
content := strings.TrimSpace(normalizeOpenAIContentForPrompt(msg["content"]))
toolHistory := prompt.FormatToolCallsForPrompt(msg["tool_calls"])
switch {
case content == "" && toolHistory == "":
return ""
case content == "":
return toolHistory
case toolHistory == "":
return content
default:
return content + "\n\n" + toolHistory
}
}
func buildToolContentForPrompt(msg map[string]any) string {
content := normalizeOpenAIContentForPrompt(msg["content"])
if strings.TrimSpace(content) == "" {
return "null"
}
return content
}
func normalizeOpenAIContentForPrompt(v any) string {
return prompt.NormalizeContent(v)
}
func normalizeOpenAIRoleForPrompt(role string) string {
role = strings.ToLower(strings.TrimSpace(role))
if role == "developer" {
return "system"
}
return role
}
func asString(v any) string {
if s, ok := v.(string); ok {
return s
}
return ""
}


@@ -1,26 +0,0 @@
package openai
import (
"ds2api/internal/deepseek"
"ds2api/internal/util"
)
func buildOpenAIFinalPrompt(messagesRaw []any, toolsRaw any, traceID string, thinkingEnabled bool) (string, []string) {
return buildOpenAIFinalPromptWithPolicy(messagesRaw, toolsRaw, traceID, util.DefaultToolChoicePolicy(), thinkingEnabled)
}
func buildOpenAIFinalPromptWithPolicy(messagesRaw []any, toolsRaw any, traceID string, toolPolicy util.ToolChoicePolicy, thinkingEnabled bool) (string, []string) {
messages := normalizeOpenAIMessagesForPrompt(messagesRaw, traceID)
toolNames := []string{}
if tools, ok := toolsRaw.([]any); ok && len(tools) > 0 {
messages, toolNames = injectToolPrompt(messages, tools, toolPolicy)
}
return deepseek.MessagesPrepareWithThinking(messages, thinkingEnabled), toolNames
}
// BuildPromptForAdapter exposes the OpenAI-compatible prompt building flow so
// other protocol adapters (for example Gemini) can reuse the same tool/history
// normalization logic and remain behavior-compatible with chat/completions.
func BuildPromptForAdapter(messagesRaw []any, toolsRaw any, traceID string, thinkingEnabled bool) (string, []string) {
return buildOpenAIFinalPrompt(messagesRaw, toolsRaw, traceID, thinkingEnabled)
}


@@ -1,210 +0,0 @@
package openai
import (
"testing"
"ds2api/internal/config"
"ds2api/internal/util"
)
func newEmptyStoreForNormalizeTest(t *testing.T) *config.Store {
t.Helper()
t.Setenv("DS2API_CONFIG_JSON", `{}`)
return config.LoadStore()
}
func TestNormalizeOpenAIChatRequest(t *testing.T) {
store := newEmptyStoreForNormalizeTest(t)
req := map[string]any{
"model": "gpt-5-codex",
"messages": []any{
map[string]any{"role": "user", "content": "hello"},
},
"temperature": 0.3,
"stream": true,
}
n, err := normalizeOpenAIChatRequest(store, req, "")
if err != nil {
t.Fatalf("normalize failed: %v", err)
}
if n.ResolvedModel != "deepseek-reasoner" {
t.Fatalf("unexpected resolved model: %s", n.ResolvedModel)
}
if !n.Stream {
t.Fatalf("expected stream=true")
}
if _, ok := n.PassThrough["temperature"]; !ok {
t.Fatalf("expected temperature passthrough")
}
if n.FinalPrompt == "" {
t.Fatalf("expected non-empty final prompt")
}
}
func TestNormalizeOpenAIChatRequestCollectsRefFileIDs(t *testing.T) {
store := newEmptyStoreForNormalizeTest(t)
req := map[string]any{
"model": "gpt-5-codex",
"messages": []any{
map[string]any{
"role": "user",
"content": []any{
map[string]any{"type": "input_text", "text": "hello"},
map[string]any{"type": "input_file", "file_id": "file-msg"},
},
},
},
"attachments": []any{
map[string]any{"file_id": "file-attachment"},
},
"ref_file_ids": []any{"file-top", "file-attachment"},
}
n, err := normalizeOpenAIChatRequest(store, req, "")
if err != nil {
t.Fatalf("normalize failed: %v", err)
}
if len(n.RefFileIDs) != 3 {
t.Fatalf("expected 3 distinct file ids, got %#v", n.RefFileIDs)
}
if n.RefFileIDs[0] != "file-top" || n.RefFileIDs[1] != "file-attachment" || n.RefFileIDs[2] != "file-msg" {
t.Fatalf("unexpected file ids: %#v", n.RefFileIDs)
}
}
func TestNormalizeOpenAIResponsesRequestInput(t *testing.T) {
store := newEmptyStoreForNormalizeTest(t)
req := map[string]any{
"model": "gpt-4o",
"input": "ping",
"instructions": "system",
}
n, err := normalizeOpenAIResponsesRequest(store, req, "")
if err != nil {
t.Fatalf("normalize failed: %v", err)
}
if n.ResolvedModel != "deepseek-chat" {
t.Fatalf("unexpected resolved model: %s", n.ResolvedModel)
}
if len(n.Messages) != 2 {
t.Fatalf("expected 2 normalized messages, got %d", len(n.Messages))
}
}
func TestNormalizeOpenAIResponsesRequestToolChoiceRequired(t *testing.T) {
store := newEmptyStoreForNormalizeTest(t)
req := map[string]any{
"model": "gpt-4o",
"input": "ping",
"tools": []any{
map[string]any{
"type": "function",
"function": map[string]any{
"name": "search",
"parameters": map[string]any{
"type": "object",
},
},
},
},
"tool_choice": "required",
}
n, err := normalizeOpenAIResponsesRequest(store, req, "")
if err != nil {
t.Fatalf("normalize failed: %v", err)
}
if n.ToolChoice.Mode != util.ToolChoiceRequired {
t.Fatalf("expected tool choice mode required, got %q", n.ToolChoice.Mode)
}
if len(n.ToolNames) != 1 || n.ToolNames[0] != "search" {
t.Fatalf("unexpected tool names: %#v", n.ToolNames)
}
}
func TestNormalizeOpenAIResponsesRequestToolChoiceForcedFunction(t *testing.T) {
store := newEmptyStoreForNormalizeTest(t)
req := map[string]any{
"model": "gpt-4o",
"input": "ping",
"tools": []any{
map[string]any{
"type": "function",
"function": map[string]any{
"name": "search",
},
},
map[string]any{
"type": "function",
"function": map[string]any{
"name": "read_file",
},
},
},
"tool_choice": map[string]any{
"type": "function",
"name": "read_file",
},
}
n, err := normalizeOpenAIResponsesRequest(store, req, "")
if err != nil {
t.Fatalf("normalize failed: %v", err)
}
if n.ToolChoice.Mode != util.ToolChoiceForced {
t.Fatalf("expected tool choice mode forced, got %q", n.ToolChoice.Mode)
}
if n.ToolChoice.ForcedName != "read_file" {
t.Fatalf("expected forced tool name read_file, got %q", n.ToolChoice.ForcedName)
}
if len(n.ToolNames) != 1 || n.ToolNames[0] != "read_file" {
t.Fatalf("expected filtered tool names [read_file], got %#v", n.ToolNames)
}
}
func TestNormalizeOpenAIResponsesRequestToolChoiceForcedUndeclaredFails(t *testing.T) {
store := newEmptyStoreForNormalizeTest(t)
req := map[string]any{
"model": "gpt-4o",
"input": "ping",
"tools": []any{
map[string]any{
"type": "function",
"function": map[string]any{
"name": "search",
},
},
},
"tool_choice": map[string]any{
"type": "function",
"name": "read_file",
},
}
if _, err := normalizeOpenAIResponsesRequest(store, req, ""); err == nil {
t.Fatalf("expected forced undeclared tool to fail")
}
}
func TestNormalizeOpenAIResponsesRequestToolChoiceNoneKeepsToolDetectionEnabled(t *testing.T) {
store := newEmptyStoreForNormalizeTest(t)
req := map[string]any{
"model": "gpt-4o",
"input": "ping",
"tools": []any{
map[string]any{
"type": "function",
"function": map[string]any{
"name": "search",
},
},
},
"tool_choice": "none",
}
n, err := normalizeOpenAIResponsesRequest(store, req, "")
if err != nil {
t.Fatalf("normalize failed: %v", err)
}
if n.ToolChoice.Mode != util.ToolChoiceNone {
t.Fatalf("expected tool choice mode none, got %q", n.ToolChoice.Mode)
}
if len(n.ToolNames) == 0 {
t.Fatalf("expected tool detection sentinel when tool_choice=none, got %#v", n.ToolNames)
}
}


@@ -1,463 +0,0 @@
package openai
import (
"strings"
"testing"
)
func TestProcessToolSieveInterceptsXMLToolCallWithoutLeak(t *testing.T) {
var state toolStreamSieveState
// Simulate a model producing XML tool call output chunk by chunk.
chunks := []string{
"<tool_calls>\n",
" <tool_call>\n",
" <tool_name>read_file</tool_name>\n",
` <parameters>{"path":"README.MD"}</parameters>` + "\n",
" </tool_call>\n",
"</tool_calls>",
}
var events []toolStreamEvent
for _, c := range chunks {
events = append(events, processToolSieveChunk(&state, c, []string{"read_file"})...)
}
events = append(events, flushToolSieve(&state, []string{"read_file"})...)
var textContent string
var toolCalls int
for _, evt := range events {
if evt.Content != "" {
textContent += evt.Content
}
toolCalls += len(evt.ToolCalls)
}
if strings.Contains(textContent, "<tool_call") {
t.Fatalf("XML tool call content leaked to text: %q", textContent)
}
if strings.Contains(textContent, "read_file") {
t.Fatalf("tool name leaked to text: %q", textContent)
}
if toolCalls == 0 {
t.Fatal("expected tool calls to be extracted, got none")
}
}
func TestProcessToolSieveXMLWithLeadingText(t *testing.T) {
var state toolStreamSieveState
// Model outputs some prose then an XML tool call.
chunks := []string{
"Let me check the file.\n",
"<tool_calls>\n <tool_call>\n <tool_name>read_file</tool_name>\n",
` <parameters>{"path":"go.mod"}</parameters>` + "\n </tool_call>\n</tool_calls>",
}
var events []toolStreamEvent
for _, c := range chunks {
events = append(events, processToolSieveChunk(&state, c, []string{"read_file"})...)
}
events = append(events, flushToolSieve(&state, []string{"read_file"})...)
var textContent string
var toolCalls int
for _, evt := range events {
if evt.Content != "" {
textContent += evt.Content
}
toolCalls += len(evt.ToolCalls)
}
// Leading text should be emitted.
if !strings.Contains(textContent, "Let me check the file.") {
t.Fatalf("expected leading text to be emitted, got %q", textContent)
}
// The XML itself should NOT leak.
if strings.Contains(textContent, "<tool_call") {
t.Fatalf("XML tool call content leaked to text: %q", textContent)
}
if toolCalls == 0 {
t.Fatal("expected tool calls to be extracted, got none")
}
}
func TestProcessToolSievePassesThroughNonToolXMLBlock(t *testing.T) {
var state toolStreamSieveState
chunk := `<tool_call><title>sample XML</title><body>plain text xml payload</body></tool_call>`
events := processToolSieveChunk(&state, chunk, []string{"read_file"})
events = append(events, flushToolSieve(&state, []string{"read_file"})...)
var textContent strings.Builder
toolCalls := 0
for _, evt := range events {
textContent.WriteString(evt.Content)
toolCalls += len(evt.ToolCalls)
}
if toolCalls != 0 {
t.Fatalf("expected no tool calls for plain XML payload, got %d events=%#v", toolCalls, events)
}
if textContent.String() != chunk {
t.Fatalf("expected XML payload to pass through unchanged, got %q", textContent.String())
}
}
func TestProcessToolSieveNonToolXMLKeepsSuffixForToolParsing(t *testing.T) {
var state toolStreamSieveState
chunk := `<tool_call><title>plain xml</title></tool_call><invoke name="read_file"><parameters>{"path":"README.MD"}</parameters></invoke>`
events := processToolSieveChunk(&state, chunk, []string{"read_file"})
events = append(events, flushToolSieve(&state, []string{"read_file"})...)
var textContent strings.Builder
toolCalls := 0
for _, evt := range events {
textContent.WriteString(evt.Content)
toolCalls += len(evt.ToolCalls)
}
if !strings.Contains(textContent.String(), `<tool_call><title>plain xml</title></tool_call>`) {
t.Fatalf("expected leading non-tool XML to be preserved, got %q", textContent.String())
}
if strings.Contains(textContent.String(), `<invoke name="read_file">`) {
t.Fatalf("expected invoke tool XML to be intercepted, got %q", textContent.String())
}
if toolCalls != 1 {
t.Fatalf("expected exactly one parsed tool call from suffix, got %d events=%#v", toolCalls, events)
}
}
func TestProcessToolSievePassesThroughMalformedExecutableXMLBlock(t *testing.T) {
var state toolStreamSieveState
chunk := `<tool_call><parameters>{"path":"README.md"}</parameters></tool_call>`
events := processToolSieveChunk(&state, chunk, []string{"read_file"})
events = append(events, flushToolSieve(&state, []string{"read_file"})...)
var textContent strings.Builder
toolCalls := 0
for _, evt := range events {
textContent.WriteString(evt.Content)
toolCalls += len(evt.ToolCalls)
}
if toolCalls != 0 {
t.Fatalf("expected malformed executable-looking XML to stay text, got %d events=%#v", toolCalls, events)
}
if textContent.String() != chunk {
t.Fatalf("expected malformed executable-looking XML to pass through unchanged, got %q", textContent.String())
}
}
func TestProcessToolSievePassesThroughFencedXMLToolCallExamples(t *testing.T) {
var state toolStreamSieveState
chunks := []string{
"Before first example.\n```",
"xml\n<tool_call><tool_name>read_file</tool_name><parameters>{\"path\":\"README.md\"}</parameters></tool_call>\n```\n",
"Between examples.\n```xml\n",
"<tool_call><tool_name>search</tool_name><parameters>{\"q\":\"golang\"}</parameters></tool_call>\n",
"```\nAfter examples.",
}
input := strings.Join(chunks, "")
var events []toolStreamEvent
for _, c := range chunks {
events = append(events, processToolSieveChunk(&state, c, []string{"read_file", "search"})...)
}
events = append(events, flushToolSieve(&state, []string{"read_file", "search"})...)
var textContent strings.Builder
toolCalls := 0
for _, evt := range events {
if evt.Content != "" {
textContent.WriteString(evt.Content)
}
toolCalls += len(evt.ToolCalls)
}
if toolCalls != 0 {
t.Fatalf("expected fenced XML examples to stay text, got %d tool calls events=%#v", toolCalls, events)
}
if textContent.String() != input {
t.Fatalf("expected fenced XML examples to pass through unchanged, got %q", textContent.String())
}
}
func TestProcessToolSieveKeepsPartialXMLTagInsideFencedExample(t *testing.T) {
var state toolStreamSieveState
chunks := []string{
"Example:\n```xml\n<tool_ca",
"ll><tool_name>read_file</tool_name><parameters>{\"path\":\"README.md\"}</parameters></tool_call>\n```\n",
"Done.",
}
input := strings.Join(chunks, "")
var events []toolStreamEvent
for _, c := range chunks {
events = append(events, processToolSieveChunk(&state, c, []string{"read_file"})...)
}
events = append(events, flushToolSieve(&state, []string{"read_file"})...)
var textContent strings.Builder
toolCalls := 0
for _, evt := range events {
if evt.Content != "" {
textContent.WriteString(evt.Content)
}
toolCalls += len(evt.ToolCalls)
}
if toolCalls != 0 {
t.Fatalf("expected partial fenced XML to stay text, got %d tool calls events=%#v", toolCalls, events)
}
if textContent.String() != input {
t.Fatalf("expected partial fenced XML to pass through unchanged, got %q", textContent.String())
}
}
func TestProcessToolSievePartialXMLTagHeldBack(t *testing.T) {
var state toolStreamSieveState
// Chunk ends with a partial XML tool tag.
events := processToolSieveChunk(&state, "Hello <tool_ca", []string{"read_file"})
var textContent string
for _, evt := range events {
textContent += evt.Content
}
// "Hello " should be emitted, but "<tool_ca" should be held back.
if strings.Contains(textContent, "<tool_ca") {
t.Fatalf("partial XML tag should not be emitted, got %q", textContent)
}
if !strings.Contains(textContent, "Hello") {
t.Fatalf("expected 'Hello' text to be emitted, got %q", textContent)
}
}
func TestFindToolSegmentStartDetectsXMLToolCalls(t *testing.T) {
cases := []struct {
name string
input string
want int
}{
{"tool_calls_tag", "some text <tool_calls>\n", 10},
{"tool_call_tag", "prefix <tool_call>\n", 7},
{"invoke_tag", "text <invoke name=\"foo\">body</invoke>", 5},
{"xml_inside_code_fence", "```xml\n<tool_call><tool_name>read_file</tool_name></tool_call>\n```", -1},
{"function_call_tag", "<function_call name=\"foo\">body</function_call>", 0},
{"no_xml", "just plain text", -1},
{"gemini_json_no_detect", `some text {"functionCall":{"name":"search"}}`, -1},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := findToolSegmentStart(nil, tc.input)
if got != tc.want {
t.Fatalf("findToolSegmentStart(%q) = %d, want %d", tc.input, got, tc.want)
}
})
}
}
func TestFindPartialXMLToolTagStart(t *testing.T) {
cases := []struct {
name string
input string
want int
}{
{"partial_tool_call", "Hello <tool_ca", 6},
{"partial_invoke", "Prefix <inv", 7},
{"partial_lt_only", "Text <", 5},
{"complete_tag", "Text <tool_call>done", -1},
{"no_lt", "plain text", -1},
{"closed_lt", "a < b > c", -1},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got := findPartialXMLToolTagStart(tc.input)
if got != tc.want {
t.Fatalf("findPartialXMLToolTagStart(%q) = %d, want %d", tc.input, got, tc.want)
}
})
}
}
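The hold-back rule those cases exercise can be sketched standalone. This simplification omits the prefix check the real matcher performs against known tool-tag names (so it would also hold back unrelated partial tags); it only shows the core "trailing `<` with no closing `>`" logic.

```go
package main

import (
	"fmt"
	"strings"
)

// findPartialTagStart returns the index of a trailing '<' that may begin a
// tag still being streamed, or -1 when the buffer is safe to emit whole.
func findPartialTagStart(s string) int {
	i := strings.LastIndexByte(s, '<')
	if i == -1 {
		return -1
	}
	// A '>' after the '<' means the tag is complete; nothing to hold back.
	if strings.IndexByte(s[i:], '>') != -1 {
		return -1
	}
	return i
}

func main() {
	fmt.Println(findPartialTagStart("Hello <tool_ca")) // 6
	fmt.Println(findPartialTagStart("Text <tool_call>done"))
	fmt.Println(findPartialTagStart("a < b > c"))
}
```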
func TestHasOpenXMLToolTag(t *testing.T) {
if !hasOpenXMLToolTag("<tool_call>\n<tool_name>foo</tool_name>") {
t.Fatal("should detect open XML tool tag without closing tag")
}
if hasOpenXMLToolTag("<tool_call>\n<tool_name>foo</tool_name></tool_call>") {
t.Fatal("should return false when closing tag is present")
}
if hasOpenXMLToolTag("plain text without any XML") {
t.Fatal("should return false for plain text")
}
}
// Test the EXACT scenario the user reports: token-by-token streaming where
// <tool_calls> tag arrives in small pieces.
func TestProcessToolSieveTokenByTokenXMLNoLeak(t *testing.T) {
var state toolStreamSieveState
// Simulate DeepSeek model generating tokens one at a time.
chunks := []string{
"<",
"tool",
"_calls",
">\n",
" <",
"tool",
"_call",
">\n",
" <",
"tool",
"_name",
">",
"read",
"_file",
"</",
"tool",
"_name",
">\n",
" <",
"parameters",
">",
`{"path"`,
`: "README.MD"`,
`}`,
"</",
"parameters",
">\n",
" </",
"tool",
"_call",
">\n",
"</",
"tool",
"_calls",
">",
}
var events []toolStreamEvent
for _, c := range chunks {
events = append(events, processToolSieveChunk(&state, c, []string{"read_file"})...)
}
events = append(events, flushToolSieve(&state, []string{"read_file"})...)
var textContent string
var toolCalls int
for _, evt := range events {
if evt.Content != "" {
textContent += evt.Content
}
toolCalls += len(evt.ToolCalls)
}
if strings.Contains(textContent, "<tool_call") {
t.Fatalf("XML tool call content leaked to text in token-by-token mode: %q", textContent)
}
if strings.Contains(textContent, "tool_calls>") {
t.Fatalf("closing tag fragment leaked to text: %q", textContent)
}
if strings.Contains(textContent, "read_file") {
t.Fatalf("tool name leaked to text: %q", textContent)
}
if toolCalls == 0 {
t.Fatal("expected tool calls to be extracted, got none")
}
}
// Test that flushToolSieve on incomplete XML falls back to raw text.
func TestFlushToolSieveIncompleteXMLFallsBackToText(t *testing.T) {
var state toolStreamSieveState
// XML block starts but stream ends before completion.
chunks := []string{
"<tool_calls>\n",
" <tool_call>\n",
" <tool_name>read_file</tool_name>\n",
}
var events []toolStreamEvent
for _, c := range chunks {
events = append(events, processToolSieveChunk(&state, c, []string{"read_file"})...)
}
// Stream ends abruptly; flush should fall back to emitting the buffered chunks as raw text rather than dropping them.
events = append(events, flushToolSieve(&state, []string{"read_file"})...)
var textContent string
for _, evt := range events {
if evt.Content != "" {
textContent += evt.Content
}
}
if textContent != strings.Join(chunks, "") {
t.Fatalf("expected incomplete XML to fall back to raw text, got %q", textContent)
}
}
// Test that the opening tag "<tool_calls>\n " is NOT emitted as text content.
func TestOpeningXMLTagNotLeakedAsContent(t *testing.T) {
var state toolStreamSieveState
// First chunk is the opening tag - should be held, not emitted.
evts1 := processToolSieveChunk(&state, "<tool_calls>\n ", []string{"read_file"})
for _, evt := range evts1 {
if strings.Contains(evt.Content, "<tool_calls>") {
t.Fatalf("opening tag leaked on first chunk: %q", evt.Content)
}
}
// Remaining content arrives.
evts2 := processToolSieveChunk(&state, "<tool_call>\n <tool_name>read_file</tool_name>\n <parameters>{\"path\":\"README.MD\"}</parameters>\n </tool_call>\n</tool_calls>", []string{"read_file"})
evts2 = append(evts2, flushToolSieve(&state, []string{"read_file"})...)
var textContent string
var toolCalls int
allEvents := append(evts1, evts2...)
for _, evt := range allEvents {
if evt.Content != "" {
textContent += evt.Content
}
toolCalls += len(evt.ToolCalls)
}
if strings.Contains(textContent, "<tool_call") {
t.Fatalf("XML content leaked: %q", textContent)
}
if toolCalls == 0 {
t.Fatal("expected tool calls to be extracted")
}
}
func TestProcessToolSieveFallsBackToRawAttemptCompletion(t *testing.T) {
var state toolStreamSieveState
// Simulate an agent outputting attempt_completion XML tag.
// If it does not parse as a tool call, it should fall back to raw text.
chunks := []string{
"Done with task.\n",
"<attempt_completion>\n",
" <result>Here is the answer</result>\n",
"</attempt_completion>",
}
var events []toolStreamEvent
for _, c := range chunks {
events = append(events, processToolSieveChunk(&state, c, []string{"attempt_completion"})...)
}
events = append(events, flushToolSieve(&state, []string{"attempt_completion"})...)
var textContent string
for _, evt := range events {
if evt.Content != "" {
textContent += evt.Content
}
}
if !strings.Contains(textContent, "Done with task.\n") {
t.Fatalf("expected leading text to be emitted, got %q", textContent)
}
if textContent != strings.Join(chunks, "") {
t.Fatalf("expected agent XML to fall back to raw text, got %q", textContent)
}
}


@@ -1,15 +0,0 @@
package openai
import "net/http"
func writeUpstreamEmptyOutputError(w http.ResponseWriter, text string, contentFilter bool) bool {
if text != "" {
return false
}
if contentFilter {
writeOpenAIErrorWithCode(w, http.StatusBadRequest, "Upstream content filtered the response and returned no output.", "content_filter")
return true
}
writeOpenAIErrorWithCode(w, http.StatusTooManyRequests, "Upstream model returned empty output.", "upstream_empty_output")
return true
}
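The status mapping above, as a self-contained sketch (the function name is illustrative): a content-filtered empty becomes 400 with code `content_filter`, any other empty becomes 429 with code `upstream_empty_output`, and non-empty output is passed through.

```go
package main

import (
	"fmt"
	"net/http"
)

// classifyEmptyOutput mirrors the mapping above, returning (0, "") when the
// upstream produced text and an HTTP status plus error code otherwise.
func classifyEmptyOutput(text string, contentFilter bool) (int, string) {
	if text != "" {
		return 0, ""
	}
	if contentFilter {
		return http.StatusBadRequest, "content_filter"
	}
	return http.StatusTooManyRequests, "upstream_empty_output"
}

func main() {
	code, reason := classifyEmptyOutput("", false)
	fmt.Println(code, reason) // 429 upstream_empty_output
}
```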


@@ -1,83 +0,0 @@
package openai
import (
"ds2api/internal/auth"
"net/http/httptest"
"testing"
"time"
)
func TestIsVercelStreamPrepareRequest(t *testing.T) {
req := httptest.NewRequest("POST", "/v1/chat/completions?__stream_prepare=1", nil)
if !isVercelStreamPrepareRequest(req) {
t.Fatalf("expected prepare request to be detected")
}
req2 := httptest.NewRequest("POST", "/v1/chat/completions", nil)
if isVercelStreamPrepareRequest(req2) {
t.Fatalf("expected non-prepare request")
}
}
func TestIsVercelStreamReleaseRequest(t *testing.T) {
req := httptest.NewRequest("POST", "/v1/chat/completions?__stream_release=1", nil)
if !isVercelStreamReleaseRequest(req) {
t.Fatalf("expected release request to be detected")
}
req2 := httptest.NewRequest("POST", "/v1/chat/completions", nil)
if isVercelStreamReleaseRequest(req2) {
t.Fatalf("expected non-release request")
}
}
func TestVercelInternalSecret(t *testing.T) {
t.Run("prefer explicit secret", func(t *testing.T) {
t.Setenv("DS2API_VERCEL_INTERNAL_SECRET", "stream-secret")
t.Setenv("DS2API_ADMIN_KEY", "admin-fallback")
if got := vercelInternalSecret(); got != "stream-secret" {
t.Fatalf("expected explicit secret, got %q", got)
}
})
t.Run("fallback to admin key", func(t *testing.T) {
t.Setenv("DS2API_VERCEL_INTERNAL_SECRET", "")
t.Setenv("DS2API_ADMIN_KEY", "admin-fallback")
if got := vercelInternalSecret(); got != "admin-fallback" {
t.Fatalf("expected admin key fallback, got %q", got)
}
})
t.Run("default admin when env missing", func(t *testing.T) {
t.Setenv("DS2API_VERCEL_INTERNAL_SECRET", "")
t.Setenv("DS2API_ADMIN_KEY", "")
if got := vercelInternalSecret(); got != "admin" {
t.Fatalf("expected default admin fallback, got %q", got)
}
})
}
func TestStreamLeaseLifecycle(t *testing.T) {
h := &Handler{}
leaseID := h.holdStreamLease(&auth.RequestAuth{UseConfigToken: false})
if leaseID == "" {
t.Fatalf("expected non-empty lease id")
}
if ok := h.releaseStreamLease(leaseID); !ok {
t.Fatalf("expected lease release success")
}
if ok := h.releaseStreamLease(leaseID); ok {
t.Fatalf("expected duplicate release to fail")
}
}
func TestStreamLeaseTTL(t *testing.T) {
t.Setenv("DS2API_VERCEL_STREAM_LEASE_TTL_SECONDS", "120")
if got := streamLeaseTTL(); got != 120*time.Second {
t.Fatalf("expected ttl=120s, got %v", got)
}
t.Setenv("DS2API_VERCEL_STREAM_LEASE_TTL_SECONDS", "invalid")
if got := streamLeaseTTL(); got != 15*time.Minute {
t.Fatalf("expected default ttl on invalid value, got %v", got)
}
}


@@ -1,55 +0,0 @@
package admin
import (
"github.com/go-chi/chi/v5"
)
type Handler struct {
Store ConfigStore
Pool PoolController
DS DeepSeekCaller
OpenAI OpenAIChatCaller
}
func RegisterRoutes(r chi.Router, h *Handler) {
r.Post("/login", h.login)
r.Get("/verify", h.verify)
r.Group(func(pr chi.Router) {
pr.Use(h.requireAdmin)
pr.Get("/vercel/config", h.getVercelConfig)
pr.Get("/config", h.getConfig)
pr.Post("/config", h.updateConfig)
pr.Get("/settings", h.getSettings)
pr.Put("/settings", h.updateSettings)
pr.Post("/settings/password", h.updateSettingsPassword)
pr.Post("/config/import", h.configImport)
pr.Get("/config/export", h.configExport)
pr.Post("/keys", h.addKey)
pr.Delete("/keys/{key}", h.deleteKey)
pr.Get("/proxies", h.listProxies)
pr.Post("/proxies", h.addProxy)
pr.Put("/proxies/{proxyID}", h.updateProxy)
pr.Delete("/proxies/{proxyID}", h.deleteProxy)
pr.Post("/proxies/test", h.testProxy)
pr.Get("/accounts", h.listAccounts)
pr.Post("/accounts", h.addAccount)
pr.Delete("/accounts/{identifier}", h.deleteAccount)
pr.Put("/accounts/{identifier}/proxy", h.updateAccountProxy)
pr.Get("/queue/status", h.queueStatus)
pr.Post("/accounts/test", h.testSingleAccount)
pr.Post("/accounts/test-all", h.testAllAccounts)
pr.Post("/accounts/sessions/delete-all", h.deleteAllSessions)
pr.Post("/import", h.batchImport)
pr.Post("/test", h.testAPI)
pr.Post("/dev/raw-samples/capture", h.captureRawSample)
pr.Get("/dev/raw-samples/query", h.queryRawSampleCaptures)
pr.Post("/dev/raw-samples/save", h.saveRawSampleFromCaptures)
pr.Post("/vercel/sync", h.syncVercel)
pr.Get("/vercel/status", h.vercelStatus)
pr.Post("/vercel/status", h.vercelStatus)
pr.Get("/export", h.exportConfig)
pr.Get("/dev/captures", h.getDevCaptures)
pr.Delete("/dev/captures", h.clearDevCaptures)
pr.Get("/version", h.getVersion)
})
}


@@ -1,53 +0,0 @@
package admin
import (
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"strings"
"testing"
)
func TestListAccountsPageSizeCapIs5000(t *testing.T) {
accounts := make([]string, 0, 150)
for i := range 150 {
accounts = append(accounts, fmt.Sprintf(`{"email":"u%d@example.com","password":"pwd"}`, i))
}
raw := fmt.Sprintf(`{"accounts":[%s]}`, strings.Join(accounts, ","))
router := newHTTPAdminHarness(t, raw, &testingDSMock{})
rec := httptest.NewRecorder()
router.ServeHTTP(rec, adminReq(http.MethodGet, "/accounts?page=1&page_size=200", nil))
if rec.Code != http.StatusOK {
t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
}
var payload map[string]any
if err := json.Unmarshal(rec.Body.Bytes(), &payload); err != nil {
t.Fatalf("decode response: %v", err)
}
items, _ := payload["items"].([]any)
if len(items) != 150 {
t.Fatalf("expected all 150 accounts with page_size=200, got %d", len(items))
}
if ps, _ := payload["page_size"].(float64); ps != 200 {
t.Fatalf("expected page_size=200 in response, got %v", payload["page_size"])
}
}
func TestListAccountsPageSizeAbove5000ClampedTo5000(t *testing.T) {
router := newHTTPAdminHarness(t, `{"accounts":[{"email":"u@example.com","password":"pwd"}]}`, &testingDSMock{})
rec := httptest.NewRecorder()
router.ServeHTTP(rec, adminReq(http.MethodGet, "/accounts?page=1&page_size=9999", nil))
if rec.Code != http.StatusOK {
t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
}
var payload map[string]any
if err := json.Unmarshal(rec.Body.Bytes(), &payload); err != nil {
t.Fatalf("decode response: %v", err)
}
if ps, _ := payload["page_size"].(float64); ps != 5000 {
t.Fatalf("expected page_size clamped to 5000, got %v", payload["page_size"])
}
}


@@ -1,183 +0,0 @@
package admin
import (
"fmt"
"net/http"
"strconv"
"strings"
"ds2api/internal/config"
"ds2api/internal/util"
)
// writeJSON and intFrom are package-internal aliases for the shared util versions.
var writeJSON = util.WriteJSON
var intFrom = util.IntFrom
func reverseAccounts(a []config.Account) {
for i, j := 0, len(a)-1; i < j; i, j = i+1, j-1 {
a[i], a[j] = a[j], a[i]
}
}
func intFromQuery(r *http.Request, key string, d int) int {
v := r.URL.Query().Get(key)
if v == "" {
return d
}
n, err := strconv.Atoi(v)
if err != nil {
return d
}
return n
}
func nilIfEmpty(s string) any {
if s == "" {
return nil
}
return s
}
func nilIfZero(v int64) any {
if v == 0 {
return nil
}
return v
}
func toStringSlice(v any) ([]string, bool) {
arr, ok := v.([]any)
if !ok {
return nil, false
}
out := make([]string, 0, len(arr))
for _, item := range arr {
out = append(out, strings.TrimSpace(fmt.Sprintf("%v", item)))
}
return out, true
}
func toAccount(m map[string]any) config.Account {
email := fieldString(m, "email")
mobile := config.NormalizeMobileForStorage(fieldString(m, "mobile"))
return config.Account{
Email: email,
Mobile: mobile,
Password: fieldString(m, "password"),
ProxyID: fieldString(m, "proxy_id"),
}
}
func fieldString(m map[string]any, key string) string {
v, ok := m[key]
if !ok || v == nil {
return ""
}
return strings.TrimSpace(fmt.Sprintf("%v", v))
}
func statusOr(v int, d int) int {
if v == 0 {
return d
}
return v
}
func accountMatchesIdentifier(acc config.Account, identifier string) bool {
id := strings.TrimSpace(identifier)
if id == "" {
return false
}
if strings.TrimSpace(acc.Email) == id {
return true
}
if mobileKey := config.CanonicalMobileKey(id); mobileKey != "" && mobileKey == config.CanonicalMobileKey(acc.Mobile) {
return true
}
return acc.Identifier() == id
}
func normalizeAccountForStorage(acc config.Account) config.Account {
acc.Email = strings.TrimSpace(acc.Email)
acc.Mobile = config.NormalizeMobileForStorage(acc.Mobile)
acc.ProxyID = strings.TrimSpace(acc.ProxyID)
return acc
}
func toProxy(m map[string]any) config.Proxy {
return config.NormalizeProxy(config.Proxy{
ID: fieldString(m, "id"),
Name: fieldString(m, "name"),
Type: fieldString(m, "type"),
Host: fieldString(m, "host"),
Port: intFrom(m["port"]),
Username: fieldString(m, "username"),
Password: fieldString(m, "password"),
})
}
func findProxyByID(c config.Config, proxyID string) (config.Proxy, bool) {
id := strings.TrimSpace(proxyID)
if id == "" {
return config.Proxy{}, false
}
for _, proxy := range c.Proxies {
proxy = config.NormalizeProxy(proxy)
if proxy.ID == id {
return proxy, true
}
}
return config.Proxy{}, false
}
func accountDedupeKey(acc config.Account) string {
if email := strings.TrimSpace(acc.Email); email != "" {
return "email:" + email
}
if mobile := config.CanonicalMobileKey(acc.Mobile); mobile != "" {
return "mobile:" + mobile
}
if id := strings.TrimSpace(acc.Identifier()); id != "" {
return "id:" + id
}
return ""
}
func normalizeAndDedupeAccounts(accounts []config.Account) []config.Account {
if len(accounts) == 0 {
return nil
}
out := make([]config.Account, 0, len(accounts))
seen := make(map[string]struct{}, len(accounts))
for _, acc := range accounts {
acc = normalizeAccountForStorage(acc)
key := accountDedupeKey(acc)
if key == "" {
continue
}
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
out = append(out, acc)
}
return out
}
func findAccountByIdentifier(store ConfigStore, identifier string) (config.Account, bool) {
id := strings.TrimSpace(identifier)
if id == "" {
return config.Account{}, false
}
if acc, ok := store.FindAccount(id); ok {
return acc, true
}
accounts := store.Snapshot().Accounts
for _, acc := range accounts {
if accountMatchesIdentifier(acc, id) {
return acc, true
}
}
return config.Account{}, false
}
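`accountDedupeKey` gives email precedence over mobile, and mobile over the opaque identifier, so `normalizeAndDedupeAccounts` collapses duplicates deterministically. A standalone sketch of that precedence, using a local `account` stand-in (the real code also canonicalizes mobiles via `config.CanonicalMobileKey`, omitted here):

```go
package main

import (
	"fmt"
	"strings"
)

// account is a local stand-in for config.Account.
type account struct {
	Email  string
	Mobile string
	ID     string
}

// dedupeKey mirrors accountDedupeKey's precedence: email, then mobile,
// then the opaque identifier; an empty key means "drop this account".
func dedupeKey(acc account) string {
	if email := strings.TrimSpace(acc.Email); email != "" {
		return "email:" + email
	}
	if mobile := strings.TrimSpace(acc.Mobile); mobile != "" {
		return "mobile:" + mobile
	}
	if id := strings.TrimSpace(acc.ID); id != "" {
		return "id:" + id
	}
	return ""
}

func main() {
	fmt.Println(dedupeKey(account{Email: " a@b.c ", Mobile: "138"})) // email:a@b.c
	fmt.Println(dedupeKey(account{Mobile: " 138 "}))                 // mobile:138
	fmt.Println(dedupeKey(account{}) == "")                          // true
}
```

Because the key is computed after trimming, two entries that differ only in surrounding whitespace collapse to one.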


@@ -0,0 +1,769 @@
package chathistory
import (
"encoding/json"
"errors"
"fmt"
"os"
"path/filepath"
"sort"
"strings"
"sync"
"time"
"github.com/google/uuid"
"ds2api/internal/config"
)
const (
FileVersion = 2
DisabledLimit = 0
DefaultLimit = 20
MaxLimit = 50
defaultPreviewAt = 160
)
var allowedLimits = map[int]struct{}{
DisabledLimit: {},
10: {},
20: {},
50: {},
}
var ErrDisabled = errors.New("chat history disabled")
type Entry struct {
ID string `json:"id"`
Revision int64 `json:"revision"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
CompletedAt int64 `json:"completed_at,omitempty"`
Status string `json:"status"`
CallerID string `json:"caller_id,omitempty"`
AccountID string `json:"account_id,omitempty"`
Model string `json:"model,omitempty"`
Stream bool `json:"stream"`
UserInput string `json:"user_input,omitempty"`
Messages []Message `json:"messages,omitempty"`
HistoryText string `json:"history_text,omitempty"`
FinalPrompt string `json:"final_prompt,omitempty"`
ReasoningContent string `json:"reasoning_content,omitempty"`
Content string `json:"content,omitempty"`
Error string `json:"error,omitempty"`
StatusCode int `json:"status_code,omitempty"`
ElapsedMs int64 `json:"elapsed_ms,omitempty"`
FinishReason string `json:"finish_reason,omitempty"`
Usage map[string]any `json:"usage,omitempty"`
}
type Message struct {
Role string `json:"role"`
Content string `json:"content"`
}
type SummaryEntry struct {
ID string `json:"id"`
Revision int64 `json:"revision"`
CreatedAt int64 `json:"created_at"`
UpdatedAt int64 `json:"updated_at"`
CompletedAt int64 `json:"completed_at,omitempty"`
Status string `json:"status"`
CallerID string `json:"caller_id,omitempty"`
AccountID string `json:"account_id,omitempty"`
Model string `json:"model,omitempty"`
Stream bool `json:"stream"`
UserInput string `json:"user_input,omitempty"`
Preview string `json:"preview,omitempty"`
StatusCode int `json:"status_code,omitempty"`
ElapsedMs int64 `json:"elapsed_ms,omitempty"`
FinishReason string `json:"finish_reason,omitempty"`
DetailRevision int64 `json:"detail_revision"`
}
type File struct {
Version int `json:"version"`
Limit int `json:"limit"`
Revision int64 `json:"revision"`
Items []SummaryEntry `json:"items"`
}
type StartParams struct {
CallerID string
AccountID string
Model string
Stream bool
UserInput string
Messages []Message
HistoryText string
FinalPrompt string
}
type UpdateParams struct {
Status string
ReasoningContent string
Content string
Error string
StatusCode int
ElapsedMs int64
FinishReason string
Usage map[string]any
Completed bool
}
type detailEnvelope struct {
Version int `json:"version"`
Item Entry `json:"item"`
}
type legacyFile struct {
Version int `json:"version"`
Limit int `json:"limit"`
Items []Entry `json:"items"`
}
type legacyProbe struct {
Items []map[string]json.RawMessage `json:"items"`
}
type Store struct {
mu sync.Mutex
path string
detailDir string
state File
details map[string]Entry
dirty map[string]struct{}
deleted map[string]struct{}
err error
}
func New(path string) *Store {
s := &Store{
path: strings.TrimSpace(path),
detailDir: strings.TrimSpace(path) + ".d",
state: File{
Version: FileVersion,
Limit: DefaultLimit,
Revision: 0,
Items: []SummaryEntry{},
},
details: map[string]Entry{},
dirty: map[string]struct{}{},
deleted: map[string]struct{}{},
}
s.mu.Lock()
defer s.mu.Unlock()
s.err = s.loadLocked()
return s
}
func (s *Store) Path() string {
if s == nil {
return ""
}
return s.path
}
func (s *Store) DetailDir() string {
if s == nil {
return ""
}
return s.detailDir
}
func (s *Store) Err() error {
if s == nil {
return errors.New("chat history store is nil")
}
s.mu.Lock()
defer s.mu.Unlock()
return s.err
}
func (s *Store) Snapshot() (File, error) {
if s == nil {
return File{}, errors.New("chat history store is nil")
}
s.mu.Lock()
defer s.mu.Unlock()
if s.err != nil {
return File{}, s.err
}
return cloneFile(s.state), nil
}
func (s *Store) Enabled() bool {
if s == nil {
return false
}
s.mu.Lock()
defer s.mu.Unlock()
if s.err != nil {
return false
}
return s.state.Limit != DisabledLimit
}
func (s *Store) Get(id string) (Entry, error) {
if s == nil {
return Entry{}, errors.New("chat history store is nil")
}
s.mu.Lock()
defer s.mu.Unlock()
if s.err != nil {
return Entry{}, s.err
}
item, ok := s.details[strings.TrimSpace(id)]
if !ok {
return Entry{}, errors.New("chat history entry not found")
}
return cloneEntry(item), nil
}
func (s *Store) Start(params StartParams) (Entry, error) {
if s == nil {
return Entry{}, errors.New("chat history store is nil")
}
s.mu.Lock()
defer s.mu.Unlock()
if s.err != nil {
return Entry{}, s.err
}
if s.state.Limit == DisabledLimit {
return Entry{}, ErrDisabled
}
now := time.Now().UnixMilli()
revision := s.nextRevisionLocked()
entry := Entry{
ID: "chat_" + strings.ReplaceAll(uuid.NewString(), "-", ""),
Revision: revision,
CreatedAt: now,
UpdatedAt: now,
Status: "streaming",
CallerID: strings.TrimSpace(params.CallerID),
AccountID: strings.TrimSpace(params.AccountID),
Model: strings.TrimSpace(params.Model),
Stream: params.Stream,
UserInput: strings.TrimSpace(params.UserInput),
Messages: cloneMessages(params.Messages),
HistoryText: params.HistoryText,
FinalPrompt: strings.TrimSpace(params.FinalPrompt),
}
s.details[entry.ID] = entry
s.markDetailDirtyLocked(entry.ID)
s.rebuildIndexLocked()
if err := s.saveLocked(); err != nil {
return cloneEntry(entry), err
}
return cloneEntry(entry), nil
}
func (s *Store) Update(id string, params UpdateParams) (Entry, error) {
if s == nil {
return Entry{}, errors.New("chat history store is nil")
}
s.mu.Lock()
defer s.mu.Unlock()
if s.err != nil {
return Entry{}, s.err
}
target := strings.TrimSpace(id)
if target == "" {
return Entry{}, errors.New("history id is required")
}
item, ok := s.details[target]
if !ok {
return Entry{}, errors.New("chat history entry not found")
}
now := time.Now().UnixMilli()
item.Revision = s.nextRevisionLocked()
item.UpdatedAt = now
if params.Status != "" {
item.Status = params.Status
}
item.ReasoningContent = params.ReasoningContent
item.Content = params.Content
item.Error = strings.TrimSpace(params.Error)
item.StatusCode = params.StatusCode
item.ElapsedMs = params.ElapsedMs
item.FinishReason = strings.TrimSpace(params.FinishReason)
if params.Usage != nil {
item.Usage = cloneMap(params.Usage)
}
if params.Completed {
item.CompletedAt = now
}
s.details[target] = item
s.markDetailDirtyLocked(target)
s.rebuildIndexLocked()
if err := s.saveLocked(); err != nil {
return Entry{}, err
}
return cloneEntry(item), nil
}
func (s *Store) Delete(id string) error {
if s == nil {
return errors.New("chat history store is nil")
}
s.mu.Lock()
defer s.mu.Unlock()
if s.err != nil {
return s.err
}
target := strings.TrimSpace(id)
if target == "" {
return errors.New("history id is required")
}
if _, ok := s.details[target]; !ok {
return errors.New("chat history entry not found")
}
s.markDetailDeletedLocked(target)
delete(s.details, target)
s.nextRevisionLocked()
s.rebuildIndexLocked()
if err := s.saveLocked(); err != nil {
return err
}
return nil
}
func (s *Store) Clear() error {
if s == nil {
return errors.New("chat history store is nil")
}
s.mu.Lock()
defer s.mu.Unlock()
if s.err != nil {
return s.err
}
for id := range s.details {
s.markDetailDeletedLocked(id)
}
s.details = map[string]Entry{}
s.nextRevisionLocked()
s.rebuildIndexLocked()
if err := s.saveLocked(); err != nil {
return err
}
return nil
}
func (s *Store) SetLimit(limit int) (File, error) {
if s == nil {
return File{}, errors.New("chat history store is nil")
}
s.mu.Lock()
defer s.mu.Unlock()
if s.err != nil {
return File{}, s.err
}
if !isAllowedLimit(limit) {
return File{}, fmt.Errorf("unsupported chat history limit: %d", limit)
}
s.state.Limit = limit
s.nextRevisionLocked()
s.rebuildIndexLocked()
if err := s.saveLocked(); err != nil {
return File{}, err
}
return cloneFile(s.state), nil
}
func (s *Store) loadLocked() error {
if strings.TrimSpace(s.path) == "" {
return errors.New("chat history path is required")
}
if err := os.MkdirAll(filepath.Dir(s.path), 0o755); err != nil && filepath.Dir(s.path) != "." {
return fmt.Errorf("create chat history dir: %w", err)
}
if err := os.MkdirAll(s.detailDir, 0o755); err != nil {
return fmt.Errorf("create chat history detail dir: %w", err)
}
raw, err := os.ReadFile(s.path)
if err != nil {
if errors.Is(err, os.ErrNotExist) {
if saveErr := s.saveLocked(); saveErr != nil {
config.Logger.Warn("[chat_history] bootstrap write failed", "path", s.path, "error", saveErr)
}
return nil
}
return fmt.Errorf("read chat history index: %w", err)
}
legacy, legacyOK, legacyErr := parseLegacy(raw)
if legacyErr != nil {
return legacyErr
}
if legacyOK {
s.loadLegacyLocked(legacy)
if err := s.saveLocked(); err != nil {
config.Logger.Warn("[chat_history] legacy migration writeback failed", "path", s.path, "error", err)
}
return nil
}
var state File
if err := json.Unmarshal(raw, &state); err != nil {
return fmt.Errorf("decode chat history index: %w", err)
}
if state.Version == 0 {
state.Version = FileVersion
}
if !isAllowedLimit(state.Limit) {
state.Limit = DefaultLimit
}
s.state = cloneFile(state)
s.details = map[string]Entry{}
for _, item := range state.Items {
detail, err := readDetailFile(filepath.Join(s.detailDir, item.ID+".json"))
if err != nil {
return err
}
s.details[item.ID] = detail
}
s.rebuildIndexLocked()
if saveErr := s.saveLocked(); saveErr != nil {
config.Logger.Warn("[chat_history] index rewrite failed", "path", s.path, "error", saveErr)
}
return nil
}
func (s *Store) loadLegacyLocked(legacy legacyFile) {
s.state.Version = FileVersion
s.state.Limit = legacy.Limit
if !isAllowedLimit(s.state.Limit) {
s.state.Limit = DefaultLimit
}
s.details = map[string]Entry{}
s.dirty = map[string]struct{}{}
s.deleted = map[string]struct{}{}
maxRevision := int64(0)
for _, item := range legacy.Items {
if strings.TrimSpace(item.ID) == "" {
continue
}
item.Messages = cloneMessages(item.Messages)
if item.Revision == 0 {
if item.UpdatedAt > 0 {
item.Revision = item.UpdatedAt
} else {
item.Revision = time.Now().UnixNano()
}
}
if item.Revision > maxRevision {
maxRevision = item.Revision
}
s.details[item.ID] = item
s.markDetailDirtyLocked(item.ID)
}
s.state.Revision = maxRevision
s.rebuildIndexLocked()
}
func (s *Store) saveLocked() error {
s.state.Version = FileVersion
if !isAllowedLimit(s.state.Limit) {
s.state.Limit = DefaultLimit
}
s.rebuildIndexLocked()
if err := os.MkdirAll(s.detailDir, 0o755); err != nil {
return fmt.Errorf("create chat history detail dir: %w", err)
}
for _, id := range sortedDetailIDs(s.deleted) {
path := filepath.Join(s.detailDir, id+".json")
if err := os.Remove(path); err != nil && !errors.Is(err, os.ErrNotExist) {
return fmt.Errorf("remove stale chat history detail: %w", err)
}
}
for _, id := range sortedDetailIDs(s.dirty) {
item, ok := s.details[id]
if !ok {
continue
}
path := filepath.Join(s.detailDir, id+".json")
payload, err := json.MarshalIndent(detailEnvelope{
Version: FileVersion,
Item: item,
}, "", " ")
if err != nil {
return fmt.Errorf("encode chat history detail: %w", err)
}
if err := writeFileAtomic(path, append(payload, '\n')); err != nil {
return err
}
}
payload, err := json.MarshalIndent(s.state, "", " ")
if err != nil {
return fmt.Errorf("encode chat history index: %w", err)
}
if err := writeFileAtomic(s.path, append(payload, '\n')); err != nil {
return err
}
s.clearPendingDetailChangesLocked()
return nil
}
func (s *Store) rebuildIndexLocked() {
summaries := make([]SummaryEntry, 0, len(s.details))
for _, item := range s.details {
summaries = append(summaries, summaryFromEntry(item))
}
sort.Slice(summaries, func(i, j int) bool {
if summaries[i].UpdatedAt == summaries[j].UpdatedAt {
return summaries[i].CreatedAt > summaries[j].CreatedAt
}
return summaries[i].UpdatedAt > summaries[j].UpdatedAt
})
if s.state.Limit < DisabledLimit || !isAllowedLimit(s.state.Limit) {
s.state.Limit = DefaultLimit
}
if s.state.Limit == DisabledLimit {
s.state.Items = summaries
return
}
if len(summaries) > s.state.Limit {
keep := make(map[string]struct{}, s.state.Limit)
for _, item := range summaries[:s.state.Limit] {
keep[item.ID] = struct{}{}
}
for id := range s.details {
if _, ok := keep[id]; !ok {
s.markDetailDeletedLocked(id)
delete(s.details, id)
}
}
summaries = summaries[:s.state.Limit]
}
s.state.Items = summaries
}
func (s *Store) nextRevisionLocked() int64 {
next := time.Now().UnixNano()
if next <= s.state.Revision {
next = s.state.Revision + 1
}
s.state.Revision = next
return next
}
func summaryFromEntry(item Entry) SummaryEntry {
return SummaryEntry{
ID: item.ID,
Revision: item.Revision,
CreatedAt: item.CreatedAt,
UpdatedAt: item.UpdatedAt,
CompletedAt: item.CompletedAt,
Status: item.Status,
CallerID: item.CallerID,
AccountID: item.AccountID,
Model: item.Model,
Stream: item.Stream,
UserInput: item.UserInput,
Preview: buildPreview(item),
StatusCode: item.StatusCode,
ElapsedMs: item.ElapsedMs,
FinishReason: item.FinishReason,
DetailRevision: item.Revision,
}
}
func buildPreview(item Entry) string {
candidate := strings.TrimSpace(item.Content)
if candidate == "" {
candidate = strings.TrimSpace(item.ReasoningContent)
}
if candidate == "" {
candidate = strings.TrimSpace(item.Error)
}
if candidate == "" {
candidate = strings.TrimSpace(item.UserInput)
}
if runes := []rune(candidate); len(runes) > defaultPreviewAt {
// truncate by runes, not bytes, so multi-byte UTF-8 previews are not cut mid-character
return string(runes[:defaultPreviewAt]) + "..."
}
return candidate
}
func readDetailFile(path string) (Entry, error) {
raw, err := os.ReadFile(path)
if err != nil {
return Entry{}, fmt.Errorf("read chat history detail: %w", err)
}
var env detailEnvelope
if err := json.Unmarshal(raw, &env); err != nil {
return Entry{}, fmt.Errorf("decode chat history detail: %w", err)
}
return cloneEntry(env.Item), nil
}
func parseLegacy(raw []byte) (legacyFile, bool, error) {
var legacy legacyFile
if err := json.Unmarshal(raw, &legacy); err != nil {
return legacyFile{}, false, nil
}
if len(legacy.Items) == 0 {
return legacy, false, nil
}
var probe legacyProbe
if err := json.Unmarshal(raw, &probe); err == nil {
for _, item := range probe.Items {
if _, ok := item["detail_revision"]; ok {
return legacy, false, nil
}
}
}
return legacy, true, nil
}
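`parseLegacy` tells a v1 monolith apart from a v2 index by probing each item for a `detail_revision` key, which only v2 summaries carry. The probe in isolation:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// isLegacy mirrors parseLegacy's probe: a v2 index stores summaries with a
// "detail_revision" field, so any item carrying that key means "not legacy".
func isLegacy(raw []byte) bool {
	var probe struct {
		Items []map[string]json.RawMessage `json:"items"`
	}
	if err := json.Unmarshal(raw, &probe); err != nil || len(probe.Items) == 0 {
		return false
	}
	for _, item := range probe.Items {
		if _, ok := item["detail_revision"]; ok {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isLegacy([]byte(`{"items":[{"id":"a","content":"x"}]}`)))       // true
	fmt.Println(isLegacy([]byte(`{"items":[{"id":"a","detail_revision":3}]}`))) // false
	fmt.Println(isLegacy([]byte(`{"items":[]}`)))                               // false
}
```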
func writeFileAtomic(path string, body []byte) error {
dir := filepath.Dir(path)
if dir == "" {
dir = "."
}
if dir != "." {
if err := os.MkdirAll(dir, 0o755); err != nil {
return fmt.Errorf("create chat history dir: %w", err)
}
}
tmpFile, err := os.CreateTemp(dir, ".chat-history-*.tmp")
if err != nil {
return fmt.Errorf("create temp chat history: %w", err)
}
tmpPath := tmpFile.Name()
cleanup := func() error {
if err := os.Remove(tmpPath); err != nil && !errors.Is(err, os.ErrNotExist) {
return fmt.Errorf("remove temp chat history: %w", err)
}
return nil
}
withCleanup := func(primary error, closeErr error) error {
errs := []error{primary}
if closeErr != nil {
errs = append(errs, fmt.Errorf("close temp chat history: %w", closeErr))
}
if cleanupErr := cleanup(); cleanupErr != nil {
errs = append(errs, cleanupErr)
}
return errors.Join(errs...)
}
if _, err := tmpFile.Write(body); err != nil {
return withCleanup(fmt.Errorf("write temp chat history: %w", err), tmpFile.Close())
}
if err := tmpFile.Sync(); err != nil {
return withCleanup(fmt.Errorf("sync temp chat history: %w", err), tmpFile.Close())
}
if err := tmpFile.Close(); err != nil {
if cleanupErr := cleanup(); cleanupErr != nil {
return errors.Join(fmt.Errorf("close temp chat history: %w", err), cleanupErr)
}
return fmt.Errorf("close temp chat history: %w", err)
}
if err := os.Rename(tmpPath, path); err != nil {
if cleanupErr := cleanup(); cleanupErr != nil {
return errors.Join(fmt.Errorf("promote temp chat history: %w", err), cleanupErr)
}
return fmt.Errorf("promote temp chat history: %w", err)
}
return nil
}
func ListETag(revision int64) string {
return fmt.Sprintf(`W/"chat-history-list-%d"`, revision)
}
func DetailETag(id string, revision int64) string {
return fmt.Sprintf(`W/"chat-history-detail-%s-%d"`, strings.TrimSpace(id), revision)
}
func isAllowedLimit(limit int) bool {
_, ok := allowedLimits[limit]
return ok
}
func (s *Store) markDetailDirtyLocked(id string) {
id = strings.TrimSpace(id)
if id == "" {
return
}
if s.dirty == nil {
s.dirty = map[string]struct{}{}
}
if s.deleted == nil {
s.deleted = map[string]struct{}{}
}
s.dirty[id] = struct{}{}
delete(s.deleted, id)
}
func (s *Store) markDetailDeletedLocked(id string) {
id = strings.TrimSpace(id)
if id == "" {
return
}
if s.dirty == nil {
s.dirty = map[string]struct{}{}
}
if s.deleted == nil {
s.deleted = map[string]struct{}{}
}
s.deleted[id] = struct{}{}
delete(s.dirty, id)
}
func (s *Store) clearPendingDetailChangesLocked() {
s.dirty = map[string]struct{}{}
s.deleted = map[string]struct{}{}
}
func sortedDetailIDs(ids map[string]struct{}) []string {
if len(ids) == 0 {
return nil
}
out := make([]string, 0, len(ids))
for id := range ids {
out = append(out, id)
}
sort.Strings(out)
return out
}
func cloneFile(in File) File {
out := File{
Version: in.Version,
Limit: in.Limit,
Revision: in.Revision,
Items: make([]SummaryEntry, len(in.Items)),
}
copy(out.Items, in.Items)
return out
}
func cloneEntry(item Entry) Entry {
item.Usage = cloneMap(item.Usage)
item.Messages = cloneMessages(item.Messages)
return item
}
func cloneMap(in map[string]any) map[string]any {
if in == nil {
return nil
}
out := make(map[string]any, len(in))
for k, v := range in {
out[k] = v
}
return out
}
func cloneMessages(messages []Message) []Message {
if len(messages) == 0 {
return []Message{}
}
out := make([]Message, len(messages))
copy(out, messages)
return out
}


@@ -0,0 +1,483 @@
package chathistory
import (
"bytes"
"encoding/json"
"os"
"path/filepath"
"strings"
"sync"
"testing"
)
func blockDetailDir(t *testing.T, detailDir string) func() {
t.Helper()
blockedDir := detailDir + ".blocked"
if err := os.RemoveAll(blockedDir); err != nil {
t.Fatalf("remove blocked detail dir failed: %v", err)
}
if err := os.Rename(detailDir, blockedDir); err != nil {
t.Fatalf("move detail dir aside failed: %v", err)
}
if err := os.RemoveAll(detailDir); err != nil {
t.Fatalf("remove blocked detail path failed: %v", err)
}
if err := os.WriteFile(detailDir, []byte("blocked"), 0o644); err != nil {
t.Fatalf("write blocked detail path failed: %v", err)
}
var once sync.Once
return func() {
t.Helper()
once.Do(func() {
if err := os.RemoveAll(detailDir); err != nil {
t.Fatalf("remove blocking detail path failed: %v", err)
}
if err := os.Rename(blockedDir, detailDir); err != nil {
t.Fatalf("restore detail dir failed: %v", err)
}
})
}
}
func TestStoreCreatesAndPersistsEntries(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
store := New(path)
started, err := store.Start(StartParams{
CallerID: "caller:abc",
AccountID: "user@example.com",
Model: "deepseek-v4-flash",
Stream: true,
UserInput: "hello",
})
if err != nil {
t.Fatalf("start entry failed: %v", err)
}
updated, err := store.Update(started.ID, UpdateParams{
Status: "success",
ReasoningContent: "thinking",
Content: "answer",
StatusCode: 200,
ElapsedMs: 321,
FinishReason: "stop",
Usage: map[string]any{"total_tokens": 9},
Completed: true,
})
if err != nil {
t.Fatalf("update entry failed: %v", err)
}
if updated.Status != "success" || updated.Content != "answer" {
t.Fatalf("unexpected updated entry: %#v", updated)
}
snapshot, err := store.Snapshot()
if err != nil {
t.Fatalf("snapshot failed: %v", err)
}
if snapshot.Limit != DefaultLimit {
t.Fatalf("unexpected default limit: %d", snapshot.Limit)
}
if len(snapshot.Items) != 1 {
t.Fatalf("expected one item, got %d", len(snapshot.Items))
}
if snapshot.Items[0].CompletedAt == 0 {
t.Fatalf("expected completed_at to be populated")
}
if snapshot.Items[0].Preview != "answer" {
t.Fatalf("expected summary preview=answer, got %#v", snapshot.Items[0])
}
reloaded := New(path)
reloadedSnapshot, err := reloaded.Snapshot()
if err != nil {
t.Fatalf("reload snapshot failed: %v", err)
}
if len(reloadedSnapshot.Items) != 1 {
t.Fatalf("unexpected reloaded summaries: %#v", reloadedSnapshot.Items)
}
full, err := reloaded.Get(started.ID)
if err != nil {
t.Fatalf("get detail failed: %v", err)
}
if full.Content != "answer" {
t.Fatalf("expected detail content=answer, got %#v", full)
}
}
func TestStoreTrimsToConfiguredLimit(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
store := New(path)
if _, err := store.SetLimit(10); err != nil {
t.Fatalf("set limit failed: %v", err)
}
for i := 0; i < 12; i++ {
entry, err := store.Start(StartParams{Model: "deepseek-v4-flash", UserInput: "msg"})
if err != nil {
t.Fatalf("start %d failed: %v", i, err)
}
if _, err := store.Update(entry.ID, UpdateParams{Status: "success", Content: "ok", Completed: true}); err != nil {
t.Fatalf("update %d failed: %v", i, err)
}
}
snapshot, err := store.Snapshot()
if err != nil {
t.Fatalf("snapshot failed: %v", err)
}
if len(snapshot.Items) != 10 {
t.Fatalf("expected 10 items, got %d", len(snapshot.Items))
}
}
func TestStoreDeleteClearAndLimitValidation(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
store := New(path)
entry, err := store.Start(StartParams{UserInput: "hello"})
if err != nil {
t.Fatalf("start failed: %v", err)
}
if err := store.Delete(entry.ID); err != nil {
t.Fatalf("delete failed: %v", err)
}
snapshot, err := store.Snapshot()
if err != nil {
t.Fatalf("snapshot failed: %v", err)
}
if len(snapshot.Items) != 0 {
t.Fatalf("expected empty items after delete, got %d", len(snapshot.Items))
}
if _, err := store.SetLimit(999); err == nil {
t.Fatalf("expected invalid limit error")
}
if err := store.Clear(); err != nil {
t.Fatalf("clear failed: %v", err)
}
}
func TestStoreDisablePreservesHistoryAndBlocksNewEntries(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
store := New(path)
entry, err := store.Start(StartParams{UserInput: "hello"})
if err != nil {
t.Fatalf("start failed: %v", err)
}
if _, err := store.Update(entry.ID, UpdateParams{Status: "success", Content: "world", Completed: true}); err != nil {
t.Fatalf("update failed: %v", err)
}
snapshot, err := store.SetLimit(DisabledLimit)
if err != nil {
t.Fatalf("disable failed: %v", err)
}
if snapshot.Limit != DisabledLimit {
t.Fatalf("expected disabled limit, got %d", snapshot.Limit)
}
if len(snapshot.Items) != 1 {
t.Fatalf("expected disabled mode to preserve summaries, got %d", len(snapshot.Items))
}
if store.Enabled() {
t.Fatalf("expected store to report disabled")
}
if _, err := store.Start(StartParams{UserInput: "later"}); err != ErrDisabled {
t.Fatalf("expected ErrDisabled, got %v", err)
}
}
func TestStoreConcurrentUpdatesKeepSplitFilesValid(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
store := New(path)
var wg sync.WaitGroup
for i := 0; i < 8; i++ {
wg.Add(1)
go func(idx int) {
defer wg.Done()
entry, err := store.Start(StartParams{
CallerID: "caller:test",
Model: "deepseek-v4-flash",
UserInput: "hello",
})
if err != nil {
t.Errorf("start failed: %v", err)
return
}
_, err = store.Update(entry.ID, UpdateParams{
Status: "success",
Content: "answer",
ElapsedMs: int64(idx),
Completed: true,
})
if err != nil {
t.Errorf("update failed: %v", err)
}
}(i)
}
wg.Wait()
snapshot, err := store.Snapshot()
if err != nil {
t.Fatalf("snapshot failed: %v", err)
}
if len(snapshot.Items) != 8 {
t.Fatalf("expected 8 items, got %d", len(snapshot.Items))
}
raw, err := os.ReadFile(path)
if err != nil {
t.Fatalf("read index failed: %v", err)
}
var persisted File
if err := json.Unmarshal(raw, &persisted); err != nil {
t.Fatalf("persisted index is invalid json: %v", err)
}
if len(persisted.Items) != 8 {
t.Fatalf("expected persisted items=8, got %d", len(persisted.Items))
}
detailFiles, err := os.ReadDir(path + ".d")
if err != nil {
t.Fatalf("read detail dir failed: %v", err)
}
if len(detailFiles) != 8 {
t.Fatalf("expected 8 detail files, got %d", len(detailFiles))
}
}
func TestStoreAutoMigratesLegacyMonolith(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
legacy := legacyFile{
Version: 1,
Limit: 20,
Items: []Entry{{
ID: "chat_legacy",
CreatedAt: 1,
UpdatedAt: 2,
Status: "success",
UserInput: "hello",
Content: "world",
ReasoningContent: "thinking",
}},
}
body, _ := json.MarshalIndent(legacy, "", " ")
if err := os.WriteFile(path, body, 0o644); err != nil {
t.Fatalf("write legacy file failed: %v", err)
}
store := New(path)
if err := store.Err(); err != nil {
t.Fatalf("expected legacy migration success, got %v", err)
}
snapshot, err := store.Snapshot()
if err != nil {
t.Fatalf("snapshot failed: %v", err)
}
if len(snapshot.Items) != 1 {
t.Fatalf("expected one migrated summary, got %#v", snapshot.Items)
}
full, err := store.Get("chat_legacy")
if err != nil {
t.Fatalf("get migrated detail failed: %v", err)
}
if full.Content != "world" {
t.Fatalf("expected migrated detail content preserved, got %#v", full)
}
}
func TestStoreAutoMigratesMetadataOnlyLegacyMonolith(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
legacy := legacyFile{
Version: 1,
Limit: 20,
Items: []Entry{{
ID: "chat_metadata_only",
Revision: 0,
CreatedAt: 1,
UpdatedAt: 2,
Status: "error",
CallerID: "caller:test",
AccountID: "acct:test",
Model: "deepseek-v4-flash",
Stream: true,
UserInput: "hello",
Error: "boom",
StatusCode: 500,
ElapsedMs: 12,
FinishReason: "error",
}},
}
body, _ := json.MarshalIndent(legacy, "", " ")
if err := os.WriteFile(path, body, 0o644); err != nil {
t.Fatalf("write legacy file failed: %v", err)
}
store := New(path)
if err := store.Err(); err != nil {
t.Fatalf("expected legacy metadata-only migration success, got %v", err)
}
snapshot, err := store.Snapshot()
if err != nil {
t.Fatalf("snapshot failed: %v", err)
}
if len(snapshot.Items) != 1 {
t.Fatalf("expected one migrated summary, got %#v", snapshot.Items)
}
full, err := store.Get("chat_metadata_only")
if err != nil {
t.Fatalf("get migrated detail failed: %v", err)
}
if full.Error != "boom" || full.UserInput != "hello" {
t.Fatalf("expected metadata-only legacy fields preserved, got %#v", full)
}
if _, err := os.Stat(filepath.Join(store.DetailDir(), "chat_metadata_only.json")); err != nil {
t.Fatalf("expected migrated detail file to exist: %v", err)
}
}
func TestStoreLegacyMigrationBestEffortWhenRewriteFails(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
longID := "chat_" + strings.Repeat("x", 320)
legacy := legacyFile{
Version: 1,
Limit: 20,
Items: []Entry{{
ID: longID,
CreatedAt: 1,
UpdatedAt: 2,
Status: "success",
UserInput: "hello",
Content: "world",
}},
}
body, err := json.MarshalIndent(legacy, "", " ")
if err != nil {
t.Fatalf("marshal legacy file failed: %v", err)
}
if err := os.WriteFile(path, body, 0o644); err != nil {
t.Fatalf("write legacy file failed: %v", err)
}
store := New(path)
if err := store.Err(); err != nil {
t.Fatalf("expected store to stay usable after migration writeback failure, got %v", err)
}
if !store.Enabled() {
t.Fatal("expected store to remain enabled after best-effort migration")
}
snapshot, err := store.Snapshot()
if err != nil {
t.Fatalf("snapshot failed: %v", err)
}
if len(snapshot.Items) != 1 || snapshot.Items[0].ID != longID {
t.Fatalf("unexpected snapshot after best-effort migration: %#v", snapshot.Items)
}
full, err := store.Get(longID)
if err != nil {
t.Fatalf("get migrated detail failed: %v", err)
}
if full.Content != "world" {
t.Fatalf("expected migrated content to stay in memory, got %#v", full)
}
if _, statErr := os.Stat(filepath.Join(store.DetailDir(), longID+".json")); statErr == nil {
t.Fatal("expected detail write to fail for overlong legacy id")
}
}
func TestStoreTransientPersistenceFailureDoesNotLatch(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
store := New(path)
first, err := store.Start(StartParams{UserInput: "first"})
if err != nil {
t.Fatalf("start first failed: %v", err)
}
restore := blockDetailDir(t, store.DetailDir())
t.Cleanup(restore)
blocked, err := store.Start(StartParams{UserInput: "blocked"})
if err == nil {
t.Fatalf("expected start failure while detail dir is blocked")
}
if blocked.ID == "" {
t.Fatalf("expected in-memory entry from failed start")
}
if err := store.Err(); err != nil {
t.Fatalf("transient start failure should not latch store error: %v", err)
}
if _, err := store.Update(first.ID, UpdateParams{Status: "success", Content: "one", Completed: true}); err == nil {
t.Fatalf("expected update failure while detail dir is blocked")
}
if err := store.Err(); err != nil {
t.Fatalf("transient update failure should not latch store error: %v", err)
}
restore()
if _, err := store.Update(blocked.ID, UpdateParams{Status: "success", Content: "two", Completed: true}); err != nil {
t.Fatalf("update after restore failed: %v", err)
}
if _, err := store.Start(StartParams{UserInput: "later"}); err != nil {
t.Fatalf("start after restore failed: %v", err)
}
full, err := store.Get(blocked.ID)
if err != nil {
t.Fatalf("get restored entry failed: %v", err)
}
if full.Content != "two" || full.Status != "success" {
t.Fatalf("expected restored entry persisted, got %#v", full)
}
}
func TestStoreWritesOnlyChangedDetailFiles(t *testing.T) {
path := filepath.Join(t.TempDir(), "chat_history.json")
store := New(path)
first, err := store.Start(StartParams{UserInput: "one"})
if err != nil {
t.Fatalf("start first failed: %v", err)
}
if _, err := store.Update(first.ID, UpdateParams{Status: "success", Content: "first", Completed: true}); err != nil {
t.Fatalf("update first failed: %v", err)
}
second, err := store.Start(StartParams{UserInput: "two"})
if err != nil {
t.Fatalf("start second failed: %v", err)
}
if _, err := store.Update(second.ID, UpdateParams{Status: "success", Content: "second", Completed: true}); err != nil {
t.Fatalf("update second failed: %v", err)
}
firstPath := filepath.Join(store.DetailDir(), first.ID+".json")
secondPath := filepath.Join(store.DetailDir(), second.ID+".json")
beforeFirst, err := os.ReadFile(firstPath)
if err != nil {
t.Fatalf("read first detail before update failed: %v", err)
}
beforeSecond, err := os.ReadFile(secondPath)
if err != nil {
t.Fatalf("read second detail before update failed: %v", err)
}
if _, err := store.Update(first.ID, UpdateParams{Status: "success", Content: "first-updated", Completed: true}); err != nil {
t.Fatalf("update first again failed: %v", err)
}
afterFirst, err := os.ReadFile(firstPath)
if err != nil {
t.Fatalf("read first detail after update failed: %v", err)
}
afterSecond, err := os.ReadFile(secondPath)
if err != nil {
t.Fatalf("read second detail after update failed: %v", err)
}
if bytes.Equal(beforeFirst, afterFirst) {
t.Fatalf("expected first detail file to change after update")
}
if !bytes.Equal(beforeSecond, afterSecond) {
t.Fatalf("expected untouched detail file to remain byte-identical")
}
}


@@ -1,32 +1,21 @@
 package claudeconv
-import "strings"
+import (
+	"strings"
-type ClaudeMappingProvider interface {
-	ClaudeMapping() map[string]string
-}
+	"ds2api/internal/config"
+)
-func ConvertClaudeToDeepSeek(claudeReq map[string]any, mappingProvider ClaudeMappingProvider, defaultClaudeModel string) map[string]any {
+func ConvertClaudeToDeepSeek(claudeReq map[string]any, aliasProvider config.ModelAliasReader, defaultClaudeModel string) map[string]any {
 	messages, _ := claudeReq["messages"].([]any)
 	model, _ := claudeReq["model"].(string)
 	if model == "" {
 		model = defaultClaudeModel
 	}
-	mapping := map[string]string{}
-	if mappingProvider != nil {
-		mapping = mappingProvider.ClaudeMapping()
-	}
-	dsModel := mapping["fast"]
-	if dsModel == "" {
-		dsModel = "deepseek-chat"
-	}
-	modelLower := strings.ToLower(model)
-	if strings.Contains(modelLower, "opus") || strings.Contains(modelLower, "reasoner") || strings.Contains(modelLower, "slow") {
-		if slow := mapping["slow"]; slow != "" {
-			dsModel = slow
-		}
-	}
+	dsModel, ok := config.ResolveModel(aliasProvider, model)
+	if !ok || strings.TrimSpace(dsModel) == "" {
+		dsModel = "deepseek-v4-flash"
+	}
 	convertedMessages := make([]any, 0, len(messages)+1)

View File

@@ -17,18 +17,15 @@ func (c Config) MarshalJSON() ([]byte, error) {
if len(c.Keys) > 0 {
m["keys"] = c.Keys
}
if len(c.APIKeys) > 0 {
m["api_keys"] = c.APIKeys
}
if len(c.Accounts) > 0 {
m["accounts"] = c.Accounts
}
if len(c.Proxies) > 0 {
m["proxies"] = c.Proxies
}
if len(c.ClaudeMapping) > 0 {
m["claude_mapping"] = c.ClaudeMapping
}
if len(c.ClaudeModelMap) > 0 {
m["claude_model_mapping"] = c.ClaudeModelMap
}
if len(c.ModelAliases) > 0 {
m["model_aliases"] = c.ModelAliases
}
@@ -48,6 +45,9 @@ func (c Config) MarshalJSON() ([]byte, error) {
m["embeddings"] = c.Embeddings
}
m["auto_delete"] = c.AutoDelete
if c.HistorySplit.Enabled != nil || c.HistorySplit.TriggerAfterTurns != nil {
m["history_split"] = c.HistorySplit
}
if c.VercelSyncHash != "" {
m["_vercel_sync_hash"] = c.VercelSyncHash
}
@@ -69,6 +69,10 @@ func (c *Config) UnmarshalJSON(b []byte) error {
if err := json.Unmarshal(v, &c.Keys); err != nil {
return fmt.Errorf("invalid field %q: %w", k, err)
}
case "api_keys":
if err := json.Unmarshal(v, &c.APIKeys); err != nil {
return fmt.Errorf("invalid field %q: %w", k, err)
}
case "accounts":
if err := json.Unmarshal(v, &c.Accounts); err != nil {
return fmt.Errorf("invalid field %q: %w", k, err)
@@ -78,13 +82,8 @@ func (c *Config) UnmarshalJSON(b []byte) error {
return fmt.Errorf("invalid field %q: %w", k, err)
}
case "claude_mapping":
if err := json.Unmarshal(v, &c.ClaudeMapping); err != nil {
return fmt.Errorf("invalid field %q: %w", k, err)
}
case "claude_model_mapping":
if err := json.Unmarshal(v, &c.ClaudeModelMap); err != nil {
return fmt.Errorf("invalid field %q: %w", k, err)
}
// Removed legacy mapping fields are ignored instead of persisted.
case "model_aliases":
if err := json.Unmarshal(v, &c.ModelAliases); err != nil {
return fmt.Errorf("invalid field %q: %w", k, err)
@@ -115,6 +114,10 @@ func (c *Config) UnmarshalJSON(b []byte) error {
if err := json.Unmarshal(v, &c.AutoDelete); err != nil {
return fmt.Errorf("invalid field %q: %w", k, err)
}
case "history_split":
if err := json.Unmarshal(v, &c.HistorySplit); err != nil {
return fmt.Errorf("invalid field %q: %w", k, err)
}
case "_vercel_sync_hash":
if err := json.Unmarshal(v, &c.VercelSyncHash); err != nil {
return fmt.Errorf("invalid field %q: %w", k, err)
@@ -130,26 +133,30 @@ func (c *Config) UnmarshalJSON(b []byte) error {
}
}
}
c.NormalizeCredentials()
return nil
}
func (c Config) Clone() Config {
clone := Config{
Keys: slices.Clone(c.Keys),
Accounts: slices.Clone(c.Accounts),
Proxies: slices.Clone(c.Proxies),
ClaudeMapping: cloneStringMap(c.ClaudeMapping),
ClaudeModelMap: cloneStringMap(c.ClaudeModelMap),
ModelAliases: cloneStringMap(c.ModelAliases),
Admin: c.Admin,
Runtime: c.Runtime,
Keys: slices.Clone(c.Keys),
APIKeys: slices.Clone(c.APIKeys),
Accounts: slices.Clone(c.Accounts),
Proxies: slices.Clone(c.Proxies),
ModelAliases: cloneStringMap(c.ModelAliases),
Admin: c.Admin,
Runtime: c.Runtime,
Compat: CompatConfig{
WideInputStrictOutput: cloneBoolPtr(c.Compat.WideInputStrictOutput),
StripReferenceMarkers: cloneBoolPtr(c.Compat.StripReferenceMarkers),
},
Responses: c.Responses,
Embeddings: c.Embeddings,
AutoDelete: c.AutoDelete,
Responses: c.Responses,
Embeddings: c.Embeddings,
AutoDelete: c.AutoDelete,
HistorySplit: HistorySplitConfig{
Enabled: cloneBoolPtr(c.HistorySplit.Enabled),
TriggerAfterTurns: cloneIntPtr(c.HistorySplit.TriggerAfterTurns),
},
VercelSyncHash: c.VercelSyncHash,
VercelSyncTime: c.VercelSyncTime,
AdditionalFields: map[string]any{},
@@ -179,6 +186,14 @@ func cloneBoolPtr(in *bool) *bool {
return &v
}
func cloneIntPtr(in *int) *int {
if in == nil {
return nil
}
v := *in
return &v
}
func parseConfigString(raw string) (Config, error) {
var cfg Config
candidates := []string{raw}

View File

@@ -8,24 +8,26 @@ import (
)
type Config struct {
Keys []string `json:"keys,omitempty"`
Accounts []Account `json:"accounts,omitempty"`
Proxies []Proxy `json:"proxies,omitempty"`
ClaudeMapping map[string]string `json:"claude_mapping,omitempty"`
ClaudeModelMap map[string]string `json:"claude_model_mapping,omitempty"`
ModelAliases map[string]string `json:"model_aliases,omitempty"`
Admin AdminConfig `json:"admin,omitempty"`
Runtime RuntimeConfig `json:"runtime,omitempty"`
Compat CompatConfig `json:"compat,omitempty"`
Responses ResponsesConfig `json:"responses,omitempty"`
Embeddings EmbeddingsConfig `json:"embeddings,omitempty"`
AutoDelete AutoDeleteConfig `json:"auto_delete"`
VercelSyncHash string `json:"_vercel_sync_hash,omitempty"`
VercelSyncTime int64 `json:"_vercel_sync_time,omitempty"`
AdditionalFields map[string]any `json:"-"`
Keys []string `json:"keys,omitempty"`
APIKeys []APIKey `json:"api_keys,omitempty"`
Accounts []Account `json:"accounts,omitempty"`
Proxies []Proxy `json:"proxies,omitempty"`
ModelAliases map[string]string `json:"model_aliases,omitempty"`
Admin AdminConfig `json:"admin,omitempty"`
Runtime RuntimeConfig `json:"runtime,omitempty"`
Compat CompatConfig `json:"compat,omitempty"`
Responses ResponsesConfig `json:"responses,omitempty"`
Embeddings EmbeddingsConfig `json:"embeddings,omitempty"`
AutoDelete AutoDeleteConfig `json:"auto_delete"`
HistorySplit HistorySplitConfig `json:"history_split"`
VercelSyncHash string `json:"_vercel_sync_hash,omitempty"`
VercelSyncTime int64 `json:"_vercel_sync_time,omitempty"`
AdditionalFields map[string]any `json:"-"`
}
type Account struct {
Name string `json:"name,omitempty"`
Remark string `json:"remark,omitempty"`
Email string `json:"email,omitempty"`
Mobile string `json:"mobile,omitempty"`
Password string `json:"password,omitempty"`
@@ -33,6 +35,12 @@ type Account struct {
ProxyID string `json:"proxy_id,omitempty"`
}
type APIKey struct {
Key string `json:"key"`
Name string `json:"name,omitempty"`
Remark string `json:"remark,omitempty"`
}
type Proxy struct {
ID string `json:"id,omitempty"`
Name string `json:"name,omitempty"`
@@ -73,6 +81,28 @@ func (c *Config) ClearAccountTokens() {
}
}
func (c *Config) NormalizeCredentials() {
if c == nil {
return
}
normalizedAPIKeys := normalizeAPIKeys(c.APIKeys)
if len(normalizedAPIKeys) > 0 {
c.APIKeys = normalizedAPIKeys
c.Keys = apiKeysToStrings(c.APIKeys)
} else {
c.Keys = normalizeKeys(c.Keys)
c.APIKeys = apiKeysFromStrings(c.Keys, nil)
}
for i := range c.Accounts {
c.Accounts[i].Name = strings.TrimSpace(c.Accounts[i].Name)
c.Accounts[i].Remark = strings.TrimSpace(c.Accounts[i].Remark)
}
c.normalizeModelAliases()
c.forceHistorySplitEnabled()
}
// DropInvalidAccounts removes accounts that cannot be addressed by admin APIs
// (no email and no normalizable mobile). This prevents legacy token-only
// records from becoming orphaned empty entries after token stripping.
@@ -90,6 +120,35 @@ func (c *Config) DropInvalidAccounts() {
c.Accounts = kept
}
func (c *Config) normalizeModelAliases() {
if c == nil {
return
}
aliases := map[string]string{}
for k, v := range c.ModelAliases {
key := strings.TrimSpace(lower(k))
val := strings.TrimSpace(lower(v))
if key == "" || val == "" {
continue
}
aliases[key] = val
}
if len(aliases) == 0 {
c.ModelAliases = nil
} else {
c.ModelAliases = aliases
}
}
func (c *Config) forceHistorySplitEnabled() {
if c == nil {
return
}
enabled := true
c.HistorySplit.Enabled = &enabled
}
type CompatConfig struct {
WideInputStrictOutput *bool `json:"wide_input_strict_output,omitempty"`
StripReferenceMarkers *bool `json:"strip_reference_markers,omitempty"`
@@ -120,3 +179,8 @@ type AutoDeleteConfig struct {
Mode string `json:"mode,omitempty"`
Sessions bool `json:"sessions,omitempty"`
}
type HistorySplitConfig struct {
Enabled *bool `json:"enabled,omitempty"`
TriggerAfterTurns *int `json:"trigger_after_turns,omitempty"`
}

View File

@@ -10,19 +10,19 @@ import (
// ─── GetModelConfig edge cases ───────────────────────────────────────
func TestGetModelConfigDeepSeekChat(t *testing.T) {
thinking, search, ok := GetModelConfig("deepseek-chat")
thinking, search, ok := GetModelConfig("deepseek-v4-flash")
if !ok {
t.Fatal("expected ok for deepseek-chat")
t.Fatal("expected ok for deepseek-v4-flash")
}
if thinking || search {
t.Fatalf("expected no thinking/search for deepseek-chat, got thinking=%v search=%v", thinking, search)
if !thinking || search {
t.Fatalf("expected thinking=true search=false for deepseek-v4-flash, got thinking=%v search=%v", thinking, search)
}
}
func TestGetModelConfigDeepSeekReasoner(t *testing.T) {
thinking, search, ok := GetModelConfig("deepseek-reasoner")
thinking, search, ok := GetModelConfig("deepseek-v4-pro")
if !ok {
t.Fatal("expected ok for deepseek-reasoner")
t.Fatal("expected ok for deepseek-v4-pro")
}
if !thinking || search {
t.Fatalf("expected thinking=true search=false, got thinking=%v search=%v", thinking, search)
@@ -30,19 +30,19 @@ func TestGetModelConfigDeepSeekReasoner(t *testing.T) {
}
func TestGetModelConfigDeepSeekChatSearch(t *testing.T) {
thinking, search, ok := GetModelConfig("deepseek-chat-search")
thinking, search, ok := GetModelConfig("deepseek-v4-flash-search")
if !ok {
t.Fatal("expected ok for deepseek-chat-search")
t.Fatal("expected ok for deepseek-v4-flash-search")
}
if thinking || !search {
t.Fatalf("expected thinking=false search=true, got thinking=%v search=%v", thinking, search)
if !thinking || !search {
t.Fatalf("expected thinking=true search=true, got thinking=%v search=%v", thinking, search)
}
}
func TestGetModelConfigDeepSeekReasonerSearch(t *testing.T) {
thinking, search, ok := GetModelConfig("deepseek-reasoner-search")
thinking, search, ok := GetModelConfig("deepseek-v4-pro-search")
if !ok {
t.Fatal("expected ok for deepseek-reasoner-search")
t.Fatal("expected ok for deepseek-v4-pro-search")
}
if !thinking || !search {
t.Fatalf("expected both true, got thinking=%v search=%v", thinking, search)
@@ -50,19 +50,19 @@ func TestGetModelConfigDeepSeekReasonerSearch(t *testing.T) {
}
func TestGetModelConfigDeepSeekExpertChat(t *testing.T) {
thinking, search, ok := GetModelConfig("deepseek-expert-chat")
thinking, search, ok := GetModelConfig("deepseek-v4-pro")
if !ok {
t.Fatal("expected ok for deepseek-expert-chat")
t.Fatal("expected ok for deepseek-v4-pro")
}
if thinking || search {
t.Fatalf("expected no thinking/search for deepseek-expert-chat, got thinking=%v search=%v", thinking, search)
if !thinking || search {
t.Fatalf("expected thinking=true search=false for deepseek-v4-pro, got thinking=%v search=%v", thinking, search)
}
}
func TestGetModelConfigDeepSeekExpertReasonerSearch(t *testing.T) {
thinking, search, ok := GetModelConfig("deepseek-expert-reasoner-search")
thinking, search, ok := GetModelConfig("deepseek-v4-pro-search")
if !ok {
t.Fatal("expected ok for deepseek-expert-reasoner-search")
t.Fatal("expected ok for deepseek-v4-pro-search")
}
if !thinking || !search {
t.Fatalf("expected both true, got thinking=%v search=%v", thinking, search)
@@ -70,9 +70,9 @@ func TestGetModelConfigDeepSeekExpertReasonerSearch(t *testing.T) {
}
func TestGetModelConfigDeepSeekVisionReasonerSearch(t *testing.T) {
thinking, search, ok := GetModelConfig("deepseek-vision-reasoner-search")
thinking, search, ok := GetModelConfig("deepseek-v4-vision-search")
if !ok {
t.Fatal("expected ok for deepseek-vision-reasoner-search")
t.Fatal("expected ok for deepseek-v4-vision-search")
}
if !thinking || !search {
t.Fatalf("expected both true, got thinking=%v search=%v", thinking, search)
@@ -80,27 +80,27 @@ func TestGetModelConfigDeepSeekVisionReasonerSearch(t *testing.T) {
}
func TestGetModelTypeDefaultExpertAndVision(t *testing.T) {
defaultType, ok := GetModelType("deepseek-chat")
defaultType, ok := GetModelType("deepseek-v4-flash")
if !ok || defaultType != "default" {
t.Fatalf("expected default model_type, got ok=%v model_type=%q", ok, defaultType)
}
expertType, ok := GetModelType("deepseek-expert-chat")
expertType, ok := GetModelType("deepseek-v4-pro")
if !ok || expertType != "expert" {
t.Fatalf("expected expert model_type, got ok=%v model_type=%q", ok, expertType)
}
visionType, ok := GetModelType("deepseek-vision-chat")
visionType, ok := GetModelType("deepseek-v4-vision")
if !ok || visionType != "vision" {
t.Fatalf("expected vision model_type, got ok=%v model_type=%q", ok, visionType)
}
}
func TestGetModelConfigCaseInsensitive(t *testing.T) {
thinking, search, ok := GetModelConfig("DeepSeek-Chat")
thinking, search, ok := GetModelConfig("DeepSeek-V4-Flash")
if !ok {
t.Fatal("expected ok for case-insensitive deepseek-chat")
t.Fatal("expected ok for case-insensitive deepseek-v4-flash")
}
if thinking || search {
t.Fatalf("expected no thinking/search for case-insensitive deepseek-chat")
if !thinking || search {
t.Fatalf("expected thinking=true search=false for case-insensitive deepseek-v4-flash")
}
}
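The flag expectations rewritten in the tests above follow a simple pattern: every `deepseek-v4-*` model now defaults to thinking enabled, and a `-search` suffix adds search. A minimal sketch of that rule, derived only from the test assertions (the helper below is illustrative, not the real `GetModelConfig`):

```go
package main

import (
	"fmt"
	"strings"
)

// modelFlags reproduces the behavior the updated tests assert:
// all v4 base models enable thinking, and "-search" enables search.
func modelFlags(model string) (thinking, search, ok bool) {
	m := strings.ToLower(model) // matching is case-insensitive
	search = strings.HasSuffix(m, "-search")
	base := strings.TrimSuffix(m, "-search")
	switch base {
	case "deepseek-v4-flash", "deepseek-v4-pro", "deepseek-v4-vision":
		return true, search, true
	}
	return false, false, false
}

func main() {
	t, s, ok := modelFlags("DeepSeek-V4-Flash")
	fmt.Println(t, s, ok) // true false true
}
```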
@@ -145,15 +145,16 @@ func TestConfigJSONRoundtrip(t *testing.T) {
trueVal := true
falseVal := false
cfg := Config{
Keys: []string{"key1", "key2"},
Accounts: []Account{{Email: "user@example.com", Password: "pass", Token: "tok"}},
ClaudeMapping: map[string]string{
"fast": "deepseek-chat",
"slow": "deepseek-reasoner",
},
Keys: []string{"key1", "key2"},
Accounts: []Account{{Email: "user@example.com", Password: "pass", Token: "tok"}},
ModelAliases: map[string]string{"Claude-Sonnet-4-6": "DeepSeek-V4-Flash"},
AutoDelete: AutoDeleteConfig{
Mode: "single",
},
HistorySplit: HistorySplitConfig{
Enabled: &trueVal,
TriggerAfterTurns: func() *int { v := 2; return &v }(),
},
Runtime: RuntimeConfig{
TokenRefreshIntervalHours: 12,
},
@@ -184,8 +185,8 @@ func TestConfigJSONRoundtrip(t *testing.T) {
if len(decoded.Accounts) != 1 || decoded.Accounts[0].Email != "user@example.com" {
t.Fatalf("unexpected accounts: %#v", decoded.Accounts)
}
if decoded.ClaudeMapping["fast"] != "deepseek-chat" {
t.Fatalf("unexpected claude mapping: %#v", decoded.ClaudeMapping)
if decoded.ModelAliases["claude-sonnet-4-6"] != "deepseek-v4-flash" {
t.Fatalf("unexpected normalized model aliases: %#v", decoded.ModelAliases)
}
if decoded.Runtime.TokenRefreshIntervalHours != 12 {
t.Fatalf("unexpected runtime refresh interval: %#v", decoded.Runtime.TokenRefreshIntervalHours)
@@ -193,6 +194,12 @@ func TestConfigJSONRoundtrip(t *testing.T) {
if decoded.AutoDelete.Mode != "single" {
t.Fatalf("unexpected auto delete mode: %#v", decoded.AutoDelete.Mode)
}
if decoded.HistorySplit.Enabled == nil || !*decoded.HistorySplit.Enabled {
t.Fatalf("unexpected history split enabled: %#v", decoded.HistorySplit.Enabled)
}
if decoded.HistorySplit.TriggerAfterTurns == nil || *decoded.HistorySplit.TriggerAfterTurns != 2 {
t.Fatalf("unexpected history split trigger_after_turns: %#v", decoded.HistorySplit.TriggerAfterTurns)
}
if decoded.Compat.WideInputStrictOutput == nil || !*decoded.Compat.WideInputStrictOutput {
t.Fatalf("unexpected compat wide_input_strict_output: %#v", decoded.Compat.WideInputStrictOutput)
}
@@ -245,19 +252,40 @@ func TestConfigUnmarshalJSONPreservesUnknownFields(t *testing.T) {
}
}
func TestConfigUnmarshalJSONIgnoresRemovedLegacyModelMappings(t *testing.T) {
raw := `{"keys":["k1"],"accounts":[],"claude_mapping":{"fast":"deepseek-v4-pro"},"claude_model_mapping":{"slow":"deepseek-v4-pro"}}`
var cfg Config
if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
t.Fatalf("unmarshal error: %v", err)
}
if len(cfg.ModelAliases) != 0 {
t.Fatalf("expected removed legacy mappings to be ignored, got %#v", cfg.ModelAliases)
}
if _, ok := cfg.AdditionalFields["claude_mapping"]; ok {
t.Fatalf("expected removed legacy field not to persist in additional fields: %#v", cfg.AdditionalFields)
}
if _, ok := cfg.AdditionalFields["claude_model_mapping"]; ok {
t.Fatalf("expected removed legacy field not to persist in additional fields: %#v", cfg.AdditionalFields)
}
}
// ─── Config.Clone ────────────────────────────────────────────────────
func TestConfigCloneIsDeepCopy(t *testing.T) {
falseVal := false
trueVal := true
turns := 2
cfg := Config{
Keys: []string{"key1"},
Accounts: []Account{{Email: "user@test.com", Token: "token"}},
ClaudeMapping: map[string]string{
"fast": "deepseek-chat",
},
Keys: []string{"key1"},
Accounts: []Account{{Email: "user@test.com", Token: "token"}},
ModelAliases: map[string]string{"claude-sonnet-4-6": "deepseek-v4-flash"},
Compat: CompatConfig{
StripReferenceMarkers: &falseVal,
},
HistorySplit: HistorySplitConfig{
Enabled: &trueVal,
TriggerAfterTurns: &turns,
},
AdditionalFields: map[string]any{"custom": "value"},
}
@@ -266,10 +294,16 @@ func TestConfigCloneIsDeepCopy(t *testing.T) {
// Modify original
cfg.Keys[0] = "modified"
cfg.Accounts[0].Email = "modified@test.com"
cfg.ClaudeMapping["fast"] = "modified-model"
cfg.ModelAliases["claude-sonnet-4-6"] = "modified-model"
if cfg.Compat.StripReferenceMarkers != nil {
*cfg.Compat.StripReferenceMarkers = true
}
if cfg.HistorySplit.Enabled != nil {
*cfg.HistorySplit.Enabled = false
}
if cfg.HistorySplit.TriggerAfterTurns != nil {
*cfg.HistorySplit.TriggerAfterTurns = 5
}
// Cloned should not be affected
if cloned.Keys[0] != "key1" {
@@ -278,12 +312,18 @@ func TestConfigCloneIsDeepCopy(t *testing.T) {
if cloned.Accounts[0].Email != "user@test.com" {
t.Fatalf("clone accounts was affected: %#v", cloned.Accounts)
}
if cloned.ClaudeMapping["fast"] != "deepseek-chat" {
t.Fatalf("clone claude mapping was affected: %#v", cloned.ClaudeMapping)
if cloned.ModelAliases["claude-sonnet-4-6"] != "deepseek-v4-flash" {
t.Fatalf("clone model aliases was affected: %#v", cloned.ModelAliases)
}
if cloned.Compat.StripReferenceMarkers == nil || *cloned.Compat.StripReferenceMarkers {
t.Fatalf("clone compat was affected: %#v", cloned.Compat.StripReferenceMarkers)
}
if cloned.HistorySplit.Enabled == nil || !*cloned.HistorySplit.Enabled {
t.Fatalf("clone history split enabled was affected: %#v", cloned.HistorySplit.Enabled)
}
if cloned.HistorySplit.TriggerAfterTurns == nil || *cloned.HistorySplit.TriggerAfterTurns != 2 {
t.Fatalf("clone history split trigger was affected: %#v", cloned.HistorySplit.TriggerAfterTurns)
}
}
func TestConfigCloneNilMaps(t *testing.T) {
@@ -529,25 +569,122 @@ func TestStoreUpdate(t *testing.T) {
}
}
func TestStoreClaudeMapping(t *testing.T) {
t.Setenv("DS2API_CONFIG_JSON", `{"keys":[],"accounts":[],"claude_mapping":{"fast":"deepseek-chat","slow":"deepseek-reasoner"}}`)
func TestStoreUpdateReconcilesAPIKeyMutations(t *testing.T) {
t.Setenv("DS2API_CONFIG_JSON", `{
"keys":["k1"],
"api_keys":[{"key":"k1","name":"primary","remark":"prod"}],
"accounts":[]
}`)
store := LoadStore()
mapping := store.ClaudeMapping()
if mapping["fast"] != "deepseek-chat" {
t.Fatalf("unexpected fast mapping: %q", mapping["fast"])
if err := store.Update(func(cfg *Config) error {
cfg.APIKeys = append(cfg.APIKeys, APIKey{Key: "k2", Name: "secondary", Remark: "staging"})
return nil
}); err != nil {
t.Fatalf("add api key failed: %v", err)
}
if mapping["slow"] != "deepseek-reasoner" {
t.Fatalf("unexpected slow mapping: %q", mapping["slow"])
snap := store.Snapshot()
if len(snap.Keys) != 2 || snap.Keys[0] != "k1" || snap.Keys[1] != "k2" {
t.Fatalf("unexpected keys after api key add: %#v", snap.Keys)
}
if len(snap.APIKeys) != 2 {
t.Fatalf("unexpected api keys length after add: %#v", snap.APIKeys)
}
if snap.APIKeys[0].Name != "primary" || snap.APIKeys[0].Remark != "prod" {
t.Fatalf("metadata for existing key was lost: %#v", snap.APIKeys[0])
}
if snap.APIKeys[1].Name != "secondary" || snap.APIKeys[1].Remark != "staging" {
t.Fatalf("metadata for new key was lost: %#v", snap.APIKeys[1])
}
if err := store.Update(func(cfg *Config) error {
cfg.APIKeys = append([]APIKey(nil), cfg.APIKeys[1:]...)
return nil
}); err != nil {
t.Fatalf("delete api key failed: %v", err)
}
snap = store.Snapshot()
if len(snap.Keys) != 1 || snap.Keys[0] != "k2" {
t.Fatalf("unexpected keys after api key delete: %#v", snap.Keys)
}
if len(snap.APIKeys) != 1 || snap.APIKeys[0].Key != "k2" {
t.Fatalf("unexpected api keys after delete: %#v", snap.APIKeys)
}
}
func TestStoreClaudeMappingEmpty(t *testing.T) {
func TestStoreUpdateReconcilesLegacyKeyMutations(t *testing.T) {
t.Setenv("DS2API_CONFIG_JSON", `{
"keys":["k1"],
"api_keys":[{"key":"k1","name":"primary","remark":"prod"}],
"accounts":[]
}`)
store := LoadStore()
if err := store.Update(func(cfg *Config) error {
cfg.Keys = append(cfg.Keys, "k2")
return nil
}); err != nil {
t.Fatalf("legacy key update failed: %v", err)
}
snap := store.Snapshot()
if len(snap.Keys) != 2 || snap.Keys[0] != "k1" || snap.Keys[1] != "k2" {
t.Fatalf("unexpected keys after legacy update: %#v", snap.Keys)
}
if len(snap.APIKeys) != 2 {
t.Fatalf("unexpected api keys after legacy update: %#v", snap.APIKeys)
}
if snap.APIKeys[0].Name != "primary" || snap.APIKeys[0].Remark != "prod" {
t.Fatalf("metadata for preserved key was lost: %#v", snap.APIKeys[0])
}
if snap.APIKeys[1].Key != "k2" || snap.APIKeys[1].Name != "" || snap.APIKeys[1].Remark != "" {
t.Fatalf("new legacy key should stay metadata-free: %#v", snap.APIKeys[1])
}
}
func TestNormalizeCredentialsPrefersStructuredAPIKeys(t *testing.T) {
cfg := Config{
Keys: []string{"legacy-key"},
APIKeys: []APIKey{
{Key: "structured-key", Name: "primary", Remark: "prod"},
},
}
cfg.NormalizeCredentials()
if len(cfg.Keys) != 1 || cfg.Keys[0] != "structured-key" {
t.Fatalf("unexpected normalized keys: %#v", cfg.Keys)
}
if len(cfg.APIKeys) != 1 {
t.Fatalf("unexpected normalized api keys: %#v", cfg.APIKeys)
}
if cfg.APIKeys[0].Key != "structured-key" || cfg.APIKeys[0].Name != "primary" || cfg.APIKeys[0].Remark != "prod" {
t.Fatalf("unexpected structured api key metadata: %#v", cfg.APIKeys[0])
}
}
func TestStoreModelAliasesIncludesDefaultsAndOverrides(t *testing.T) {
t.Setenv("DS2API_CONFIG_JSON", `{"keys":[],"accounts":[],"model_aliases":{"claude-opus-4-6":"deepseek-v4-pro-search"}}`)
store := LoadStore()
aliases := store.ModelAliases()
if aliases["claude-sonnet-4-6"] != "deepseek-v4-flash" {
t.Fatalf("expected default alias to remain available, got %q", aliases["claude-sonnet-4-6"])
}
if aliases["claude-opus-4-6"] != "deepseek-v4-pro-search" {
t.Fatalf("expected custom alias override, got %q", aliases["claude-opus-4-6"])
}
}
func TestStoreModelAliasesDefault(t *testing.T) {
t.Setenv("DS2API_CONFIG_JSON", `{"keys":[],"accounts":[]}`)
store := LoadStore()
mapping := store.ClaudeMapping()
// Even without config mapping, there are defaults
if mapping == nil {
t.Fatal("expected non-nil mapping (may contain defaults)")
aliases := store.ModelAliases()
if aliases == nil {
t.Fatal("expected non-nil aliases")
}
if aliases["claude-sonnet-4-6"] != "deepseek-v4-flash" {
t.Fatalf("expected built-in alias, got %q", aliases["claude-sonnet-4-6"])
}
}
@@ -597,18 +734,12 @@ func TestOpenAIModelsResponse(t *testing.T) {
t.Fatal("expected non-empty models list")
}
expected := map[string]bool{
"deepseek-chat": false,
"deepseek-reasoner": false,
"deepseek-chat-search": false,
"deepseek-reasoner-search": false,
"deepseek-expert-chat": false,
"deepseek-expert-reasoner": false,
"deepseek-expert-chat-search": false,
"deepseek-expert-reasoner-search": false,
"deepseek-vision-chat": false,
"deepseek-vision-reasoner": false,
"deepseek-vision-chat-search": false,
"deepseek-vision-reasoner-search": false,
"deepseek-v4-flash": false,
"deepseek-v4-pro": false,
"deepseek-v4-flash-search": false,
"deepseek-v4-pro-search": false,
"deepseek-v4-vision": false,
"deepseek-v4-vision-search": false,
}
for _, model := range data {
if _, ok := expected[model.ID]; ok {

View File

@@ -0,0 +1,158 @@
package config
import (
"slices"
"strings"
)
func (c *Config) ReconcileCredentials(base Config) {
if c == nil {
return
}
currKeys := normalizeKeys(c.Keys)
currAPIKeys := normalizeAPIKeys(c.APIKeys)
baseKeys := normalizeKeys(base.Keys)
baseAPIKeys := normalizeAPIKeys(base.APIKeys)
keysChanged := !slices.Equal(currKeys, baseKeys)
apiKeysChanged := !equalAPIKeys(currAPIKeys, baseAPIKeys)
if keysChanged && !apiKeysChanged {
c.APIKeys = apiKeysFromStrings(currKeys, apiKeyMap(baseAPIKeys))
} else {
c.APIKeys = currAPIKeys
}
c.Keys = apiKeysToStrings(c.APIKeys)
}
func normalizeKeys(keys []string) []string {
if len(keys) == 0 {
return nil
}
out := make([]string, 0, len(keys))
seen := make(map[string]struct{}, len(keys))
for _, key := range keys {
key = strings.TrimSpace(key)
if key == "" {
continue
}
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
out = append(out, key)
}
if len(out) == 0 {
return nil
}
return out
}
func normalizeAPIKeys(items []APIKey) []APIKey {
if len(items) == 0 {
return nil
}
out := make([]APIKey, 0, len(items))
seen := make(map[string]struct{}, len(items))
for _, item := range items {
key := strings.TrimSpace(item.Key)
if key == "" {
continue
}
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
out = append(out, APIKey{
Key: key,
Name: strings.TrimSpace(item.Name),
Remark: strings.TrimSpace(item.Remark),
})
}
if len(out) == 0 {
return nil
}
return out
}
func apiKeysFromStrings(keys []string, meta map[string]APIKey) []APIKey {
if len(keys) == 0 {
return nil
}
out := make([]APIKey, 0, len(keys))
seen := make(map[string]struct{}, len(keys))
for _, key := range keys {
key = strings.TrimSpace(key)
if key == "" {
continue
}
if _, ok := seen[key]; ok {
continue
}
seen[key] = struct{}{}
if item, ok := meta[key]; ok {
out = append(out, APIKey{
Key: key,
Name: strings.TrimSpace(item.Name),
Remark: strings.TrimSpace(item.Remark),
})
continue
}
out = append(out, APIKey{Key: key})
}
if len(out) == 0 {
return nil
}
return out
}
func apiKeysToStrings(items []APIKey) []string {
if len(items) == 0 {
return nil
}
keys := make([]string, 0, len(items))
for _, item := range items {
key := strings.TrimSpace(item.Key)
if key == "" {
continue
}
keys = append(keys, key)
}
if len(keys) == 0 {
return nil
}
return keys
}
func apiKeyMap(items []APIKey) map[string]APIKey {
if len(items) == 0 {
return nil
}
out := make(map[string]APIKey, len(items))
for _, item := range items {
key := strings.TrimSpace(item.Key)
if key == "" {
continue
}
if _, ok := out[key]; ok {
continue
}
out[key] = APIKey{
Key: key,
Name: strings.TrimSpace(item.Name),
Remark: strings.TrimSpace(item.Remark),
}
}
return out
}
func equalAPIKeys(a, b []APIKey) bool {
if len(a) != len(b) {
return false
}
return slices.EqualFunc(a, b, func(x, y APIKey) bool {
return strings.TrimSpace(x.Key) == strings.TrimSpace(y.Key) &&
strings.TrimSpace(x.Name) == strings.TrimSpace(y.Name) &&
strings.TrimSpace(x.Remark) == strings.TrimSpace(y.Remark)
})
}

View File

@@ -7,22 +7,63 @@ type mockModelAliasReader map[string]string
func (m mockModelAliasReader) ModelAliases() map[string]string { return m }
func TestResolveModelDirectDeepSeek(t *testing.T) {
got, ok := ResolveModel(nil, "deepseek-chat")
if !ok || got != "deepseek-chat" {
t.Fatalf("expected deepseek-chat, got ok=%v model=%q", ok, got)
got, ok := ResolveModel(nil, "deepseek-v4-flash")
if !ok || got != "deepseek-v4-flash" {
t.Fatalf("expected deepseek-v4-flash, got ok=%v model=%q", ok, got)
}
}
func TestResolveModelAlias(t *testing.T) {
got, ok := ResolveModel(nil, "gpt-4.1")
if !ok || got != "deepseek-chat" {
t.Fatalf("expected alias gpt-4.1 -> deepseek-chat, got ok=%v model=%q", ok, got)
if !ok || got != "deepseek-v4-flash" {
t.Fatalf("expected alias gpt-4.1 -> deepseek-v4-flash, got ok=%v model=%q", ok, got)
}
}
func TestResolveLatestOpenAIAlias(t *testing.T) {
got, ok := ResolveModel(nil, "gpt-5.5")
if !ok || got != "deepseek-v4-flash" {
t.Fatalf("expected alias gpt-5.5 -> deepseek-v4-flash, got ok=%v model=%q", ok, got)
}
}
func TestResolveLatestClaudeAlias(t *testing.T) {
got, ok := ResolveModel(nil, "claude-sonnet-4-6")
if !ok || got != "deepseek-v4-flash" {
t.Fatalf("expected alias claude-sonnet-4-6 -> deepseek-v4-flash, got ok=%v model=%q", ok, got)
}
}
func TestResolveExpandedHistoricalAliases(t *testing.T) {
cases := []struct {
name string
model string
want string
}{
{name: "openai old chatgpt", model: "chatgpt-4o", want: "deepseek-v4-flash"},
{name: "openai codex max", model: "gpt-5.1-codex-max", want: "deepseek-v4-pro"},
{name: "openai deep research", model: "o3-deep-research", want: "deepseek-v4-pro-search"},
{name: "openai historical reasoning", model: "o1-preview", want: "deepseek-v4-pro"},
{name: "claude latest historical", model: "claude-3-5-sonnet-latest", want: "deepseek-v4-flash"},
{name: "claude historical opus", model: "claude-3-opus-20240229", want: "deepseek-v4-pro"},
{name: "claude historical haiku", model: "claude-3-haiku-20240307", want: "deepseek-v4-flash"},
{name: "gemini latest alias", model: "gemini-flash-latest", want: "deepseek-v4-flash"},
{name: "gemini historical pro", model: "gemini-1.5-pro", want: "deepseek-v4-pro"},
{name: "gemini vision legacy", model: "gemini-pro-vision", want: "deepseek-v4-vision"},
}
for _, tc := range cases {
t.Run(tc.name, func(t *testing.T) {
got, ok := ResolveModel(nil, tc.model)
if !ok || got != tc.want {
t.Fatalf("expected alias %s -> %s, got ok=%v model=%q", tc.model, tc.want, ok, got)
}
})
}
}
func TestResolveModelHeuristicReasoner(t *testing.T) {
got, ok := ResolveModel(nil, "o3-super")
if !ok || got != "deepseek-reasoner" {
if !ok || got != "deepseek-v4-pro" {
t.Fatalf("expected heuristic reasoner, got ok=%v model=%q", ok, got)
}
}
@@ -34,28 +75,58 @@ func TestResolveModelUnknown(t *testing.T) {
}
}
func TestResolveModelRejectsLegacyDeepSeekIDs(t *testing.T) {
legacyModels := []string{
"deepseek-chat",
"deepseek-reasoner",
"deepseek-chat-search",
"deepseek-reasoner-search",
"deepseek-expert-chat",
"deepseek-expert-reasoner",
"deepseek-vision-chat",
}
for _, model := range legacyModels {
if got, ok := ResolveModel(nil, model); ok {
t.Fatalf("expected legacy model %q to be rejected, got %q", model, got)
}
}
}
func TestResolveModelRejectsRetiredHistoricalModels(t *testing.T) {
retiredModels := []string{
"claude-2.1",
"claude-instant-1.2",
"gpt-3.5-turbo",
}
for _, model := range retiredModels {
if got, ok := ResolveModel(nil, model); ok {
t.Fatalf("expected retired model %q to be rejected, got %q", model, got)
}
}
}
func TestResolveModelDirectDeepSeekExpert(t *testing.T) {
got, ok := ResolveModel(nil, "deepseek-expert-chat")
if !ok || got != "deepseek-expert-chat" {
t.Fatalf("expected deepseek-expert-chat, got ok=%v model=%q", ok, got)
got, ok := ResolveModel(nil, "deepseek-v4-pro")
if !ok || got != "deepseek-v4-pro" {
t.Fatalf("expected deepseek-v4-pro, got ok=%v model=%q", ok, got)
}
}
func TestResolveModelCustomAliasToExpert(t *testing.T) {
got, ok := ResolveModel(mockModelAliasReader{
"my-expert-model": "deepseek-expert-reasoner-search",
"my-expert-model": "deepseek-v4-pro-search",
}, "my-expert-model")
if !ok || got != "deepseek-expert-reasoner-search" {
t.Fatalf("expected alias -> deepseek-expert-reasoner-search, got ok=%v model=%q", ok, got)
if !ok || got != "deepseek-v4-pro-search" {
t.Fatalf("expected alias -> deepseek-v4-pro-search, got ok=%v model=%q", ok, got)
}
}
func TestResolveModelCustomAliasToVision(t *testing.T) {
got, ok := ResolveModel(mockModelAliasReader{
"my-vision-model": "deepseek-vision-chat-search",
"my-vision-model": "deepseek-v4-vision-search",
}, "my-vision-model")
if !ok || got != "deepseek-vision-chat-search" {
t.Fatalf("expected alias -> deepseek-vision-chat-search, got ok=%v model=%q", ok, got)
if !ok || got != "deepseek-v4-vision-search" {
t.Fatalf("expected alias -> deepseek-v4-vision-search, got ok=%v model=%q", ok, got)
}
}


@@ -15,28 +15,22 @@ type ModelAliasReader interface {
}
var DeepSeekModels = []ModelInfo{
{ID: "deepseek-chat", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-reasoner", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-chat-search", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-reasoner-search", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-expert-chat", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-expert-reasoner", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-expert-chat-search", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-expert-reasoner-search", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-vision-chat", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-vision-reasoner", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-vision-chat-search", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-vision-reasoner-search", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-v4-flash", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-v4-pro", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-v4-flash-search", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-v4-pro-search", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-v4-vision", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
{ID: "deepseek-v4-vision-search", Object: "model", Created: 1677610602, OwnedBy: "deepseek", Permission: []any{}},
}
var ClaudeModels = []ModelInfo{
// Current aliases
{ID: "claude-opus-4-6", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-sonnet-4-5", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-sonnet-4-6", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-haiku-4-5", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
// Current snapshots
{ID: "claude-opus-4-5-20251101", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
// Claude 4.x snapshots and prior aliases kept for compatibility
{ID: "claude-sonnet-4-5", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-opus-4-1", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-opus-4-1-20250805", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-opus-4-0", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
@@ -57,44 +51,13 @@ var ClaudeModels = []ModelInfo{
{ID: "claude-3-5-haiku-latest", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-3-5-haiku-20241022", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-3-haiku-20240307", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
// Claude 2.x and 1.x (retired but accepted for compatibility)
{ID: "claude-2.1", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-2.0", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-1.3", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-1.2", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-1.1", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-1.0", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-instant-1.2", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-instant-1.1", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
{ID: "claude-instant-1.0", Object: "model", Created: 1715635200, OwnedBy: "anthropic"},
}
func GetModelConfig(model string) (thinking bool, search bool, ok bool) {
switch lower(model) {
case "deepseek-chat":
return false, false, true
case "deepseek-reasoner":
case "deepseek-v4-flash", "deepseek-v4-pro", "deepseek-v4-vision":
return true, false, true
case "deepseek-chat-search":
return false, true, true
case "deepseek-reasoner-search":
return true, true, true
case "deepseek-expert-chat":
return false, false, true
case "deepseek-expert-reasoner":
return true, false, true
case "deepseek-expert-chat-search":
return false, true, true
case "deepseek-expert-reasoner-search":
return true, true, true
case "deepseek-vision-chat":
return false, false, true
case "deepseek-vision-reasoner":
return true, false, true
case "deepseek-vision-chat-search":
return false, true, true
case "deepseek-vision-reasoner-search":
case "deepseek-v4-flash-search", "deepseek-v4-pro-search", "deepseek-v4-vision-search":
return true, true, true
default:
return false, false, false
@@ -103,11 +66,11 @@ func GetModelConfig(model string) (thinking bool, search bool, ok bool) {
func GetModelType(model string) (modelType string, ok bool) {
switch lower(model) {
case "deepseek-chat", "deepseek-reasoner", "deepseek-chat-search", "deepseek-reasoner-search":
case "deepseek-v4-flash", "deepseek-v4-flash-search":
return "default", true
case "deepseek-expert-chat", "deepseek-expert-reasoner", "deepseek-expert-chat-search", "deepseek-expert-reasoner-search":
case "deepseek-v4-pro", "deepseek-v4-pro-search":
return "expert", true
case "deepseek-vision-chat", "deepseek-vision-reasoner", "deepseek-vision-chat-search", "deepseek-vision-reasoner-search":
case "deepseek-v4-vision", "deepseek-v4-vision-search":
return "vision", true
default:
return "", false
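Taken together, the consolidated switches in GetModelConfig and GetModelType reduce the v4 catalogue to a small table; a minimal standalone sketch of the combined mapping (helper names here are illustrative, not part of the package):

```go
package main

import (
	"fmt"
	"strings"
)

// v4Info captures what GetModelConfig and GetModelType in the diff
// report for each consolidated DeepSeek v4 id.
type v4Info struct {
	Thinking bool   // all v4 ids enable thinking per GetModelConfig
	Search   bool   // true for the -search variants
	Type     string // "default" (flash), "expert" (pro), "vision"
}

// lookupV4 mirrors the two switches: the base id picks the type,
// a "-search" suffix toggles the search flag.
func lookupV4(model string) (v4Info, bool) {
	m := strings.ToLower(strings.TrimSpace(model))
	search := strings.HasSuffix(m, "-search")
	base := strings.TrimSuffix(m, "-search")
	types := map[string]string{
		"deepseek-v4-flash":  "default",
		"deepseek-v4-pro":    "expert",
		"deepseek-v4-vision": "vision",
	}
	t, ok := types[base]
	if !ok {
		return v4Info{}, false
	}
	return v4Info{Thinking: true, Search: search, Type: t}, true
}

func main() {
	for _, m := range []string{"deepseek-v4-pro-search", "deepseek-v4-flash", "deepseek-chat"} {
		info, ok := lookupV4(m)
		fmt.Println(m, info, ok)
	}
}
```

Note that legacy ids such as deepseek-chat fall through to the default branch in both switches, which is what lets ResolveModel reject them.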
@@ -121,27 +84,105 @@ func IsSupportedDeepSeekModel(model string) bool {
func DefaultModelAliases() map[string]string {
return map[string]string{
"gpt-4o": "deepseek-chat",
"gpt-4.1": "deepseek-chat",
"gpt-4.1-mini": "deepseek-chat",
"gpt-4.1-nano": "deepseek-chat",
"gpt-5": "deepseek-chat",
"gpt-5-mini": "deepseek-chat",
"gpt-5-codex": "deepseek-reasoner",
"o1": "deepseek-reasoner",
"o1-mini": "deepseek-reasoner",
"o3": "deepseek-reasoner",
"o3-mini": "deepseek-reasoner",
"claude-sonnet-4-5": "deepseek-chat",
"claude-haiku-4-5": "deepseek-chat",
"claude-opus-4-6": "deepseek-reasoner",
"claude-3-5-sonnet": "deepseek-chat",
"claude-3-5-haiku": "deepseek-chat",
"claude-3-opus": "deepseek-reasoner",
"gemini-2.5-pro": "deepseek-chat",
"gemini-2.5-flash": "deepseek-chat",
"llama-3.1-70b-instruct": "deepseek-chat",
"qwen-max": "deepseek-chat",
// OpenAI GPT / ChatGPT families
"chatgpt-4o": "deepseek-v4-flash",
"gpt-4": "deepseek-v4-flash",
"gpt-4-turbo": "deepseek-v4-flash",
"gpt-4-turbo-preview": "deepseek-v4-flash",
"gpt-4.5-preview": "deepseek-v4-flash",
"gpt-4o": "deepseek-v4-flash",
"gpt-4o-mini": "deepseek-v4-flash",
"gpt-4.1": "deepseek-v4-flash",
"gpt-4.1-mini": "deepseek-v4-flash",
"gpt-4.1-nano": "deepseek-v4-flash",
"gpt-5": "deepseek-v4-flash",
"gpt-5-chat": "deepseek-v4-flash",
"gpt-5.1": "deepseek-v4-flash",
"gpt-5.1-chat": "deepseek-v4-flash",
"gpt-5.2": "deepseek-v4-flash",
"gpt-5.2-chat": "deepseek-v4-flash",
"gpt-5.3-chat": "deepseek-v4-flash",
"gpt-5.4": "deepseek-v4-flash",
"gpt-5.5": "deepseek-v4-flash",
"gpt-5-mini": "deepseek-v4-flash",
"gpt-5-nano": "deepseek-v4-flash",
"gpt-5.4-mini": "deepseek-v4-flash",
"gpt-5.4-nano": "deepseek-v4-flash",
"gpt-5-pro": "deepseek-v4-pro",
"gpt-5.2-pro": "deepseek-v4-pro",
"gpt-5.4-pro": "deepseek-v4-pro",
"gpt-5.5-pro": "deepseek-v4-pro",
"gpt-5-codex": "deepseek-v4-pro",
"gpt-5.1-codex": "deepseek-v4-pro",
"gpt-5.1-codex-mini": "deepseek-v4-pro",
"gpt-5.1-codex-max": "deepseek-v4-pro",
"gpt-5.2-codex": "deepseek-v4-pro",
"gpt-5.3-codex": "deepseek-v4-pro",
"codex-mini-latest": "deepseek-v4-pro",
// OpenAI reasoning / research families
"o1": "deepseek-v4-pro",
"o1-preview": "deepseek-v4-pro",
"o1-mini": "deepseek-v4-pro",
"o1-pro": "deepseek-v4-pro",
"o3": "deepseek-v4-pro",
"o3-mini": "deepseek-v4-pro",
"o3-pro": "deepseek-v4-pro",
"o3-deep-research": "deepseek-v4-pro-search",
"o4-mini": "deepseek-v4-pro",
"o4-mini-deep-research": "deepseek-v4-pro-search",
// Claude current and historical aliases
"claude-opus-4-6": "deepseek-v4-pro",
"claude-opus-4-1": "deepseek-v4-pro",
"claude-opus-4-1-20250805": "deepseek-v4-pro",
"claude-opus-4-0": "deepseek-v4-pro",
"claude-opus-4-20250514": "deepseek-v4-pro",
"claude-sonnet-4-6": "deepseek-v4-flash",
"claude-sonnet-4-5": "deepseek-v4-flash",
"claude-sonnet-4-5-20250929": "deepseek-v4-flash",
"claude-sonnet-4-0": "deepseek-v4-flash",
"claude-sonnet-4-20250514": "deepseek-v4-flash",
"claude-haiku-4-5": "deepseek-v4-flash",
"claude-haiku-4-5-20251001": "deepseek-v4-flash",
"claude-3-7-sonnet": "deepseek-v4-flash",
"claude-3-7-sonnet-latest": "deepseek-v4-flash",
"claude-3-7-sonnet-20250219": "deepseek-v4-flash",
"claude-3-5-sonnet": "deepseek-v4-flash",
"claude-3-5-sonnet-latest": "deepseek-v4-flash",
"claude-3-5-sonnet-20240620": "deepseek-v4-flash",
"claude-3-5-sonnet-20241022": "deepseek-v4-flash",
"claude-3-5-haiku": "deepseek-v4-flash",
"claude-3-5-haiku-latest": "deepseek-v4-flash",
"claude-3-5-haiku-20241022": "deepseek-v4-flash",
"claude-3-opus": "deepseek-v4-pro",
"claude-3-opus-20240229": "deepseek-v4-pro",
"claude-3-sonnet": "deepseek-v4-flash",
"claude-3-sonnet-20240229": "deepseek-v4-flash",
"claude-3-haiku": "deepseek-v4-flash",
"claude-3-haiku-20240307": "deepseek-v4-flash",
// Gemini current and historical text / multimodal models
"gemini-pro": "deepseek-v4-pro",
"gemini-pro-vision": "deepseek-v4-vision",
"gemini-pro-latest": "deepseek-v4-pro",
"gemini-flash-latest": "deepseek-v4-flash",
"gemini-1.5-pro": "deepseek-v4-pro",
"gemini-1.5-flash": "deepseek-v4-flash",
"gemini-1.5-flash-8b": "deepseek-v4-flash",
"gemini-2.0-flash": "deepseek-v4-flash",
"gemini-2.0-flash-lite": "deepseek-v4-flash",
"gemini-2.5-pro": "deepseek-v4-pro",
"gemini-2.5-flash": "deepseek-v4-flash",
"gemini-2.5-flash-lite": "deepseek-v4-flash",
"gemini-3.1-pro": "deepseek-v4-pro",
"gemini-3-pro": "deepseek-v4-pro",
"gemini-3-flash": "deepseek-v4-flash",
"gemini-3.1-flash": "deepseek-v4-flash",
"gemini-3.1-flash-lite": "deepseek-v4-flash",
"llama-3.1-70b-instruct": "deepseek-v4-flash",
"qwen-max": "deepseek-v4-flash",
}
}
@@ -150,6 +191,9 @@ func ResolveModel(store ModelAliasReader, requested string) (string, bool) {
if model == "" {
return "", false
}
if isRetiredHistoricalModel(model) {
return "", false
}
if IsSupportedDeepSeekModel(model) {
return model, true
}
@@ -179,23 +223,44 @@ func ResolveModel(store ModelAliasReader, requested string) (string, bool) {
return "", false
}
useVision := strings.Contains(model, "vision")
useReasoner := strings.Contains(model, "reason") ||
strings.Contains(model, "reasoner") ||
strings.HasPrefix(model, "o1") ||
strings.HasPrefix(model, "o3") ||
strings.Contains(model, "opus") ||
strings.Contains(model, "slow") ||
strings.Contains(model, "r1")
useSearch := strings.Contains(model, "search")
switch {
case useVision && useSearch:
return "deepseek-v4-vision-search", true
case useVision:
return "deepseek-v4-vision", true
case useReasoner && useSearch:
return "deepseek-reasoner-search", true
return "deepseek-v4-pro-search", true
case useReasoner:
return "deepseek-reasoner", true
return "deepseek-v4-pro", true
case useSearch:
return "deepseek-chat-search", true
return "deepseek-v4-flash-search", true
default:
return "deepseek-chat", true
return "deepseek-v4-flash", true
}
}
func isRetiredHistoricalModel(model string) bool {
switch {
case strings.HasPrefix(model, "claude-1."):
return true
case strings.HasPrefix(model, "claude-2."):
return true
case strings.HasPrefix(model, "claude-instant-"):
return true
case strings.HasPrefix(model, "gpt-3.5"):
return true
default:
return false
}
}
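The fallback heuristic in ResolveModel can be exercised in isolation; a minimal sketch reproducing its substring rules (a standalone reimplementation under assumed names, not the package's exported API):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveHeuristic mirrors the fallback in ResolveModel: "vision"
// selects the vision model, reasoning cues (reason/o1/o3/opus/slow/r1)
// select pro, "search" picks the -search variant, and anything else
// degrades to deepseek-v4-flash.
func resolveHeuristic(model string) string {
	m := strings.ToLower(model)
	vision := strings.Contains(m, "vision")
	reasoner := strings.Contains(m, "reason") ||
		strings.HasPrefix(m, "o1") ||
		strings.HasPrefix(m, "o3") ||
		strings.Contains(m, "opus") ||
		strings.Contains(m, "slow") ||
		strings.Contains(m, "r1")
	search := strings.Contains(m, "search")
	switch {
	case vision && search:
		return "deepseek-v4-vision-search"
	case vision:
		return "deepseek-v4-vision"
	case reasoner && search:
		return "deepseek-v4-pro-search"
	case reasoner:
		return "deepseek-v4-pro"
	case search:
		return "deepseek-v4-flash-search"
	default:
		return "deepseek-v4-flash"
	}
}

func main() {
	for _, m := range []string{"my-r1-search", "some-vision-model", "plain-model"} {
		fmt.Println(m, "->", resolveHeuristic(m))
	}
}
```

The vision check deliberately wins over the reasoning check, so an id containing both cues still lands on a vision model.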


@@ -37,6 +37,10 @@ func RawStreamSampleRoot() string {
return ResolvePath("DS2API_RAW_STREAM_SAMPLE_ROOT", "tests/raw_stream_samples")
}
func ChatHistoryPath() string {
return ResolvePath("DS2API_CHAT_HISTORY_PATH", "data/chat_history.json")
}
func StaticAdminDir() string {
return ResolvePath("DS2API_STATIC_ADMIN_DIR", "static/admin")
}


@@ -43,6 +43,7 @@ func LoadStoreWithError() (*Store, error) {
func loadStore() (*Store, error) {
cfg, fromEnv, err := loadConfig()
cfg.NormalizeCredentials()
if validateErr := ValidateConfig(cfg); validateErr != nil {
err = errors.Join(err, validateErr)
}
@@ -112,6 +113,7 @@ func loadConfigFromFile(path string) (Config, error) {
if err := json.Unmarshal(content, &cfg); err != nil {
return Config{}, err
}
cfg.NormalizeCredentials()
cfg.DropInvalidAccounts()
if strings.Contains(string(content), `"test_status"`) && !IsVercel() {
if b, err := json.MarshalIndent(cfg, "", " "); err == nil {
@@ -207,6 +209,7 @@ func (s *Store) UpdateAccountToken(identifier, token string) error {
func (s *Store) Replace(cfg Config) error {
s.mu.Lock()
defer s.mu.Unlock()
cfg.NormalizeCredentials()
s.cfg = cfg.Clone()
s.rebuildIndexes()
return s.saveLocked()
@@ -215,10 +218,13 @@ func (s *Store) Replace(cfg Config) error {
func (s *Store) Update(mutator func(*Config) error) error {
s.mu.Lock()
defer s.mu.Unlock()
cfg := s.cfg.Clone()
base := s.cfg.Clone()
cfg := base.Clone()
if err := mutator(&cfg); err != nil {
return err
}
cfg.ReconcileCredentials(base)
cfg.NormalizeCredentials()
s.cfg = cfg
s.rebuildIndexes()
return s.saveLocked()


@@ -6,18 +6,6 @@ import (
"strings"
)
func (s *Store) ClaudeMapping() map[string]string {
s.mu.RLock()
defer s.mu.RUnlock()
if len(s.cfg.ClaudeModelMap) > 0 {
return cloneStringMap(s.cfg.ClaudeModelMap)
}
if len(s.cfg.ClaudeMapping) > 0 {
return cloneStringMap(s.cfg.ClaudeMapping)
}
return map[string]string{"fast": "deepseek-chat", "slow": "deepseek-reasoner"}
}
func (s *Store) ModelAliases() map[string]string {
s.mu.RLock()
defer s.mu.RUnlock()
@@ -174,3 +162,16 @@ func (s *Store) RuntimeTokenRefreshIntervalHours() int {
func (s *Store) AutoDeleteSessions() bool {
return s.AutoDeleteMode() != "none"
}
func (s *Store) HistorySplitEnabled() bool {
return true
}
func (s *Store) HistorySplitTriggerAfterTurns() int {
s.mu.RLock()
defer s.mu.RUnlock()
if s.cfg.HistorySplit.TriggerAfterTurns == nil || *s.cfg.HistorySplit.TriggerAfterTurns <= 0 {
return 1
}
return *s.cfg.HistorySplit.TriggerAfterTurns
}


@@ -0,0 +1,42 @@
package config
import "testing"
func TestStoreHistorySplitAccessors(t *testing.T) {
store := &Store{cfg: Config{}}
if !store.HistorySplitEnabled() {
t.Fatal("expected history split enabled by default")
}
if got := store.HistorySplitTriggerAfterTurns(); got != 1 {
t.Fatalf("default history split trigger_after_turns=%d want=1", got)
}
enabled := false
turns := 3
store.cfg.HistorySplit = HistorySplitConfig{
Enabled: &enabled,
TriggerAfterTurns: &turns,
}
if !store.HistorySplitEnabled() {
t.Fatal("expected history split to stay enabled after legacy disabled override")
}
if got := store.HistorySplitTriggerAfterTurns(); got != 3 {
t.Fatalf("history split trigger_after_turns=%d want=3", got)
}
}
func TestStoreHistorySplitLegacyDisabledConfigNormalizesToEnabled(t *testing.T) {
t.Setenv("DS2API_CONFIG_JSON", `{"keys":["k1"],"history_split":{"enabled":false,"trigger_after_turns":2}}`)
store := LoadStore()
if !store.HistorySplitEnabled() {
t.Fatal("expected history split enabled when legacy config disables it")
}
snap := store.Snapshot()
if snap.HistorySplit.Enabled == nil || !*snap.HistorySplit.Enabled {
t.Fatalf("expected normalized history_split.enabled=true, got %#v", snap.HistorySplit.Enabled)
}
if got := store.HistorySplitTriggerAfterTurns(); got != 2 {
t.Fatalf("history split trigger_after_turns=%d want=2", got)
}
}


@@ -24,6 +24,9 @@ func ValidateConfig(c Config) error {
if err := ValidateAutoDeleteConfig(c.AutoDelete); err != nil {
return err
}
if err := ValidateHistorySplitConfig(c.HistorySplit); err != nil {
return err
}
if err := ValidateAccountProxyReferences(c.Accounts, c.Proxies); err != nil {
return err
}
@@ -111,6 +114,15 @@ func ValidateAutoDeleteConfig(autoDelete AutoDeleteConfig) error {
return ValidateAutoDeleteMode(autoDelete.Mode)
}
func ValidateHistorySplitConfig(historySplit HistorySplitConfig) error {
if historySplit.TriggerAfterTurns != nil {
if err := ValidateIntRange("history_split.trigger_after_turns", *historySplit.TriggerAfterTurns, 1, 1000, true); err != nil {
return err
}
}
return nil
}
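ValidateHistorySplitConfig delegates to ValidateIntRange with required=true, so a configured value of 0 is rejected rather than skipped; a hedged standalone sketch of that contract (names mirror the diff, the implementation body is assumed):

```go
package main

import "fmt"

// validateIntRange sketches the contract implied by the diff's
// ValidateIntRange: a zero value passes only when the field is
// optional; otherwise the value must fall inside [min, max].
func validateIntRange(name string, value, min, max int, required bool) error {
	if value == 0 && !required {
		return nil
	}
	if value < min || value > max {
		return fmt.Errorf("%s must be between %d and %d", name, min, max)
	}
	return nil
}

func main() {
	// trigger_after_turns is validated with required=true, range 1..1000,
	// matching the call in ValidateHistorySplitConfig.
	fmt.Println(validateIntRange("history_split.trigger_after_turns", 0, 1, 1000, true))
	fmt.Println(validateIntRange("history_split.trigger_after_turns", 2, 1, 1000, true))
}
```

This is why the validation test below expects an error mentioning history_split.trigger_after_turns when the pointer is set to 0.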
func ValidateIntRange(name string, value, min, max int, required bool) error {
if value == 0 && !required {
return nil


@@ -39,6 +39,13 @@ func TestValidateConfigRejectsInvalidValues(t *testing.T) {
cfg: Config{AutoDelete: AutoDeleteConfig{Mode: "maybe"}},
want: "auto_delete.mode",
},
{
name: "history split",
cfg: Config{HistorySplit: HistorySplitConfig{
TriggerAfterTurns: intPtr(0),
}},
want: "history_split.trigger_after_turns",
},
}
for _, tc := range tests {
@@ -59,3 +66,5 @@ func TestValidateConfigAcceptsLegacyAutoDeleteSessions(t *testing.T) {
t.Fatalf("expected legacy auto_delete.sessions config to remain valid, got %v", err)
}
}
func intPtr(v int) *int { return &v }


@@ -1,7 +1,8 @@
package deepseek
package client
import (
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"errors"
"fmt"
"net/http"
@@ -28,7 +29,7 @@ func (c *Client) Login(ctx context.Context, acc config.Account) (string, error)
} else {
return "", errors.New("missing email/mobile")
}
resp, err := c.postJSON(ctx, clients.regular, clients.fallback, DeepSeekLoginURL, BaseHeaders, payload)
resp, err := c.postJSON(ctx, clients.regular, clients.fallback, dsprotocol.DeepSeekLoginURL, dsprotocol.BaseHeaders, payload)
if err != nil {
return "", err
}
@@ -58,7 +59,7 @@ func (c *Client) CreateSession(ctx context.Context, a *auth.RequestAuth, maxAtte
refreshed := false
for attempts < maxAttempts {
headers := c.authHeaders(a.DeepSeekToken)
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, DeepSeekCreateSessionURL, headers, map[string]any{"agent": "chat"})
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, dsprotocol.DeepSeekCreateSessionURL, headers, map[string]any{"agent": "chat"})
if err != nil {
config.Logger.Warn("[create_session] request error", "error", err, "account", a.AccountID)
attempts++
@@ -91,7 +92,7 @@ func (c *Client) CreateSession(ctx context.Context, a *auth.RequestAuth, maxAtte
}
func (c *Client) GetPow(ctx context.Context, a *auth.RequestAuth, maxAttempts int) (string, error) {
return c.GetPowForTarget(ctx, a, DeepSeekCompletionTargetPath, maxAttempts)
return c.GetPowForTarget(ctx, a, dsprotocol.DeepSeekCompletionTargetPath, maxAttempts)
}
func (c *Client) GetPowForTarget(ctx context.Context, a *auth.RequestAuth, targetPath string, maxAttempts int) (string, error) {
@@ -100,16 +101,20 @@ func (c *Client) GetPowForTarget(ctx context.Context, a *auth.RequestAuth, targe
}
targetPath = strings.TrimSpace(targetPath)
if targetPath == "" {
targetPath = DeepSeekCompletionTargetPath
targetPath = dsprotocol.DeepSeekCompletionTargetPath
}
clients := c.requestClientsForAuth(ctx, a)
attempts := 0
refreshed := false
lastFailureKind := FailureUnknown
lastFailureMessage := ""
for attempts < maxAttempts {
headers := c.authHeaders(a.DeepSeekToken)
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, DeepSeekCreatePowURL, headers, map[string]any{"target_path": targetPath})
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, dsprotocol.DeepSeekCreatePowURL, headers, map[string]any{"target_path": targetPath})
if err != nil {
config.Logger.Warn("[get_pow] request error", "error", err, "account", a.AccountID, "target_path", targetPath)
lastFailureKind = FailureUnknown
lastFailureMessage = err.Error()
attempts++
continue
}
@@ -126,6 +131,12 @@ func (c *Client) GetPowForTarget(ctx context.Context, a *auth.RequestAuth, targe
return BuildPowHeader(challenge, answer)
}
config.Logger.Warn("[get_pow] failed", "status", status, "code", code, "biz_code", bizCode, "msg", msg, "biz_msg", bizMsg, "use_config_token", a.UseConfigToken, "account", a.AccountID, "target_path", targetPath)
lastFailureMessage = failureMessage(msg, bizMsg, "get pow failed")
if isTokenInvalid(status, code, bizCode, msg, bizMsg) || isAuthIndicativeBizFailure(msg, bizMsg) {
lastFailureKind = authFailureKind(a.UseConfigToken)
} else {
lastFailureKind = FailureUnknown
}
if a.UseConfigToken {
if !refreshed && shouldAttemptRefresh(status, code, bizCode, msg, bizMsg) {
if c.Auth.RefreshToken(ctx, a) {
@@ -141,12 +152,15 @@ func (c *Client) GetPowForTarget(ctx context.Context, a *auth.RequestAuth, targe
}
attempts++
}
if lastFailureKind != FailureUnknown {
return "", &RequestFailure{Op: "get pow", Kind: lastFailureKind, Message: lastFailureMessage}
}
return "", errors.New("get pow failed")
}
func (c *Client) authHeaders(token string) map[string]string {
headers := make(map[string]string, len(BaseHeaders)+1)
for k, v := range BaseHeaders {
headers := make(map[string]string, len(dsprotocol.BaseHeaders)+1)
for k, v := range dsprotocol.BaseHeaders {
headers[k] = v
}
headers["authorization"] = "Bearer " + token
@@ -210,6 +224,23 @@ func isAuthIndicativeBizFailure(msg string, bizMsg string) bool {
return false
}
func authFailureKind(useConfigToken bool) FailureKind {
if useConfigToken {
return FailureManagedUnauthorized
}
return FailureDirectUnauthorized
}
func failureMessage(msg string, bizMsg string, fallback string) string {
if trimmed := strings.TrimSpace(bizMsg); trimmed != "" {
return trimmed
}
if trimmed := strings.TrimSpace(msg); trimmed != "" {
return trimmed
}
return strings.TrimSpace(fallback)
}
// DeepSeek has returned create-session ids in both biz_data.id and
// biz_data.chat_session.id across observed response variants; accept either.
func extractCreateSessionID(resp map[string]any) string {


@@ -1,4 +1,4 @@
package deepseek
package client
import "testing"


@@ -1,4 +1,4 @@
package deepseek
package client
import "testing"


@@ -1,4 +1,4 @@
package deepseek
package client
import "testing"


@@ -1,8 +1,9 @@
package deepseek
package client
import (
"bytes"
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"encoding/json"
"errors"
"net/http"
@@ -20,10 +21,10 @@ func (c *Client) CallCompletion(ctx context.Context, a *auth.RequestAuth, payloa
clients := c.requestClientsForAuth(ctx, a)
headers := c.authHeaders(a.DeepSeekToken)
headers["x-ds-pow-response"] = powResp
captureSession := c.capture.Start("deepseek_completion", DeepSeekCompletionURL, a.AccountID, payload)
captureSession := c.capture.Start("deepseek_completion", dsprotocol.DeepSeekCompletionURL, a.AccountID, payload)
attempts := 0
for attempts < maxAttempts {
resp, err := c.streamPost(ctx, clients.stream, DeepSeekCompletionURL, headers, payload)
resp, err := c.streamPost(ctx, clients.stream, dsprotocol.DeepSeekCompletionURL, headers, payload)
if err != nil {
attempts++
time.Sleep(time.Second)


@@ -1,9 +1,10 @@
package deepseek
package client
import (
"bufio"
"bytes"
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"encoding/json"
"errors"
"io"
@@ -60,8 +61,8 @@ func (c *Client) callContinue(ctx context.Context, a *auth.RequestAuth, sessionI
"fallback_to_resume": true,
}
config.Logger.Info("[auto_continue] calling continue", "session_id", sessionID, "message_id", responseMessageID)
captureSession := c.capture.Start("deepseek_continue", DeepSeekContinueURL, a.AccountID, payload)
resp, err := c.streamPost(ctx, clients.stream, DeepSeekContinueURL, headers, payload)
captureSession := c.capture.Start("deepseek_continue", dsprotocol.DeepSeekContinueURL, a.AccountID, payload)
resp, err := c.streamPost(ctx, clients.stream, dsprotocol.DeepSeekContinueURL, headers, payload)
if err != nil {
return nil, err
}


@@ -1,8 +1,9 @@
package deepseek
package client
import (
"bytes"
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"errors"
"io"
"net/http"
@@ -58,8 +59,8 @@ func TestCallContinuePropagatesPowHeaderToFallbackRequest(t *testing.T) {
if seenPow != "pow-response-abc" {
t.Fatalf("continue request pow header=%q want=%q", seenPow, "pow-response-abc")
}
if seenURL != DeepSeekContinueURL {
t.Fatalf("continue request url=%q want=%q", seenURL, DeepSeekContinueURL)
if seenURL != dsprotocol.DeepSeekContinueURL {
t.Fatalf("continue request url=%q want=%q", seenURL, dsprotocol.DeepSeekContinueURL)
}
}
@@ -112,8 +113,8 @@ func TestCallCompletionAutoContinueThreadsPowHeader(t *testing.T) {
if seenPow != "pow-response-xyz" {
t.Fatalf("threaded continue pow header=%q want=%q", seenPow, "pow-response-xyz")
}
if seenContinueURL != DeepSeekContinueURL {
t.Fatalf("continue url=%q want=%q", seenContinueURL, DeepSeekContinueURL)
if seenContinueURL != dsprotocol.DeepSeekContinueURL {
t.Fatalf("continue url=%q want=%q", seenContinueURL, dsprotocol.DeepSeekContinueURL)
}
if !bytes.Contains(out, []byte(`"status":"WIP"`)) {
t.Fatalf("expected initial stream content in body, got=%s", string(out))


@@ -1,4 +1,4 @@
package deepseek
package client
import (
"context"


@@ -1,7 +1,8 @@
package deepseek
package client
import (
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"errors"
"fmt"
"net/http"
@@ -70,7 +71,7 @@ func (c *Client) fetchUploadedFile(ctx context.Context, a *auth.RequestAuth, fil
return nil, errors.New("file id is required")
}
clients := c.requestClientsForAuth(ctx, a)
reqURL := DeepSeekFetchFilesURL + "?file_ids=" + url.QueryEscape(fileID)
reqURL := dsprotocol.DeepSeekFetchFilesURL + "?file_ids=" + url.QueryEscape(fileID)
headers := c.authHeaders(a.DeepSeekToken)
resp, status, err := c.getJSONWithStatus(ctx, clients.regular, reqURL, headers)


@@ -1,7 +1,6 @@
package deepseek
package client
import (
"bufio"
"compress/gzip"
"io"
"net/http"
@@ -41,17 +40,10 @@ func (c *Client) jsonHeaders(headers map[string]string) map[string]string {
return out
}
func ScanSSELines(resp *http.Response, onLine func([]byte) bool) error {
scanner := bufio.NewScanner(resp.Body)
buf := make([]byte, 0, 64*1024)
scanner.Buffer(buf, 2*1024*1024)
for scanner.Scan() {
if !onLine(scanner.Bytes()) {
break
}
func cloneStringMap(in map[string]string) map[string]string {
out := make(map[string]string, len(in))
for k, v := range in {
out[k] = v
}
if err := scanner.Err(); err != nil {
return err
}
return nil
return out
}


@@ -1,4 +1,4 @@
package deepseek
package client
import (
"bytes"


@@ -1,4 +1,4 @@
package deepseek
package client
import (
"context"


@@ -1,7 +1,8 @@
package deepseek
package client
import (
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"errors"
"fmt"
"net/http"
@@ -49,7 +50,7 @@ func (c *Client) GetSessionCount(ctx context.Context, a *auth.RequestAuth, maxAt
headers := c.authHeaders(a.DeepSeekToken)
// build the request URL
reqURL := DeepSeekFetchSessionURL + "?lte_cursor.pinned=false"
reqURL := dsprotocol.DeepSeekFetchSessionURL + "?lte_cursor.pinned=false"
resp, status, err := c.getJSONWithStatus(ctx, clients.regular, reqURL, headers)
if err != nil {
@@ -109,7 +110,7 @@ func (c *Client) GetSessionCount(ctx context.Context, a *auth.RequestAuth, maxAt
func (c *Client) GetSessionCountForToken(ctx context.Context, token string) (*SessionStats, error) {
clients := c.requestClientsFromContext(ctx)
headers := c.authHeaders(token)
reqURL := DeepSeekFetchSessionURL + "?lte_cursor.pinned=false"
reqURL := dsprotocol.DeepSeekFetchSessionURL + "?lte_cursor.pinned=false"
resp, status, err := c.getJSONWithStatus(ctx, clients.regular, reqURL, headers)
if err != nil {
@@ -202,7 +203,7 @@ func (c *Client) FetchSessionPage(ctx context.Context, a *auth.RequestAuth, curs
if cursor != "" {
params.Set("lte_cursor", cursor)
}
reqURL := DeepSeekFetchSessionURL + "?" + params.Encode()
reqURL := dsprotocol.DeepSeekFetchSessionURL + "?" + params.Encode()
resp, status, err := c.getJSONWithStatus(ctx, clients.regular, reqURL, headers)
if err != nil {


@@ -1,7 +1,8 @@
package deepseek
package client
import (
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"errors"
"fmt"
"net/http"
@@ -43,7 +44,7 @@ func (c *Client) DeleteSession(ctx context.Context, a *auth.RequestAuth, session
"chat_session_id": sessionID,
}
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, DeepSeekDeleteSessionURL, headers, payload)
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, dsprotocol.DeepSeekDeleteSessionURL, headers, payload)
if err != nil {
config.Logger.Warn("[delete_session] request error", "error", err, "session_id", sessionID)
attempts++
@@ -97,7 +98,7 @@ func (c *Client) DeleteSessionForToken(ctx context.Context, token string, sessio
"chat_session_id": sessionID,
}
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, DeepSeekDeleteSessionURL, headers, payload)
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, dsprotocol.DeepSeekDeleteSessionURL, headers, payload)
if err != nil {
result.ErrorMessage = err.Error()
return result, err
@@ -120,7 +121,7 @@ func (c *Client) DeleteAllSessions(ctx context.Context, a *auth.RequestAuth) err
headers := c.authHeaders(a.DeepSeekToken)
payload := map[string]any{}
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, DeepSeekDeleteAllSessionsURL, headers, payload)
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, dsprotocol.DeepSeekDeleteAllSessionsURL, headers, payload)
if err != nil {
config.Logger.Warn("[delete_all_sessions] request error", "error", err)
return err
@@ -142,7 +143,7 @@ func (c *Client) DeleteAllSessionsForToken(ctx context.Context, token string) er
headers := c.authHeaders(token)
payload := map[string]any{}
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, DeepSeekDeleteAllSessionsURL, headers, payload)
resp, status, err := c.postJSONWithStatus(ctx, clients.regular, clients.fallback, dsprotocol.DeepSeekDeleteAllSessionsURL, headers, payload)
if err != nil {
config.Logger.Warn("[delete_all_sessions_for_token] request error", "error", err)
return err


@@ -1,8 +1,9 @@
package deepseek
package client
import (
"bytes"
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"encoding/json"
"errors"
"fmt"
@@ -63,14 +64,16 @@ func (c *Client) UploadFile(ctx context.Context, a *auth.RequestAuth, req Upload
"purpose": purpose,
"bytes": len(req.Data),
}
captureSession := c.capture.Start("deepseek_upload_file", DeepSeekUploadFileURL, a.AccountID, capturePayload)
captureSession := c.capture.Start("deepseek_upload_file", dsprotocol.DeepSeekUploadFileURL, a.AccountID, capturePayload)
attempts := 0
refreshed := false
powHeader := ""
lastFailureKind := FailureUnknown
lastFailureMessage := ""
for attempts < maxAttempts {
clients := c.requestClientsForAuth(ctx, a)
if strings.TrimSpace(powHeader) == "" {
powHeader, err = c.GetPowForTarget(ctx, a, DeepSeekUploadTargetPath, maxAttempts)
powHeader, err = c.GetPowForTarget(ctx, a, dsprotocol.DeepSeekUploadTargetPath, maxAttempts)
if err != nil {
return nil, err
}
@@ -81,10 +84,12 @@ func (c *Client) UploadFile(ctx context.Context, a *auth.RequestAuth, req Upload
headers["x-ds-pow-response"] = powHeader
headers["x-file-size"] = strconv.Itoa(len(req.Data))
headers["x-thinking-enabled"] = "1"
resp, err := c.doUpload(ctx, clients.regular, clients.fallback, DeepSeekUploadFileURL, headers, body)
resp, err := c.doUpload(ctx, clients.regular, clients.fallback, dsprotocol.DeepSeekUploadFileURL, headers, body)
if err != nil {
config.Logger.Warn("[upload_file] request error", "error", err, "account", a.AccountID, "filename", filename)
powHeader = ""
lastFailureKind = FailureUnknown
lastFailureMessage = err.Error()
attempts++
continue
}
@@ -131,6 +136,12 @@ func (c *Client) UploadFile(ctx context.Context, a *auth.RequestAuth, req Upload
}
config.Logger.Warn("[upload_file] failed", "status", resp.StatusCode, "code", code, "biz_code", bizCode, "msg", msg, "biz_msg", bizMsg, "account", a.AccountID, "filename", filename)
powHeader = ""
lastFailureMessage = failureMessage(msg, bizMsg, "upload file failed")
if isTokenInvalid(resp.StatusCode, code, bizCode, msg, bizMsg) || isAuthIndicativeBizFailure(msg, bizMsg) {
lastFailureKind = authFailureKind(a.UseConfigToken)
} else {
lastFailureKind = FailureUnknown
}
if a.UseConfigToken {
if !refreshed && shouldAttemptRefresh(resp.StatusCode, code, bizCode, msg, bizMsg) {
if c.Auth.RefreshToken(ctx, a) {
@@ -147,6 +158,9 @@ func (c *Client) UploadFile(ctx context.Context, a *auth.RequestAuth, req Upload
}
attempts++
}
if lastFailureKind != FailureUnknown {
return nil, &RequestFailure{Op: "upload file", Kind: lastFailureKind, Message: lastFailureMessage}
}
return nil, errors.New("upload file failed")
}


@@ -1,7 +1,8 @@
package deepseek
package client
import (
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"encoding/base64"
"encoding/hex"
"encoding/json"
@@ -75,7 +76,7 @@ func TestExtractUploadFileResultSupportsNestedShapes(t *testing.T) {
func TestUploadFileUsesUploadTargetPowAndMultipartHeaders(t *testing.T) {
challengeHash := powpkg.DeepSeekHashV1([]byte(powpkg.BuildPrefix("salt", 1712345678) + "42"))
powResponse := `{"code":0,"msg":"ok","data":{"biz_code":0,"biz_data":{"challenge":{"algorithm":"DeepSeekHashV1","challenge":"` + hex.EncodeToString(challengeHash[:]) + `","salt":"salt","expire_at":1712345678,"difficulty":1000,"signature":"sig","target_path":"` + DeepSeekUploadTargetPath + `"}}}}`
powResponse := `{"code":0,"msg":"ok","data":{"biz_code":0,"biz_data":{"challenge":{"algorithm":"DeepSeekHashV1","challenge":"` + hex.EncodeToString(challengeHash[:]) + `","salt":"salt","expire_at":1712345678,"difficulty":1000,"signature":"sig","target_path":"` + dsprotocol.DeepSeekUploadTargetPath + `"}}}}`
uploadResponse := `{"code":0,"msg":"ok","data":{"biz_code":0,"biz_data":{"file":{"file_id":"file_789","filename":"demo.txt","bytes":5,"status":"processed","purpose":"assistants","is_image":false}}}}`
var seenPow string
var seenTargetPath string
@@ -119,7 +120,7 @@ func TestUploadFileUsesUploadTargetPowAndMultipartHeaders(t *testing.T) {
if result.ID != "file_789" {
t.Fatalf("expected uploaded file id file_789, got %#v", result)
}
if !strings.Contains(seenTargetPath, `"target_path":"`+DeepSeekUploadTargetPath+`"`) {
if !strings.Contains(seenTargetPath, `"target_path":"`+dsprotocol.DeepSeekUploadTargetPath+`"`) {
t.Fatalf("expected upload target_path in pow request, got %q", seenTargetPath)
}
if strings.TrimSpace(seenPow) == "" {
@@ -133,8 +134,8 @@ func TestUploadFileUsesUploadTargetPowAndMultipartHeaders(t *testing.T) {
if err := json.Unmarshal(rawPow, &powHeader); err != nil {
t.Fatalf("unmarshal pow header failed: %v", err)
}
if powHeader["target_path"] != DeepSeekUploadTargetPath {
t.Fatalf("expected pow target_path %q, got %#v", DeepSeekUploadTargetPath, powHeader["target_path"])
if powHeader["target_path"] != dsprotocol.DeepSeekUploadTargetPath {
t.Fatalf("expected pow target_path %q, got %#v", dsprotocol.DeepSeekUploadTargetPath, powHeader["target_path"])
}
if seenFileSize != "5" {
t.Fatalf("expected x-file-size=5, got %q", seenFileSize)
@@ -153,7 +154,7 @@ func TestUploadFileWaitsForProcessedFetchFiles(t *testing.T) {
defer func() { fileReadySleep = oldSleep }()
challengeHash := powpkg.DeepSeekHashV1([]byte(powpkg.BuildPrefix("salt", 1712345678) + "42"))
powResponse := `{"code":0,"msg":"ok","data":{"biz_code":0,"biz_data":{"challenge":{"algorithm":"DeepSeekHashV1","challenge":"` + hex.EncodeToString(challengeHash[:]) + `","salt":"salt","expire_at":1712345678,"difficulty":1000,"signature":"sig","target_path":"` + DeepSeekUploadTargetPath + `"}}}}`
powResponse := `{"code":0,"msg":"ok","data":{"biz_code":0,"biz_data":{"challenge":{"algorithm":"DeepSeekHashV1","challenge":"` + hex.EncodeToString(challengeHash[:]) + `","salt":"salt","expire_at":1712345678,"difficulty":1000,"signature":"sig","target_path":"` + dsprotocol.DeepSeekUploadTargetPath + `"}}}}`
uploadResponse := `{"code":0,"msg":"ok","data":{"biz_code":0,"biz_data":{"file":{"file_id":"file_789","filename":"demo.txt","bytes":5,"status":"PENDING","purpose":"assistants","is_image":false}}}}`
pendingFetchResponse := `{"code":0,"msg":"ok","data":{"biz_code":0,"biz_data":{"files":[{"file_id":"file_789","filename":"demo.txt","bytes":5,"status":"PENDING","purpose":"assistants","is_image":false}]}}}`
processedFetchResponse := `{"code":0,"msg":"ok","data":{"biz_code":0,"biz_data":{"files":[{"file_id":"file_789","filename":"demo.txt","bytes":5,"status":"processed","purpose":"assistants","is_image":true}]}}}`
@@ -165,7 +166,7 @@ func TestUploadFileWaitsForProcessedFetchFiles(t *testing.T) {
switch call {
case 1:
bodyBytes, _ := io.ReadAll(req.Body)
if !strings.Contains(string(bodyBytes), `"target_path":"`+DeepSeekUploadTargetPath+`"`) {
if !strings.Contains(string(bodyBytes), `"target_path":"`+dsprotocol.DeepSeekUploadTargetPath+`"`) {
t.Fatalf("expected pow target path request, got %s", string(bodyBytes))
}
return &http.Response{StatusCode: http.StatusOK, Header: make(http.Header), Body: io.NopCloser(strings.NewReader(powResponse)), Request: req}, nil


@@ -1,4 +1,4 @@
package deepseek
package client
import (
"context"


@@ -0,0 +1,46 @@
package client
import (
"errors"
"fmt"
)
type FailureKind string
const (
FailureUnknown FailureKind = ""
FailureDirectUnauthorized FailureKind = "direct_unauthorized"
FailureManagedUnauthorized FailureKind = "managed_unauthorized"
)
type RequestFailure struct {
Op string
Kind FailureKind
Message string
}
func (e *RequestFailure) Error() string {
if e == nil {
return ""
}
switch {
case e.Op != "" && e.Message != "":
return fmt.Sprintf("%s: %s", e.Op, e.Message)
case e.Op != "":
return e.Op + " failed"
case e.Message != "":
return e.Message
default:
return "request failed"
}
}
func IsManagedUnauthorizedError(err error) bool {
var failure *RequestFailure
return errors.As(err, &failure) && failure.Kind == FailureManagedUnauthorized
}
func IsDirectUnauthorizedError(err error) bool {
var failure *RequestFailure
return errors.As(err, &failure) && failure.Kind == FailureDirectUnauthorized
}


@@ -1,4 +1,4 @@
package deepseek
package client
import (
"context"


@@ -1,4 +1,4 @@
package deepseek
package client
import (
"context"


@@ -1,7 +1,8 @@
package deepseek
package client
import (
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"fmt"
"net"
"net/http"
@@ -172,7 +173,7 @@ func applyProxyConnectivityHeaders(req *http.Request) {
if req == nil {
return
}
for key, value := range BaseHeaders {
for key, value := range dsprotocol.BaseHeaders {
key = strings.TrimSpace(key)
value = strings.TrimSpace(value)
if key == "" || value == "" {


@@ -1,7 +1,8 @@
package deepseek
package client
import (
"context"
dsprotocol "ds2api/internal/deepseek/protocol"
"net/http"
"strings"
"testing"
@@ -52,7 +53,7 @@ func TestApplyProxyConnectivityHeadersUsesBaseHeaders(t *testing.T) {
applyProxyConnectivityHeaders(req)
for key, want := range BaseHeaders {
for key, want := range dsprotocol.BaseHeaders {
if got := req.Header.Get(key); got != want {
t.Fatalf("expected header %q=%q, got %q", key, want, got)
}


@@ -1,11 +0,0 @@
package deepseek
import "ds2api/internal/prompt"
func MessagesPrepare(messages []map[string]any) string {
return prompt.MessagesPrepare(messages)
}
func MessagesPrepareWithThinking(messages []map[string]any, thinkingEnabled bool) string {
return prompt.MessagesPrepareWithThinking(messages, thinkingEnabled)
}


@@ -1,4 +1,4 @@
package deepseek
package protocol
import (
_ "embed"


@@ -1,4 +1,4 @@
package deepseek
package protocol
import "testing"


@@ -0,0 +1,21 @@
package protocol
import (
"bufio"
"net/http"
)
func ScanSSELines(resp *http.Response, onLine func([]byte) bool) error {
scanner := bufio.NewScanner(resp.Body)
buf := make([]byte, 0, 64*1024)
scanner.Buffer(buf, 2*1024*1024)
for scanner.Scan() {
if !onLine(scanner.Bytes()) {
break
}
}
if err := scanner.Err(); err != nil {
return err
}
return nil
}


@@ -117,7 +117,7 @@ func BuildResponsesFunctionCallArgumentsDonePayload(responseID, itemID string, o
}
}
func BuildResponsesFailedPayload(responseID, model, message, code string) map[string]any {
func BuildResponsesFailedPayload(responseID, model string, status int, message, code string) map[string]any {
code = strings.TrimSpace(code)
if code == "" {
code = "api_error"
@@ -129,15 +129,36 @@ func BuildResponsesFailedPayload(responseID, model, message, code string) map[st
"object": "response",
"model": model,
"status": "failed",
"status_code": status,
"error": map[string]any{
"message": message,
"type": "invalid_request_error",
"type": responsesErrorType(status),
"code": code,
"param": nil,
},
}
}
func responsesErrorType(status int) string {
switch status {
case 400, 404, 422:
return "invalid_request_error"
case 401:
return "authentication_error"
case 403:
return "permission_error"
case 429:
return "rate_limit_error"
case 503:
return "service_unavailable_error"
default:
if status >= 500 {
return "api_error"
}
return "invalid_request_error"
}
}
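
The status→type mapping can be exercised directly; note the default branch, which sends unknown 5xx codes to `api_error` and everything else to `invalid_request_error` (function body copied from the diff):

```go
package main

import "fmt"

func responsesErrorType(status int) string {
	switch status {
	case 400, 404, 422:
		return "invalid_request_error"
	case 401:
		return "authentication_error"
	case 403:
		return "permission_error"
	case 429:
		return "rate_limit_error"
	case 503:
		return "service_unavailable_error"
	default:
		if status >= 500 {
			return "api_error"
		}
		return "invalid_request_error"
	}
}

func main() {
	// 418 is not listed, so it falls through to invalid_request_error;
	// 500 is unlisted but >= 500, so it maps to api_error.
	for _, s := range []int{401, 429, 500, 418} {
		fmt.Println(s, responsesErrorType(s))
	}
}
```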
func BuildResponsesCompletedPayload(response map[string]any) map[string]any {
responseID, _ := response["id"].(string)
return map[string]any{


@@ -0,0 +1,46 @@
package accounts
import (
"net/http"
"ds2api/internal/chathistory"
"ds2api/internal/config"
adminshared "ds2api/internal/httpapi/admin/shared"
)
type Handler struct {
Store adminshared.ConfigStore
Pool adminshared.PoolController
DS adminshared.DeepSeekCaller
OpenAI adminshared.OpenAIChatCaller
ChatHistory *chathistory.Store
}
var writeJSON = adminshared.WriteJSON
func reverseAccounts(a []config.Account) { adminshared.ReverseAccounts(a) }
func intFromQuery(r *http.Request, key string, d int) int {
return adminshared.IntFromQuery(r, key, d)
}
func maskSecretPreview(secret string) string {
return adminshared.MaskSecretPreview(secret)
}
func toAccount(m map[string]any) config.Account {
return adminshared.ToAccount(m)
}
func fieldStringOptional(m map[string]any, key string) (string, bool) {
return adminshared.FieldStringOptional(m, key)
}
func accountMatchesIdentifier(acc config.Account, identifier string) bool {
return adminshared.AccountMatchesIdentifier(acc, identifier)
}
func findProxyByID(c config.Config, proxyID string) (config.Proxy, bool) {
return adminshared.FindProxyByID(c, proxyID)
}
func findAccountByIdentifier(store adminshared.ConfigStore, identifier string) (config.Account, bool) {
return adminshared.FindAccountByIdentifier(store, identifier)
}
func newRequestError(detail string) error { return adminshared.NewRequestError(detail) }
func requestErrorDetail(err error) (string, bool) {
return adminshared.RequestErrorDetail(err)
}


@@ -1,4 +1,4 @@
package admin
package accounts
import (
"encoding/json"
@@ -32,6 +32,8 @@ func (h *Handler) listAccounts(w http.ResponseWriter, r *http.Request) {
for _, acc := range accounts {
id := strings.ToLower(acc.Identifier())
if strings.Contains(id, q) ||
strings.Contains(strings.ToLower(acc.Name), q) ||
strings.Contains(strings.ToLower(acc.Remark), q) ||
strings.Contains(strings.ToLower(acc.Email), q) ||
strings.Contains(strings.ToLower(acc.Mobile), q) {
filtered = append(filtered, acc)
@@ -56,22 +58,16 @@ func (h *Handler) listAccounts(w http.ResponseWriter, r *http.Request) {
for _, acc := range accounts[start:end] {
testStatus, _ := h.Store.AccountTestStatus(acc.Identifier())
token := strings.TrimSpace(acc.Token)
preview := ""
if token != "" {
if len(token) > 20 {
preview = token[:20] + "..."
} else {
preview = token
}
}
items = append(items, map[string]any{
"identifier": acc.Identifier(),
"name": acc.Name,
"remark": acc.Remark,
"email": acc.Email,
"mobile": acc.Mobile,
"proxy_id": acc.ProxyID,
"has_password": acc.Password != "",
"has_token": token != "",
"token_preview": preview,
"token_preview": maskSecretPreview(token),
"test_status": testStatus,
})
}
@@ -112,6 +108,46 @@ func (h *Handler) addAccount(w http.ResponseWriter, r *http.Request) {
writeJSON(w, http.StatusOK, map[string]any{"success": true, "total_accounts": len(h.Store.Snapshot().Accounts)})
}
func (h *Handler) updateAccount(w http.ResponseWriter, r *http.Request) {
identifier := chi.URLParam(r, "identifier")
if decoded, err := url.PathUnescape(identifier); err == nil {
identifier = decoded
}
var req map[string]any
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeJSON(w, http.StatusBadRequest, map[string]any{"detail": "invalid json"})
return
}
name, nameOK := fieldStringOptional(req, "name")
remark, remarkOK := fieldStringOptional(req, "remark")
err := h.Store.Update(func(c *config.Config) error {
for i, acc := range c.Accounts {
if !accountMatchesIdentifier(acc, identifier) {
continue
}
if nameOK {
c.Accounts[i].Name = name
}
if remarkOK {
c.Accounts[i].Remark = remark
}
return nil
}
return newRequestError("账号不存在")
})
if err != nil {
if detail, ok := requestErrorDetail(err); ok {
writeJSON(w, http.StatusNotFound, map[string]any{"detail": detail})
return
}
writeJSON(w, http.StatusBadRequest, map[string]any{"detail": err.Error()})
return
}
writeJSON(w, http.StatusOK, map[string]any{"success": true, "total_accounts": len(h.Store.Snapshot().Accounts)})
}
func (h *Handler) deleteAccount(w http.ResponseWriter, r *http.Request) {
identifier := chi.URLParam(r, "identifier")
if decoded, err := url.PathUnescape(identifier); err == nil {


@@ -0,0 +1,118 @@
package accounts
import (
"encoding/json"
"fmt"
"net/http"
"net/http/httptest"
"strings"
"testing"
"github.com/go-chi/chi/v5"
)
func TestListAccountsPageSizeCapIs5000(t *testing.T) {
accounts := make([]string, 0, 150)
for i := range 150 {
accounts = append(accounts, fmt.Sprintf(`{"email":"u%d@example.com","password":"pwd"}`, i))
}
raw := fmt.Sprintf(`{"accounts":[%s]}`, strings.Join(accounts, ","))
router := newHTTPAdminHarness(t, raw, &testingDSMock{})
rec := httptest.NewRecorder()
router.ServeHTTP(rec, adminReq(http.MethodGet, "/accounts?page=1&page_size=200", nil))
if rec.Code != http.StatusOK {
t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
}
var payload map[string]any
if err := json.Unmarshal(rec.Body.Bytes(), &payload); err != nil {
t.Fatalf("decode response: %v", err)
}
items, _ := payload["items"].([]any)
if len(items) != 150 {
t.Fatalf("expected all 150 accounts with page_size=200, got %d", len(items))
}
if ps, _ := payload["page_size"].(float64); ps != 200 {
t.Fatalf("expected page_size=200 in response, got %v", payload["page_size"])
}
}
func TestListAccountsPageSizeAbove5000ClampedTo5000(t *testing.T) {
router := newHTTPAdminHarness(t, `{"accounts":[{"email":"u@example.com","password":"pwd"}]}`, &testingDSMock{})
rec := httptest.NewRecorder()
router.ServeHTTP(rec, adminReq(http.MethodGet, "/accounts?page=1&page_size=9999", nil))
if rec.Code != http.StatusOK {
t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
}
var payload map[string]any
if err := json.Unmarshal(rec.Body.Bytes(), &payload); err != nil {
t.Fatalf("decode response: %v", err)
}
if ps, _ := payload["page_size"].(float64); ps != 5000 {
t.Fatalf("expected page_size clamped to 5000, got %v", payload["page_size"])
}
}
func TestUpdateAccountMetadataPreservesCredentials(t *testing.T) {
h := newAdminTestHandler(t, `{
"accounts":[{"email":"u@example.com","name":"old name","remark":"old remark","password":"secret"}]
}`)
r := chi.NewRouter()
r.Put("/admin/accounts/{identifier}", h.updateAccount)
body := []byte(`{"name":"new name","remark":"new remark"}`)
req := httptest.NewRequest(http.MethodPut, "/admin/accounts/u@example.com", strings.NewReader(string(body)))
rec := httptest.NewRecorder()
r.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
}
snap := h.Store.Snapshot()
if len(snap.Accounts) != 1 {
t.Fatalf("unexpected accounts after update: %#v", snap.Accounts)
}
acc := snap.Accounts[0]
if acc.Email != "u@example.com" {
t.Fatalf("identifier changed unexpectedly: %#v", acc)
}
if acc.Name != "new name" || acc.Remark != "new remark" {
t.Fatalf("metadata update did not persist: %#v", acc)
}
if acc.Password != "secret" {
t.Fatalf("password should be preserved, got %#v", acc)
}
}
func TestListAccountsMasksTokenPreview(t *testing.T) {
h := newAdminTestHandler(t, `{
"accounts":[{"email":"u@example.com","password":"pwd"}]
}`)
if err := h.Store.UpdateAccountToken("u@example.com", "abcdefgh"); err != nil {
t.Fatalf("seed runtime token: %v", err)
}
req := httptest.NewRequest(http.MethodGet, "/admin/accounts?page=1&page_size=10", nil)
rec := httptest.NewRecorder()
h.listAccounts(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
}
var payload map[string]any
if err := json.Unmarshal(rec.Body.Bytes(), &payload); err != nil {
t.Fatalf("decode response failed: %v", err)
}
items, _ := payload["items"].([]any)
if len(items) != 1 {
t.Fatalf("expected 1 item, got %d", len(items))
}
first, _ := items[0].(map[string]any)
if got, _ := first["token_preview"].(string); got != "ab****gh" {
t.Fatalf("expected masked token preview, got %q", got)
}
}
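
The test above pins the masked shape for an 8-character token. A hypothetical implementation consistent with that expectation (the real `adminshared.MaskSecretPreview` may handle other lengths differently; the fixed four-star middle and the short-secret fallback are assumptions):

```go
package main

import "fmt"

// maskSecretPreview keeps the first and last two characters and hides the
// middle behind a constant "****", matching the "ab****gh" expectation in
// TestListAccountsMasksTokenPreview. Behavior for secrets of length <= 4
// is an assumption.
func maskSecretPreview(secret string) string {
	if secret == "" {
		return ""
	}
	if len(secret) <= 4 {
		return "****"
	}
	return secret[:2] + "****" + secret[len(secret)-2:]
}

func main() {
	fmt.Println(maskSecretPreview("abcdefgh")) // ab****gh
}
```

Using a fixed-width mask (rather than one star per hidden byte) also avoids leaking the secret's length in list responses.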


@@ -1,4 +1,4 @@
package admin
package accounts
import "net/http"


@@ -1,4 +1,4 @@
package admin
package accounts
import (
"bytes"
@@ -13,9 +13,9 @@ import (
authn "ds2api/internal/auth"
"ds2api/internal/config"
"ds2api/internal/deepseek"
"ds2api/internal/prompt"
"ds2api/internal/promptcompat"
"ds2api/internal/sse"
"ds2api/internal/util"
)
type modelAliasSnapshotReader struct {
@@ -41,7 +41,7 @@ func (h *Handler) testSingleAccount(w http.ResponseWriter, r *http.Request) {
}
model, _ := req["model"].(string)
if model == "" {
model = "deepseek-chat"
model = "deepseek-v4-flash"
}
message, _ := req["message"].(string)
result := h.testAccount(r.Context(), acc, model, message)
@@ -53,7 +53,7 @@ func (h *Handler) testAllAccounts(w http.ResponseWriter, r *http.Request) {
_ = json.NewDecoder(r.Body).Decode(&req)
model, _ := req["model"].(string)
if model == "" {
model = "deepseek-chat"
model = "deepseek-v4-flash"
}
accounts := h.Store.Snapshot().Accounts
if len(accounts) == 0 {
@@ -174,9 +174,9 @@ func (h *Handler) testAccount(ctx context.Context, acc config.Account, model, me
result["message"] = "获取 PoW 失败: " + err.Error()
return result
}
payload := util.StandardRequest{
payload := promptcompat.StandardRequest{
ResolvedModel: model,
FinalPrompt: deepseek.MessagesPrepare([]map[string]any{{"role": "user", "content": message}}),
FinalPrompt: prompt.MessagesPrepare([]map[string]any{{"role": "user", "content": message}}),
Thinking: thinking,
Search: search,
}.CompletionPayload(sessionID)
@@ -211,7 +211,7 @@ func (h *Handler) testAPI(w http.ResponseWriter, r *http.Request) {
message, _ := req["message"].(string)
apiKey, _ := req["api_key"].(string)
if model == "" {
model = "deepseek-chat"
model = "deepseek-v4-flash"
}
if message == "" {
message = "你好"


@@ -1,4 +1,4 @@
package admin
package accounts
import (
"bytes"
@@ -13,7 +13,7 @@ import (
"ds2api/internal/auth"
"ds2api/internal/config"
"ds2api/internal/deepseek"
dsclient "ds2api/internal/deepseek/client"
)
type testingDSMock struct {
@@ -58,8 +58,8 @@ func (m *testingDSMock) DeleteAllSessionsForToken(_ context.Context, _ string) e
return nil
}
func (m *testingDSMock) GetSessionCountForToken(_ context.Context, _ string) (*deepseek.SessionStats, error) {
return &deepseek.SessionStats{Success: true}, nil
func (m *testingDSMock) GetSessionCountForToken(_ context.Context, _ string) (*dsclient.SessionStats, error) {
return &dsclient.SessionStats{Success: true}, nil
}
func TestTestAccount_BatchModeOnlyCreatesSession(t *testing.T) {
@@ -72,7 +72,7 @@ func TestTestAccount_BatchModeOnlyCreatesSession(t *testing.T) {
t.Fatal("expected test account")
}
result := h.testAccount(context.Background(), acc, "deepseek-chat", "")
result := h.testAccount(context.Background(), acc, "deepseek-v4-flash", "")
if ok, _ := result["success"].(bool); !ok {
t.Fatalf("expected success=true, got %#v", result)
@@ -163,8 +163,8 @@ func (m *completionPayloadDSMock) DeleteAllSessionsForToken(_ context.Context, _
return nil
}
func (m *completionPayloadDSMock) GetSessionCountForToken(_ context.Context, _ string) (*deepseek.SessionStats, error) {
return &deepseek.SessionStats{Success: true}, nil
func (m *completionPayloadDSMock) GetSessionCountForToken(_ context.Context, _ string) (*dsclient.SessionStats, error) {
return &dsclient.SessionStats{Success: true}, nil
}
func TestTestAccount_MessageModeUsesExpertModelTypeForExpertModel(t *testing.T) {
@@ -177,7 +177,7 @@ func TestTestAccount_MessageModeUsesExpertModelTypeForExpertModel(t *testing.T)
t.Fatal("expected test account")
}
result := h.testAccount(context.Background(), acc, "deepseek-expert-chat", "hello")
result := h.testAccount(context.Background(), acc, "deepseek-v4-pro", "hello")
if ok, _ := result["success"].(bool); !ok {
t.Fatalf("expected success=true, got %#v", result)
@@ -200,7 +200,7 @@ func TestTestAccount_MessageModeUsesVisionModelTypeForVisionModel(t *testing.T)
t.Fatal("expected test account")
}
result := h.testAccount(context.Background(), acc, "deepseek-vision-chat", "hello")
result := h.testAccount(context.Background(), acc, "deepseek-v4-vision", "hello")
if ok, _ := result["success"].(bool); !ok {
t.Fatalf("expected success=true, got %#v", result)


@@ -0,0 +1,38 @@
package accounts
import (
"context"
"net/http"
"github.com/go-chi/chi/v5"
"ds2api/internal/config"
)
func RegisterRoutes(r chi.Router, h *Handler) {
r.Get("/accounts", h.listAccounts)
r.Post("/accounts", h.addAccount)
r.Put("/accounts/{identifier}", h.updateAccount)
r.Delete("/accounts/{identifier}", h.deleteAccount)
r.Get("/queue/status", h.queueStatus)
r.Post("/accounts/test", h.testSingleAccount)
r.Post("/accounts/test-all", h.testAllAccounts)
r.Post("/accounts/sessions/delete-all", h.deleteAllSessions)
r.Post("/test", h.testAPI)
}
func RunAccountTestsConcurrently(accounts []config.Account, maxConcurrency int, testFn func(int, config.Account) map[string]any) []map[string]any {
return runAccountTestsConcurrently(accounts, maxConcurrency, testFn)
}
func (h *Handler) TestAccount(ctx context.Context, acc config.Account, model, message string) map[string]any {
return h.testAccount(ctx, acc, model, message)
}
func (h *Handler) ListAccounts(w http.ResponseWriter, r *http.Request) { h.listAccounts(w, r) }
func (h *Handler) AddAccount(w http.ResponseWriter, r *http.Request) { h.addAccount(w, r) }
func (h *Handler) UpdateAccount(w http.ResponseWriter, r *http.Request) { h.updateAccount(w, r) }
func (h *Handler) DeleteAccount(w http.ResponseWriter, r *http.Request) { h.deleteAccount(w, r) }
func (h *Handler) DeleteAllSessions(w http.ResponseWriter, r *http.Request) {
h.deleteAllSessions(w, r)
}
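
`runAccountTestsConcurrently` itself is not shown in this diff; a plausible shape for the exported wrapper above is a counting-semaphore fan-out that preserves input order in the result slice (the local `Account` type and the exact semantics are assumptions; the real code takes `config.Account`):

```go
package main

import (
	"fmt"
	"sync"
)

type Account struct{ Email string }

// runAccountTestsConcurrently runs testFn over accounts with at most
// maxConcurrency goroutines in flight, writing each result into its own
// index so output order matches input order regardless of completion order.
func runAccountTestsConcurrently(accounts []Account, maxConcurrency int, testFn func(int, Account) map[string]any) []map[string]any {
	if maxConcurrency < 1 {
		maxConcurrency = 1
	}
	results := make([]map[string]any, len(accounts))
	sem := make(chan struct{}, maxConcurrency) // counting semaphore
	var wg sync.WaitGroup
	for i, acc := range accounts {
		wg.Add(1)
		sem <- struct{}{} // blocks once maxConcurrency workers are active
		go func(i int, acc Account) {
			defer wg.Done()
			defer func() { <-sem }()
			results[i] = testFn(i, acc)
		}(i, acc)
	}
	wg.Wait()
	return results
}

func main() {
	accs := []Account{{"a@x"}, {"b@x"}, {"c@x"}}
	out := runAccountTestsConcurrently(accs, 2, func(i int, a Account) map[string]any {
		return map[string]any{"email": a.Email, "index": i}
	})
	fmt.Println(len(out), out[0]["email"], out[2]["index"]) // 3 a@x 2
}
```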


@@ -0,0 +1,35 @@
package accounts
import (
"bytes"
"net/http"
"net/http/httptest"
"testing"
"github.com/go-chi/chi/v5"
"ds2api/internal/account"
"ds2api/internal/config"
adminshared "ds2api/internal/httpapi/admin/shared"
)
func newHTTPAdminHarness(t *testing.T, rawConfig string, ds adminshared.DeepSeekCaller) http.Handler {
t.Helper()
t.Setenv("DS2API_CONFIG_JSON", rawConfig)
store := config.LoadStore()
h := &Handler{
Store: store,
Pool: account.NewPool(store),
DS: ds,
}
r := chi.NewRouter()
RegisterRoutes(r, h)
return r
}
func adminReq(method, path string, body []byte) *http.Request {
req := httptest.NewRequest(method, path, bytes.NewReader(body))
req.Header.Set("Authorization", "Bearer admin")
req.Header.Set("Content-Type", "application/json")
return req
}


@@ -0,0 +1,19 @@
package auth
import (
"ds2api/internal/chathistory"
adminshared "ds2api/internal/httpapi/admin/shared"
)
type Handler struct {
Store adminshared.ConfigStore
Pool adminshared.PoolController
DS adminshared.DeepSeekCaller
OpenAI adminshared.OpenAIChatCaller
ChatHistory *chathistory.Store
}
var writeJSON = adminshared.WriteJSON
var intFrom = adminshared.IntFrom
func nilIfEmpty(s string) any { return adminshared.NilIfEmpty(s) }


@@ -1,4 +1,4 @@
package admin
package auth
import (
"encoding/json"


@@ -0,0 +1,20 @@
package auth
import (
"net/http"
"github.com/go-chi/chi/v5"
)
func (h *Handler) RequireAdmin(next http.Handler) http.Handler {
return h.requireAdmin(next)
}
func RegisterPublicRoutes(r chi.Router, h *Handler) {
r.Post("/login", h.login)
r.Get("/verify", h.verify)
}
func RegisterProtectedRoutes(r chi.Router, h *Handler) {
r.Get("/vercel/config", h.getVercelConfig)
}


@@ -0,0 +1,50 @@
package configmgmt
import (
"ds2api/internal/chathistory"
"ds2api/internal/config"
adminshared "ds2api/internal/httpapi/admin/shared"
)
type Handler struct {
Store adminshared.ConfigStore
Pool adminshared.PoolController
DS adminshared.DeepSeekCaller
OpenAI adminshared.OpenAIChatCaller
ChatHistory *chathistory.Store
}
var writeJSON = adminshared.WriteJSON
func maskSecretPreview(secret string) string {
return adminshared.MaskSecretPreview(secret)
}
func toStringSlice(v any) ([]string, bool) { return adminshared.ToStringSlice(v) }
func toAccount(m map[string]any) config.Account {
return adminshared.ToAccount(m)
}
func toAPIKeys(v any) ([]config.APIKey, bool) { return adminshared.ToAPIKeys(v) }
func mergeAPIKeysPreferStructured(existing, incoming []config.APIKey) ([]config.APIKey, int) {
return adminshared.MergeAPIKeysPreferStructured(existing, incoming)
}
func fieldString(m map[string]any, key string) string {
return adminshared.FieldString(m, key)
}
func fieldStringOptional(m map[string]any, key string) (string, bool) {
return adminshared.FieldStringOptional(m, key)
}
func normalizeAccountForStorage(acc config.Account) config.Account {
return adminshared.NormalizeAccountForStorage(acc)
}
func accountDedupeKey(acc config.Account) string { return adminshared.AccountDedupeKey(acc) }
func normalizeAndDedupeAccounts(accounts []config.Account) []config.Account {
return adminshared.NormalizeAndDedupeAccounts(accounts)
}
func newRequestError(detail string) error { return adminshared.NewRequestError(detail) }
func requestErrorDetail(err error) (string, bool) {
return adminshared.RequestErrorDetail(err)
}
func normalizeSettingsConfig(c *config.Config) { adminshared.NormalizeSettingsConfig(c) }
func validateSettingsConfig(c config.Config) error {
return adminshared.ValidateSettingsConfig(c)
}
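
The contract of `mergeAPIKeysPreferStructured` can be inferred from its call sites in `configImport` and `batchImport`: dedupe by key, prefer entries carrying metadata, and report how many keys were added. A hypothetical reconstruction under those assumptions (the real helper lives in `adminshared` and may differ):

```go
package main

import "fmt"

type APIKey struct {
	Key, Name, Remark string
}

// mergeAPIKeysPreferStructured merges incoming keys into existing ones,
// deduplicating by Key. For duplicates it keeps the copy that carries
// metadata (Name/Remark), and it returns how many new entries were added,
// matching the "importedKeys += changed" usage in configImport. This is a
// speculative sketch, not the actual adminshared implementation.
func mergeAPIKeysPreferStructured(existing, incoming []APIKey) ([]APIKey, int) {
	index := make(map[string]int, len(existing))
	for i, k := range existing {
		index[k.Key] = i
	}
	added := 0
	for _, in := range incoming {
		if in.Key == "" {
			continue
		}
		if i, ok := index[in.Key]; ok {
			if in.Name != "" || in.Remark != "" {
				existing[i] = in // prefer the structured copy
			}
			continue
		}
		index[in.Key] = len(existing)
		existing = append(existing, in)
		added++
	}
	return existing, added
}

func main() {
	got, added := mergeAPIKeysPreferStructured(
		[]APIKey{{Key: "k1"}},
		[]APIKey{{Key: "k1", Name: "primary"}, {Key: "k2"}},
	)
	fmt.Println(len(got), added, got[0].Name) // 2 1 primary
}
```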


@@ -1,9 +1,7 @@
package admin
package configmgmt
import (
"crypto/md5"
"encoding/json"
"fmt"
"net/http"
"strings"
@@ -53,25 +51,12 @@ func (h *Handler) configImport(w http.ResponseWriter, r *http.Request) {
next.Accounts = normalizeAndDedupeAccounts(next.Accounts)
next.VercelSyncHash = c.VercelSyncHash
next.VercelSyncTime = c.VercelSyncTime
importedKeys = len(next.Keys)
importedKeys = len(next.APIKeys)
importedAccounts = len(next.Accounts)
} else {
existingKeys := map[string]struct{}{}
for _, k := range next.Keys {
existingKeys[k] = struct{}{}
}
for _, k := range incoming.Keys {
key := strings.TrimSpace(k)
if key == "" {
continue
}
if _, ok := existingKeys[key]; ok {
continue
}
existingKeys[key] = struct{}{}
next.Keys = append(next.Keys, key)
importedKeys++
}
var changed int
next.APIKeys, changed = mergeAPIKeysPreferStructured(next.APIKeys, incoming.APIKeys)
importedKeys += changed
existingAccounts := map[string]struct{}{}
for _, acc := range next.Accounts {
@@ -95,23 +80,6 @@ func (h *Handler) configImport(w http.ResponseWriter, r *http.Request) {
importedAccounts++
}
if len(incoming.ClaudeMapping) > 0 {
if next.ClaudeMapping == nil {
next.ClaudeMapping = map[string]string{}
}
for k, v := range incoming.ClaudeMapping {
next.ClaudeMapping[k] = v
}
}
if len(incoming.ClaudeModelMap) > 0 {
if next.ClaudeModelMap == nil {
next.ClaudeModelMap = map[string]string{}
}
for k, v := range incoming.ClaudeModelMap {
next.ClaudeModelMap[k] = v
}
}
if len(incoming.ModelAliases) > 0 {
if next.ModelAliases == nil {
next.ModelAliases = map[string]string{}
@@ -175,13 +143,3 @@ func (h *Handler) configImport(w http.ResponseWriter, r *http.Request) {
"message": "config imported",
})
}
func (h *Handler) computeSyncHash() string {
snap := h.Store.Snapshot().Clone()
snap.ClearAccountTokens()
snap.VercelSyncHash = ""
snap.VercelSyncTime = 0
b, _ := json.Marshal(snap)
sum := md5.Sum(b)
return fmt.Sprintf("%x", sum)
}


@@ -1,4 +1,4 @@
package admin
package configmgmt
import (
"net/http"
@@ -11,38 +11,28 @@ func (h *Handler) getConfig(w http.ResponseWriter, _ *http.Request) {
snap := h.Store.Snapshot()
safe := map[string]any{
"keys": snap.Keys,
"api_keys": snap.APIKeys,
"accounts": []map[string]any{},
"proxies": []map[string]any{},
"env_backed": h.Store.IsEnvBacked(),
"env_source_present": h.Store.HasEnvConfigSource(),
"env_writeback_enabled": h.Store.IsEnvWritebackEnabled(),
"config_path": h.Store.ConfigPath(),
"claude_mapping": func() map[string]string {
if len(snap.ClaudeMapping) > 0 {
return snap.ClaudeMapping
}
return snap.ClaudeModelMap
}(),
"model_aliases": snap.ModelAliases,
}
accounts := make([]map[string]any, 0, len(snap.Accounts))
for _, acc := range snap.Accounts {
token := strings.TrimSpace(acc.Token)
preview := ""
if token != "" {
if len(token) > 20 {
preview = token[:20] + "..."
} else {
preview = token
}
}
accounts = append(accounts, map[string]any{
"identifier": acc.Identifier(),
"name": acc.Name,
"remark": acc.Remark,
"email": acc.Email,
"mobile": acc.Mobile,
"proxy_id": acc.ProxyID,
"has_password": strings.TrimSpace(acc.Password) != "",
"has_token": token != "",
"token_preview": preview,
"token_preview": maskSecretPreview(token),
})
}
safe["accounts"] = accounts


@@ -1,4 +1,4 @@
package admin
package configmgmt
import (
"encoding/json"
@@ -19,7 +19,9 @@ func (h *Handler) updateConfig(w http.ResponseWriter, r *http.Request) {
}
old := h.Store.Snapshot()
err := h.Store.Update(func(c *config.Config) error {
if keys, ok := toStringSlice(req["keys"]); ok {
if apiKeys, ok := toAPIKeys(req["api_keys"]); ok {
c.APIKeys = apiKeys
} else if keys, ok := toStringSlice(req["keys"]); ok {
c.Keys = keys
}
if accountsRaw, ok := req["accounts"].([]any); ok {
@@ -56,12 +58,12 @@ func (h *Handler) updateConfig(w http.ResponseWriter, r *http.Request) {
}
c.Accounts = accounts
}
if m, ok := req["claude_mapping"].(map[string]any); ok {
newMap := map[string]string{}
if m, ok := req["model_aliases"].(map[string]any); ok {
aliases := make(map[string]string, len(m))
for k, v := range m {
newMap[k] = fmt.Sprintf("%v", v)
aliases[k] = fmt.Sprintf("%v", v)
}
c.ClaudeMapping = newMap
c.ModelAliases = aliases
}
return nil
})
@@ -78,17 +80,19 @@ func (h *Handler) addKey(w http.ResponseWriter, r *http.Request) {
_ = json.NewDecoder(r.Body).Decode(&req)
key, _ := req["key"].(string)
key = strings.TrimSpace(key)
name := fieldString(req, "name")
remark := fieldString(req, "remark")
if key == "" {
writeJSON(w, http.StatusBadRequest, map[string]any{"detail": "Key 不能为空"})
return
}
err := h.Store.Update(func(c *config.Config) error {
for _, k := range c.Keys {
if k == key {
for _, item := range c.APIKeys {
if item.Key == key {
return fmt.Errorf("key 已存在")
}
}
c.Keys = append(c.Keys, key)
c.APIKeys = append(c.APIKeys, config.APIKey{Key: key, Name: name, Remark: remark})
return nil
})
if err != nil {
@@ -98,12 +102,25 @@ func (h *Handler) addKey(w http.ResponseWriter, r *http.Request) {
writeJSON(w, http.StatusOK, map[string]any{"success": true, "total_keys": len(h.Store.Snapshot().Keys)})
}
func (h *Handler) deleteKey(w http.ResponseWriter, r *http.Request) {
key := chi.URLParam(r, "key")
func (h *Handler) updateKey(w http.ResponseWriter, r *http.Request) {
key := strings.TrimSpace(chi.URLParam(r, "key"))
if key == "" {
writeJSON(w, http.StatusBadRequest, map[string]any{"detail": "key 不能为空"})
return
}
var req map[string]any
if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
writeJSON(w, http.StatusBadRequest, map[string]any{"detail": "invalid json"})
return
}
name, nameOK := fieldStringOptional(req, "name")
remark, remarkOK := fieldStringOptional(req, "remark")
err := h.Store.Update(func(c *config.Config) error {
idx := -1
for i, k := range c.Keys {
if k == key {
for i, item := range c.APIKeys {
if item.Key == key {
idx = i
break
}
@@ -111,7 +128,35 @@ func (h *Handler) deleteKey(w http.ResponseWriter, r *http.Request) {
if idx < 0 {
return fmt.Errorf("key 不存在")
}
c.Keys = append(c.Keys[:idx], c.Keys[idx+1:]...)
if nameOK {
c.APIKeys[idx].Name = name
}
if remarkOK {
c.APIKeys[idx].Remark = remark
}
return nil
})
if err != nil {
writeJSON(w, http.StatusNotFound, map[string]any{"detail": err.Error()})
return
}
writeJSON(w, http.StatusOK, map[string]any{"success": true, "total_keys": len(h.Store.Snapshot().Keys)})
}
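`updateKey` reads its fields through `fieldStringOptional` (not shown in this diff) so a PUT body can distinguish an omitted field from one explicitly set to the empty string, letting partial updates leave untouched metadata alone. A hypothetical sketch of that helper:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// fieldStringOptional reports whether key is present in the decoded JSON
// object, so callers can tell "field omitted" apart from "field set to
// the empty string". Hypothetical sketch matching the diff's call sites.
func fieldStringOptional(req map[string]any, key string) (string, bool) {
	v, ok := req[key]
	if !ok {
		return "", false
	}
	s, _ := v.(string)
	return strings.TrimSpace(s), true
}

func main() {
	var req map[string]any
	_ = json.Unmarshal([]byte(`{"name":"primary-updated"}`), &req)
	name, nameOK := fieldStringOptional(req, "name")
	_, remarkOK := fieldStringOptional(req, "remark")
	// Only "name" was sent, so nameOK is true and remarkOK is false;
	// updateKey would then rewrite Name but leave Remark as-is.
	fmt.Println(name, nameOK, remarkOK)
}
```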
func (h *Handler) deleteKey(w http.ResponseWriter, r *http.Request) {
key := chi.URLParam(r, "key")
err := h.Store.Update(func(c *config.Config) error {
idx := -1
for i, item := range c.APIKeys {
if item.Key == key {
idx = i
break
}
}
if idx < 0 {
return fmt.Errorf("key 不存在")
}
c.APIKeys = append(c.APIKeys[:idx], c.APIKeys[idx+1:]...)
return nil
})
if err != nil {
@@ -129,20 +174,23 @@ func (h *Handler) batchImport(w http.ResponseWriter, r *http.Request) {
}
importedKeys, importedAccounts := 0, 0
err := h.Store.Update(func(c *config.Config) error {
if apiKeys, ok := toAPIKeys(req["api_keys"]); ok {
var changed int
c.APIKeys, changed = mergeAPIKeysPreferStructured(c.APIKeys, apiKeys)
importedKeys += changed
}
if keys, ok := req["keys"].([]any); ok {
existing := map[string]bool{}
for _, k := range c.Keys {
existing[k] = true
}
legacy := make([]config.APIKey, 0, len(keys))
for _, k := range keys {
key := strings.TrimSpace(fmt.Sprintf("%v", k))
if key == "" || existing[key] {
if key == "" {
continue
}
c.Keys = append(c.Keys, key)
existing[key] = true
importedKeys++
legacy = append(legacy, config.APIKey{Key: key})
}
var changed int
c.APIKeys, changed = mergeAPIKeysPreferStructured(c.APIKeys, legacy)
importedKeys += changed
}
if accounts, ok := req["accounts"].([]any); ok {
existing := map[string]bool{}

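`batchImport` now funnels both structured `api_keys` and legacy bare `keys` through `mergeAPIKeysPreferStructured`, which the diff references but does not show. A minimal sketch, under the assumption that it dedupes by `Key` and never lets a bare legacy entry erase existing `Name`/`Remark` metadata (hypothetical; the project's real merge may count changes differently):

```go
package main

import "fmt"

// APIKey mirrors the config.APIKey shape used in the diff.
type APIKey struct {
	Key, Name, Remark string
}

// mergeAPIKeysPreferStructured merges incoming keys into existing ones,
// deduplicating by Key. When the same key appears on both sides, existing
// non-empty metadata wins, so a bare legacy import never erases structured
// Name/Remark fields. Returns the merged slice and the number of newly
// added entries.
func mergeAPIKeysPreferStructured(existing, incoming []APIKey) ([]APIKey, int) {
	index := make(map[string]int, len(existing))
	for i, k := range existing {
		index[k.Key] = i
	}
	added := 0
	for _, in := range incoming {
		if in.Key == "" {
			continue
		}
		if i, ok := index[in.Key]; ok {
			// Known key: only fill metadata slots that are still empty.
			if existing[i].Name == "" {
				existing[i].Name = in.Name
			}
			if existing[i].Remark == "" {
				existing[i].Remark = in.Remark
			}
			continue
		}
		existing = append(existing, in)
		index[in.Key] = len(existing) - 1
		added++
	}
	return existing, added
}

func main() {
	merged, added := mergeAPIKeysPreferStructured(
		[]APIKey{{Key: "k1", Name: "primary", Remark: "prod"}},
		[]APIKey{{Key: "k1"}, {Key: "k2", Name: "secondary", Remark: "staging"}},
	)
	fmt.Printf("added=%d merged=%+v\n", added, merged)
}
```

This is why the rewritten legacy branch wraps bare strings as `config.APIKey{Key: key}` and delegates to the same merge instead of appending to `c.Keys` directly: one dedup path, and structured metadata always survives a re-import.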

@@ -0,0 +1,76 @@
package configmgmt
import (
"bytes"
"encoding/json"
"net/http"
"net/http/httptest"
"testing"
"github.com/go-chi/chi/v5"
)
func TestKeyEndpointsPreserveStructuredMetadata(t *testing.T) {
h := newAdminTestHandler(t, `{
"api_keys":[{"key":"k1","name":"primary","remark":"prod"}]
}`)
r := chi.NewRouter()
r.Post("/admin/keys", h.addKey)
r.Put("/admin/keys/{key}", h.updateKey)
r.Delete("/admin/keys/{key}", h.deleteKey)
addBody := []byte(`{"key":"k2","name":"secondary","remark":"staging"}`)
addReq := httptest.NewRequest(http.MethodPost, "/admin/keys", bytes.NewReader(addBody))
addRec := httptest.NewRecorder()
r.ServeHTTP(addRec, addReq)
if addRec.Code != http.StatusOK {
t.Fatalf("add status=%d body=%s", addRec.Code, addRec.Body.String())
}
snap := h.Store.Snapshot()
if len(snap.APIKeys) != 2 {
t.Fatalf("unexpected api keys after add: %#v", snap.APIKeys)
}
if snap.APIKeys[0].Name != "primary" || snap.APIKeys[0].Remark != "prod" {
t.Fatalf("existing metadata was lost after add: %#v", snap.APIKeys[0])
}
if snap.APIKeys[1].Name != "secondary" || snap.APIKeys[1].Remark != "staging" {
t.Fatalf("new metadata was lost after add: %#v", snap.APIKeys[1])
}
updateBody := map[string]any{
"name": "primary-updated",
"remark": "prod-updated",
}
updateBytes, _ := json.Marshal(updateBody)
updateReq := httptest.NewRequest(http.MethodPut, "/admin/keys/k1", bytes.NewReader(updateBytes))
updateRec := httptest.NewRecorder()
r.ServeHTTP(updateRec, updateReq)
if updateRec.Code != http.StatusOK {
t.Fatalf("update status=%d body=%s", updateRec.Code, updateRec.Body.String())
}
snap = h.Store.Snapshot()
if len(snap.APIKeys) != 2 {
t.Fatalf("unexpected api keys after update: %#v", snap.APIKeys)
}
if snap.APIKeys[0].Key != "k1" || snap.APIKeys[0].Name != "primary-updated" || snap.APIKeys[0].Remark != "prod-updated" {
t.Fatalf("metadata update did not persist: %#v", snap.APIKeys[0])
}
deleteReq := httptest.NewRequest(http.MethodDelete, "/admin/keys/k1", nil)
deleteRec := httptest.NewRecorder()
r.ServeHTTP(deleteRec, deleteReq)
if deleteRec.Code != http.StatusOK {
t.Fatalf("delete status=%d body=%s", deleteRec.Code, deleteRec.Body.String())
}
snap = h.Store.Snapshot()
if len(snap.APIKeys) != 1 || snap.APIKeys[0].Key != "k2" {
t.Fatalf("unexpected api keys after delete: %#v", snap.APIKeys)
}
if len(snap.Keys) != 1 || snap.Keys[0] != "k2" {
t.Fatalf("unexpected legacy keys after delete: %#v", snap.Keys)
}
}


@@ -0,0 +1,27 @@
package configmgmt
import (
"net/http"
"github.com/go-chi/chi/v5"
)
func RegisterRoutes(r chi.Router, h *Handler) {
r.Get("/config", h.getConfig)
r.Post("/config", h.updateConfig)
r.Post("/config/import", h.configImport)
r.Get("/config/export", h.configExport)
r.Get("/export", h.exportConfig)
r.Post("/keys", h.addKey)
r.Put("/keys/{key}", h.updateKey)
r.Delete("/keys/{key}", h.deleteKey)
r.Post("/import", h.batchImport)
}
func (h *Handler) GetConfig(w http.ResponseWriter, r *http.Request) { h.getConfig(w, r) }
func (h *Handler) UpdateConfig(w http.ResponseWriter, r *http.Request) { h.updateConfig(w, r) }
func (h *Handler) ConfigImport(w http.ResponseWriter, r *http.Request) { h.configImport(w, r) }
func (h *Handler) BatchImport(w http.ResponseWriter, r *http.Request) { h.batchImport(w, r) }
func (h *Handler) AddKey(w http.ResponseWriter, r *http.Request) { h.addKey(w, r) }
func (h *Handler) UpdateKey(w http.ResponseWriter, r *http.Request) { h.updateKey(w, r) }
func (h *Handler) DeleteKey(w http.ResponseWriter, r *http.Request) { h.deleteKey(w, r) }


@@ -0,0 +1,18 @@
package configmgmt
import (
"testing"
"ds2api/internal/account"
"ds2api/internal/config"
)
func newAdminTestHandler(t *testing.T, raw string) *Handler {
t.Helper()
t.Setenv("DS2API_CONFIG_JSON", raw)
store := config.LoadStore()
return &Handler{
Store: store,
Pool: account.NewPool(store),
}
}


@@ -0,0 +1,16 @@
package devcapture
import (
"ds2api/internal/chathistory"
adminshared "ds2api/internal/httpapi/admin/shared"
)
type Handler struct {
Store adminshared.ConfigStore
Pool adminshared.PoolController
DS adminshared.DeepSeekCaller
OpenAI adminshared.OpenAIChatCaller
ChatHistory *chathistory.Store
}
var writeJSON = adminshared.WriteJSON


@@ -1,4 +1,4 @@
package admin
package devcapture
import (
"net/http"


@@ -1,4 +1,4 @@
package admin
package devcapture
import (
"encoding/json"


@@ -0,0 +1,8 @@
package devcapture
import "github.com/go-chi/chi/v5"
func RegisterRoutes(r chi.Router, h *Handler) {
r.Get("/dev/captures", h.getDevCaptures)
r.Delete("/dev/captures", h.clearDevCaptures)
}

Some files were not shown because too many files have changed in this diff.