Mirror of https://github.com/CJackHwang/ds2api.git
Synced 2026-05-10 19:27:41 +08:00

Compare commits: 33 commits (94c1acace5 ... v4.2.1)
| SHA1 |
|---|
| 445c95a4f2 |
| 0a6ef8e3f2 |
| fd0ec29991 |
| 2671298439 |
| 92e321fe2c |
| fca8c01397 |
| 667e1e3710 |
| 4438d03c5c |
| 95b7665643 |
| 9896b1fc33 |
| 966f21211d |
| 7dc3af40b2 |
| 2f6b5ffda0 |
| 85e256ad4d |
| 7c3ff6ee7e |
| 63e62fd1b0 |
| 483d7af3d2 |
| 0f89823526 |
| 6a778e0d35 |
| 9035c350a7 |
| ba80052a26 |
| 78fdd63470 |
| 4b4f097006 |
| d3018c281b |
| 415a2359ad |
| f702d45a24 |
| 90817cb9e2 |
| b96f736bd2 |
| 8ab028c52a |
| 78366afec5 |
| bd41c8a90c |
| bc2a78ae29 |
| 192cdf8562 |
.gitignore (vendored, 1 change)
@@ -29,6 +29,7 @@ yarn.lock
pnpm-lock.yaml

# Build artifacts
dist/
*.tsbuildinfo
.cache/
.parcel-cache/
@@ -165,6 +165,8 @@ Gemini-compatible clients can also send `x-goog-api-key`, `?key=`, or `?api_key=`
| PUT | `/admin/chat-history/settings` | Admin | Update conversation history retention limit |
| GET | `/admin/version` | Admin | Check current version and latest Release |

OpenAI `/v1/*` paths are canonical. For clients configured with the bare DS2API service URL, the same OpenAI handlers are also exposed through root shortcuts: `/models`, `/models/{id}`, `/chat/completions`, `/responses`, `/responses/{response_id}`, `/embeddings`, and `/files`.

---

## Health Endpoints
API.md (2 changes)
@@ -165,6 +165,8 @@ Gemini-compatible clients can also use `x-goog-api-key`, `?key=`, or `?api_key=`
| PUT | `/admin/chat-history/settings` | Admin | Update conversation history retention limit |
| GET | `/admin/version` | Admin | Check current version and latest Release |

OpenAI `/v1/*` remains the canonical path. For clients configured with only the DS2API root URL, the same OpenAI handlers are also exposed via root shortcuts: `/models`, `/models/{id}`, `/chat/completions`, `/responses`, `/responses/{response_id}`, `/embeddings`, `/files`.

---

## Health Check
@@ -29,7 +29,7 @@ WORKDIR /app
 RUN apt-get update \
     && apt-get install -y --no-install-recommends ca-certificates \
     && groupadd -r ds2api && useradd -r -g ds2api -d /app -s /sbin/nologin ds2api \
-    && mkdir -p /app/data && chown -R ds2api:ds2api /app \
+    && mkdir -p /app/data /data && chown -R ds2api:ds2api /app /data \
     && rm -rf /var/lib/apt/lists/*
 COPY --from=busybox-tools /bin/busybox /usr/local/bin/busybox
 EXPOSE 5001
@@ -131,6 +131,8 @@ flowchart LR

| WebUI Admin Panel | `/admin` single-page app (bilingual Chinese/English, dark mode, supports viewing server-side conversation history) |
| Ops Probes | `GET /healthz` (liveness), `GET /readyz` (readiness) |

OpenAI `/v1/*` remains the recommended canonical path; root shortcuts such as `/models`, `/chat/completions`, `/responses`, `/embeddings`, and `/files` are also supported, for third-party clients configured with only the DS2API root URL.

## Platform Compatibility Matrix

| Tier | Platform | Status |
@@ -245,6 +247,7 @@ docker-compose logs -f

The default `docker-compose.yml` maps host port `6011` to container port `5001`. If you want `5001` exposed directly, set `DS2API_HOST_PORT=5001` (or adjust the `ports` mapping manually).
It also mounts `./config.json` to `/data/config.json` in the container by default and sets `DS2API_CONFIG_PATH=/data/config.json`, avoiding runtime token-persistence failures caused by a read-only `/app`.
The image pre-creates `/data` and grants it to the non-root `ds2api` user; when bind-mounting a single file, make sure the host `config.json` is readable/writable by the container user, e.g. `chmod 644 config.json`.

Update the image: `docker-compose up -d --build`
@@ -128,6 +128,8 @@ For the full module-by-module architecture and directory responsibilities, see [

| WebUI Admin Panel | SPA at `/admin` (bilingual Chinese/English, dark mode, with server-side conversation history) |
| Health Probes | `GET /healthz` (liveness), `GET /readyz` (readiness) |

OpenAI `/v1/*` routes remain canonical, and DS2API also accepts root shortcuts such as `/models`, `/chat/completions`, `/responses`, `/embeddings`, and `/files` for clients configured with the bare service URL.

## Platform Compatibility Matrix

| Tier | Platform | Status |
@@ -131,6 +131,7 @@ docker-compose logs -f

The default `docker-compose.yml` directly uses `ghcr.io/cjackhwang/ds2api:latest` and maps host port `6011` to container port `5001`. If you want `5001` exposed directly, set `DS2API_HOST_PORT=5001` (or adjust the `ports` mapping).
The compose template also defaults to `DS2API_CONFIG_PATH=/data/config.json` with `./config.json:/data/config.json` mounted, so deployments avoid read-only `/app` persistence issues by default.
The image pre-creates `/data` and grants it to the non-root `ds2api` user. If you bind-mount a single host file, make sure `config.json` is readable/writable by the container user, for example with `chmod 644 config.json`; otherwise Linux UID/GID mismatches can still cause `open /data/config.json: permission denied`.
Compatibility note: when `DS2API_CONFIG_PATH` is unset and the runtime base dir is `/app`, newer versions prefer `/data/config.json`; if that file is missing but a legacy `/app/config.json` exists, DS2API automatically falls back to the legacy path to avoid post-upgrade config loss.

If you want a pinned version instead of `latest`, you can also pull a specific tag directly:
@@ -131,6 +131,7 @@ docker-compose logs -f

The default `docker-compose.yml` uses `ghcr.io/cjackhwang/ds2api:latest` directly and maps host port `6011` to container port `5001`. To expose `5001` directly, set `DS2API_HOST_PORT=5001` (or adjust the `ports` mapping manually).
The compose template also sets `DS2API_CONFIG_PATH=/data/config.json` by default and mounts `./config.json:/data/config.json`, avoiding config-persistence problems caused by a read-only `/app`.
The image pre-creates `/data` and grants it to the non-root `ds2api` user; when bind-mounting a single file, make sure the host `config.json` is at least readable/writable by the container user, e.g. `chmod 644 config.json`, otherwise Linux UID/GID mismatches can still produce `open /data/config.json: permission denied`.
Compatibility note: if `DS2API_CONFIG_PATH` is unset and the runtime directory is `/app`, newer versions prefer `/data/config.json`; when that file does not exist but a legacy `/app/config.json` is detected, the legacy path is read as a fallback, avoiding "lost config" after an upgrade.

To pin a version, you can also pull a specific tag directly:
@@ -313,3 +313,14 @@ parse SSE block

- The parser should stay tolerant of unknown fields, unknown paths, and unknown events.

If you use this document for actual development, keep the raw stream samples, replay scripts, and regression tests alongside it; do not rely on this text alone.

## 2026-04-29 Incremental Observations from Recent Live Samples

Based on long-text samples from two real accounts, `longtext-deepseek-v4-flash-20260429` and `longtext-deepseek-v4-pro-20260429`, the recent format changes are:

1. `data:` events still frequently carry pathless `{"v":"..."}` deltas (`p` missing); the parser must treat the empty path as a visible-content candidate instead of relying solely on `response/content`.
2. Object-form `v` (e.g. `{"text":"..."}` / `{"content":"..."}`) still appears and may be mixed with pathless chunks; handling only the string form drops content blocks.
3. In multi-round continuation scenarios, later chunks may no longer repeat an explicit `status`; the state machine must preserve the previous round's `INCOMPLETE` semantics until a terminal state appears.
4. Since 2026-04-29 the client header baseline was raised to `x-client-version: 2.0.3`; otherwise some accounts see inconsistent upstream behavior (including empty output and broken continuation rounds).

Recommendation: default replay for new samples should prioritize the "long text + multi-round + pathless chunk" combination, to avoid regressions slipping through short-sample-only testing.
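The pathless-chunk and object-form `v` rules above can be sketched as a small extractor. This is a sketch under assumptions, not the project's actual parser; the function name `visibleText` is hypothetical.

```go
// Sketch: extract visible text from one DeepSeek SSE data chunk, tolerating
// pathless {"v":"..."} deltas and object-form v such as {"text":"..."}.
package main

import (
	"encoding/json"
	"fmt"
)

func visibleText(data []byte) (string, bool) {
	var chunk map[string]any
	if err := json.Unmarshal(data, &chunk); err != nil {
		return "", false
	}
	p, _ := chunk["p"].(string)
	// Both the empty path and response/content are visible-content candidates.
	if p != "" && p != "response/content" {
		return "", false
	}
	switch v := chunk["v"].(type) {
	case string:
		return v, true
	case map[string]any:
		// Object-form v may carry the delta under "text" or "content".
		if s, ok := v["text"].(string); ok {
			return s, true
		}
		if s, ok := v["content"].(string); ok {
			return s, true
		}
	}
	return "", false
}

func main() {
	for _, raw := range []string{
		`{"v":"hello "}`,
		`{"p":"response/content","v":{"text":"world"}}`,
		`{"p":"response/status","v":"FINISHED"}`,
	} {
		if s, ok := visibleText([]byte(raw)); ok {
			fmt.Print(s)
		}
	}
	fmt.Println() // prints "hello world"
}
```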
@@ -98,6 +98,7 @@ DS2API's current core approach is not to take the client-supplied `messages`, `tools`

- `prompt` is the main carrier of conversation context.
- `ref_file_ids` carries only file references, not ordinary text messages.
- `tools` is not sent downstream as a "native tool schema"; it is rewritten into the `prompt`.
- The `prompt_tokens` / `input_tokens` / `promptTokenCount` returned to the client are no longer rough approximations based on "the last message" or character counts; they are tokenizer counts over the **full context prompt**. To avoid clients assuming there is still room when the context is actually over the limit, the request-side context token count is additionally rounded up a little conservatively: slightly too large is preferred over underestimating.
- The current `/v1/chat/completions` business path still "creates a new remote `chat_session_id` per request and sends `parent_message_id: null` by default"; externally DS2API therefore behaves as "new session + history stitched into the prompt" rather than reusing DeepSeek's native session tree.
- The DeepSeek remote itself does support continued multi-round conversation on the same `chat_session_id`. On 2026-04-27 a two-round live test was run with the project's existing DeepSeek client, without changing business code: under the same `chat_session_id`, round 1 returned `request_message_id=1` / `response_message_id=2` / text `SESSION_TEST_ONE`; round 2 fetched a fresh PoW and sent `parent_message_id=2`, successfully returning `request_message_id=3` / `response_message_id=4` / text `SESSION_TEST_TWO`. This shows the "keep chatting in the same remote session" capability exists, and each round must carry the correct parent/message linkage and re-fetch a PoW valid for that round.
- OpenAI Chat / Responses natively go through unified OpenAI normalization and DeepSeek payload assembly; Claude / Gemini reuse the OpenAI prompt/tool semantics where possible: Gemini directly reuses `promptcompat.BuildOpenAIPromptForAdapter`, and the Claude messages API converts to OpenAI chat form before execution in proxyable scenarios.
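The conservative context-token rounding described above can be sketched in a few lines. The 5% upfloat factor here is an assumption chosen for illustration, not a value stated by the project.

```go
// Sketch: round the tokenizer count up slightly so clients never
// underestimate real context usage (exact factor is an assumption).
package main

import "fmt"

func conservativeContextTokens(exact int) int {
	// Add 5%, rounded up, so the reported count errs on the large side.
	return exact + (exact+19)/20
}

func main() {
	fmt.Println(conservativeContextTokens(1000)) // 1050
	fmt.Println(conservativeContextTokens(0))    // 0
}
```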
@@ -155,10 +156,12 @@ OpenAI Chat / Responses, after normalization and before current input file, will by default

Tool-call positive examples now demonstrate the official DSML style first: `<|DSML|tool_calls>` → `<|DSML|invoke name="...">` → `<|DSML|parameter name="...">`.
The compatibility layer still accepts the legacy bare `<tool_calls>` wrapper, but the prompt asks the model to output the official DSML tags first and stresses that it must not emit only the closing wrapper while omitting the opening tag. Note: this is "DSML shell compatibility with XML parsing semantics inside", not a native end-to-end DSML implementation; DSML tags are normalized back to the existing XML tags at the parsing entry point and then go through the same parser.
Array parameters are expressed with `<item>...</item>` child nodes; when a parameter body contains only item children, the Go / Node parsers restore it to an array, so array-typed parameters such as `questions` / `options` in a schema are not misparsed as an `{ "item": ... }` object. If the model mistakenly wraps a complete structured XML fragment in CDATA, the compatibility layer tries to restore CDATA XML fragments in non-verbatim fields to objects / arrays while protecting verbatim fields such as `content` / `command`. However, if the CDATA is just a single flat XML/HTML tag, e.g. inline markup like `<b>urgent</b>`, the original string is preserved rather than forcibly promoted to an object / array; only CDATA fragments that clearly express structure (multiple sibling nodes, nested children, or `item` lists) trigger structural recovery.
When reading DeepSeek SSE, the Go side no longer relies on `bufio.Scanner`'s fixed 2MiB single-line limit; when file-writing tools return very long `content` in a single `data:` line, non-streaming collection, streaming parsing, and auto-continue passthrough all preserve the full line before entering the same tool parsing and serialization flow.
At the assistant's final response stage, if a tool parameter is explicitly declared `string` in its schema, the compatibility layer recursively converts number / bool / object / array values on that path to strings before re-serializing the parsed `tool_calls` / `function_call` into the OpenAI / Responses / Claude visible parameters; objects / arrays are compacted into JSON strings. This protection applies only to paths explicitly declared `string` and does not rewrite parameters that are genuinely `number` / `boolean` / `object` / `array`. It accommodates cases where DeepSeek outputs structured fragments but the upstream client's tool schema strictly requires string parameters (e.g. `content`, `prompt`, `path`, `taskId`).
The authoritative source for a tool's schema is always **the schema actually carried by the current request**, not the default impression of a same-named tool in other runtimes (Claude Code / OpenCode / Codex, etc.). The compatibility layer now accepts OpenAI-style `function.parameters`, `parameters` / `input_schema` directly on the tool object, and camelCase `inputSchema` / `schema`, and at the final output stage uses this in-request schema to decide whether to keep array/object values or stringify only paths explicitly declared `string`. The same rule applies to Claude's streaming finalization and the Vercel Node streaming tool-call formatter, avoiding parameter-type drift for same-named tools across runtimes with different schema shapes.
Tool names in positive examples come only from the tools actually declared by the current request; if the request lacks enough known tool shapes, the corresponding single-tool, multi-tool, or nested examples are omitted, to avoid writing unavailable tool names into the prompt.
For execution-class tools, the script content must go into the execution parameter itself: `Bash` / `execute_command` use `command`, `exec_command` uses `cmd`; do not demonstrate scripts as `path` / `content` file-writing parameters.
If the current request declares read tools such as `Read` / `read_file`, the compatibility layer additionally injects a read-tool cache guard: when a read result only signals "file unchanged / already in history / refer to earlier context / no body content", the model must treat it as content unavailable and must not repeatedly issue the same bodyless read; it should instead request full-body read capability or tell the user to re-provide the file content. This constraint only mitigates dead loops caused by client caches returning empty content; DS2API does not and cannot recover client-local file bodies out of thin air.

OpenAI path implementation:
[internal/promptcompat/tool_prompt.go](../internal/promptcompat/tool_prompt.go)
@@ -249,6 +252,7 @@ OpenAI file-related implementation:

- `current_input_file` is enabled by default; it merges the "full context" into a `history.txt` context file. When the plain-text length of the latest user turn reaches `current_input_file.min_chars` (default `0`), the compatibility layer uploads a context file named `history.txt` and keeps only a neutral user message in the live prompt asking the model to answer the latest request directly, without exposing the filename or asking the model to read a local file.
- If `current_input_file.enabled=false`, the request passes through directly with no split context file uploaded.
- The legacy `history_split.enabled` / `history_split.trigger_after_turns` are still read into the config object for compatibility, but they no longer trigger split uploads and do not affect `current_input_file` being enabled by default.
- Even when the live prompt is shortened after `current_input_file` triggers, the context token count in the client response still follows the **pre-split full prompt semantics**, not the shortened placeholder prompt; otherwise the real context would be significantly undercounted.

Related implementation:
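The `current_input_file` gating rule described in the bullets above can be sketched as a tiny predicate. The type and function names here are assumptions for illustration, not the project's actual API.

```go
// Sketch: decide whether to upload a history.txt context file, based on the
// current_input_file settings (enabled by default, min_chars default 0).
package main

import "fmt"

type currentInputFile struct {
	Enabled  bool
	MinChars int // default 0: always split when enabled
}

func shouldUploadContextFile(cfg currentInputFile, latestUserText string) bool {
	// Count characters, not bytes, since inputs may be multi-byte text.
	return cfg.Enabled && len([]rune(latestUserText)) >= cfg.MinChars
}

func main() {
	fmt.Println(shouldUploadContextFile(currentInputFile{Enabled: true}, "hi"))                 // true
	fmt.Println(shouldUploadContextFile(currentInputFile{Enabled: false}, "hi"))                // false
	fmt.Println(shouldUploadContextFile(currentInputFile{Enabled: true, MinChars: 10}, "hi"))   // false
}
```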
@@ -26,7 +26,7 @@
 </tool_calls>
 ```

-This is not a native end-to-end DSML implementation. DSML serves only as the prompt shell and a parsing-entry alias; before reaching the parser it is normalized to `<tool_calls>` / `<invoke>` / `<parameter>`, and the internals still follow the existing XML parsing semantics.
+This is not a native end-to-end DSML implementation. DSML mainly lets the model deliberately emit a protocol marker that isolates it from ordinary XML semantics; before reaching the parser it is normalized by fixed local tag name to `<tool_calls>` / `<invoke>` / `<parameter>`, and the internals still follow the existing XML parsing semantics.

 Constraints:
@@ -39,7 +39,8 @@
 Compatibility fixes:

 - If the model omits the opening wrapper but still emits one or more invokes and ends with a closing wrapper, the Go parsing chain restores the missing opening wrapper before parsing.
-- If the model writes the DSML separator `|` as a space (e.g. `<|DSML tool_calls>` / `<|DSML invoke>` / `<|DSML parameter>`, or the no-leading-pipe `<DSML tool_calls>` form), glues `DSML` to the tool tag name (e.g. `<DSMLtool_calls>` / `<DSMLinvoke>` / `<DSMLparameter>`), or writes the leading pipe as a fullwidth bar (e.g. `<|DSML|tool_calls>` / `<|DSML|invoke>` / `<|DSML|parameter>`), Go / Node normalize within the fixed set of tool tag names; similar but non-tool tag names (such as `tool_calls_extra`) are still treated as plain text.
+- The Go / Node parsing layer no longer enumerates every DSML typo. It treats `DSML`, pipes `|` / `|`, whitespace, and repeated leading `<` before the tool tag name as tolerable protocol noise, then matches only the fixed local tag names `tool_calls` / `invoke` / `parameter`. For example `<DSML|tool_calls>`, `<<|DSML|tool_calls>`, `<|DSML tool_calls>`, `<DSMLtool_calls>`, and `<<DSML|DSML|tool_calls>` all normalize; similar but non-fixed tag names (such as `tool_calls_extra`) are still treated as plain text.
+- If the model emits an extra trailing pipe after a fixed tool tag name, e.g. `<|DSML|tool_calls|` / `<|DSML|invoke|` / `<|DSML|parameter|`, the compatibility layer treats the trailing `|` as an abnormal tag terminator and appends the missing `>`; if a `>` already follows, the extra `|` is consumed before normalization.
 - This is a narrow fix for common model mistakes and does not change the recommended output format; the prompt still asks the model to output the complete DSML shell directly.
 - Bare `<invoke ...>` / `<parameter ...>` is not treated as "supported tool syntax"; only a `tool_calls` wrapper, or a repairable missing opening wrapper, enters the tool-call path.
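The noise-tolerant normalization above can be sketched with a single regular expression. This regexp-based approach is an assumption for illustration; the project's actual normalizer may be implemented differently.

```go
// Sketch: normalize noisy DSML wrapper variants down to the canonical local
// tag names before XML parsing. Noise before the fixed tag name is DSML
// markers, ASCII/fullwidth pipes, whitespace, and repeated leading '<'.
package main

import (
	"fmt"
	"regexp"
)

var dsmlNoise = regexp.MustCompile(`<+(?:DSML|[\s||])+(tool_calls|invoke|parameter)\b`)

func normalizeDSML(s string) string {
	return dsmlNoise.ReplaceAllString(s, `<$1`)
}

func main() {
	for _, in := range []string{
		`<|DSML|tool_calls>`, // -> <tool_calls>
		`<DSMLtool_calls>`,
		`<<|DSML|invoke name="write_file">`,
		`<|DSML parameter name="content">`,
		`<|DSML|tool_calls_extra>`, // non-fixed tag name: left untouched
	} {
		fmt.Println(normalizeDSML(in))
	}
}
```

Note the `\b` after the fixed tag names: `tool_calls_extra` does not end a word at `tool_calls`, so lookalike tags fall through as plain text, matching the documented behavior.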
@@ -53,7 +54,7 @@

 In the streaming path (consistent across Go / Node):

-- The DSML `<|DSML|tool_calls>` wrapper, compatible variants (`<dsml|tool_calls>`, `<|tool_calls>`, `<|tool_calls>`, `<|DSML|tool_calls>`), narrowly tolerated space-separated forms (such as `<|DSML tool_calls>`), glued forms (such as `<DSMLtool_calls>`), and the canonical `<tool_calls>` wrapper all enter structured capture
+- The DSML `<|DSML|tool_calls>` wrapper, DSML noise-tolerant forms based on the fixed local tag names, trailing-pipe forms (such as `<|DSML|tool_calls|`), and the canonical `<tool_calls>` wrapper all enter structured capture
 - If the stream starts directly at an invoke but a closing wrapper follows, the Go streaming sieve also attempts recovery via the missing-opening-wrapper repair path
 - Tool calls recognized successfully are not fed back into plain text
 - Blocks that do not match the new format are not executed and continue to pass through as literal text
@@ -61,6 +62,7 @@

 - Supports nested fences (e.g. 4 backticks nesting 3 backticks) and protection of fences inside CDATA
 - If the model opens `<![CDATA[` without closing it, the streaming scan stage keeps buffering conservatively and does not mistake example XML inside the CDATA for a real tool call; at the final parse / flush recovery stage a narrow fix is applied to such loose CDATA, preserving as much as possible any real tool call whose outer wrapper is already complete
 - When the text merely mentions a tag name (e.g. `<dsml|tool_calls>`, or `<|DSML|tool_calls>` in Markdown inline code) and a real tool call follows, the sieve skips the unparsable mention candidate and keeps matching the subsequent real tool block; the mention neither loses the tool call nor truncates the text after it
+- Go-side SSE reading no longer uses `bufio.Scanner`'s fixed token limit; when a single `data:` line contains very long write-file parameters, non-streaming collection, streaming parsing, and auto-continue passthrough should all preserve the complete line before handing it to the tool parser

Additionally, if a `<parameter>` value is itself a valid JSON literal, it is parsed as a structured value instead of always being kept as a string. For example `123`, `true`, `null`, `[1,2]`, `{"a":1}` are restored to the corresponding number / boolean / null / array / object.
Structured XML parameters are likewise restored to JSON structure: if a parameter body contains only one or more `<item>...</item>` children, an array is produced; item-only fields inside nested objects are treated as arrays too. For example `<parameter name="questions"><item><question>...</question></item></parameter>` yields `{"questions":[{"question":"..."}]}`, not `{"questions":{"item":...}}`.
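The JSON-literal restoration rule for `<parameter>` values can be sketched directly with `encoding/json`. This is a sketch, not the project's actual decoder; the function name `decodeParameterValue` is hypothetical.

```go
// Sketch: if the raw <parameter> text is a valid JSON literal, restore it as
// a structured value; otherwise keep it as a plain string.
package main

import (
	"encoding/json"
	"fmt"
)

func decodeParameterValue(raw string) any {
	var v any
	if err := json.Unmarshal([]byte(raw), &v); err == nil {
		return v
	}
	// Not a JSON literal: keep the original string unchanged.
	return raw
}

func main() {
	for _, raw := range []string{`123`, `true`, `null`, `[1,2]`, `{"a":1}`, `plain text`} {
		fmt.Printf("%q -> %#v\n", raw, decodeParameterValue(raw))
	}
}
```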
@@ -94,7 +96,7 @@ node --test tests/node/stream-tool-sieve.test.js

 - DSML `<|DSML|tool_calls>` wrapper parses correctly
 - Legacy canonical `<tool_calls>` wrapper parses correctly
-- Alias variants (`<dsml|tool_calls>`, `<|tool_calls>`, `<|tool_calls>`), DSML space-separated typos (such as `<|DSML tool_calls>`), and glued typos (such as `<DSMLtool_calls>`) parse correctly
+- DSML noise-tolerant forms based on the fixed local tag names (such as `<DSML|tool_calls>`, `<<|DSML|tool_calls>`, `<|DSML tool_calls>`, `<DSMLtool_calls>`, `<<DSML|DSML|tool_calls>`) parse correctly
 - Mixed tags (DSML wrapper + canonical inner) parse correctly after normalization
 - Examples inside tilde fences `~~~` are not executed
 - Examples inside nested fences (4 backticks nesting 3 backticks) are not executed
go.mod (3 changes)
@@ -6,10 +6,13 @@ require (
	github.com/andybalholm/brotli v1.2.1
	github.com/go-chi/chi/v5 v5.2.5
	github.com/google/uuid v1.6.0
	github.com/hupe1980/go-tiktoken v0.0.10
	github.com/refraction-networking/utls v1.8.2
	github.com/router-for-me/CLIProxyAPI/v6 v6.9.14
)

require github.com/dlclark/regexp2 v1.11.5 // indirect

require (
	github.com/klauspost/compress v1.18.5 // indirect
	github.com/sirupsen/logrus v1.9.4 // indirect
go.sum (6 changes)
@@ -2,10 +2,14 @@ github.com/andybalholm/brotli v1.2.1 h1:R+f5xP285VArJDRgowrfb9DqL18yVK0gKAW/F+eT
github.com/andybalholm/brotli v1.2.1/go.mod h1:rzTDkvFWvIrjDXZHkuS16NPggd91W3kUSvPlQ1pLaKY=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dlclark/regexp2 v1.11.5 h1:Q/sSnsKerHeCkc/jSTNq1oCm7KiVgUMZRDUoRu0JQZQ=
github.com/dlclark/regexp2 v1.11.5/go.mod h1:DHkYz0B9wPfa6wondMfaivmHpzrQ3v9q8cnmRbL6yW8=
github.com/go-chi/chi/v5 v5.2.5 h1:Eg4myHZBjyvJmAFjFvWgrqDTXFyOzjj7YIm3L3mu6Ug=
github.com/go-chi/chi/v5 v5.2.5/go.mod h1:X7Gx4mteadT3eDOMTsXzmI4/rwUpOwBHLpAfupzFJP0=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/hupe1980/go-tiktoken v0.0.10 h1:m6phOJaGyctqWdGIgwn9X8AfJvaG74tnQoDL+ntOUEQ=
github.com/hupe1980/go-tiktoken v0.0.10/go.mod h1:NME6d8hrE+Jo+kLUZHhXShYV8e40hYkm4BbSLQKtvAo=
github.com/klauspost/compress v1.18.5 h1:/h1gH5Ce+VWNLSWqPzOVn6XBO+vJbCNGvjoaGBFW2IE=
github.com/klauspost/compress v1.18.5/go.mod h1:cwPg85FWrGar70rWktvGQj8/hthj3wpl0PGDogxkrSQ=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
@@ -37,6 +41,8 @@ golang.org/x/net v0.52.0 h1:He/TN1l0e4mmR3QqHMT2Xab3Aj3L9qjbhRm78/6jrW0=
golang.org/x/net v0.52.0/go.mod h1:R1MAz7uMZxVMualyPXb+VaqGSa3LIaUqk0eEt3w36Sw=
golang.org/x/sys v0.42.0 h1:omrd2nAlyT5ESRdCLYdm3+fMfNFE/+Rf4bDIQImRJeo=
golang.org/x/sys v0.42.0/go.mod h1:4GL1E5IUh+htKOUEOaiffhrAeqysfVGipDYzABqnCmw=
golang.org/x/text v0.35.0 h1:JOVx6vVDFokkpaq1AEptVzLTpDe9KGpj5tR4/X+ybL8=
golang.org/x/text v0.35.0/go.mod h1:khi/HExzZJ2pGnjenulevKNX1W67CUy0AsXcNubPGCA=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
@@ -310,8 +310,12 @@ func (s *Store) Update(id string, params UpdateParams) (Entry, error) {
 	if params.Status != "" {
 		item.Status = params.Status
 	}
-	item.ReasoningContent = params.ReasoningContent
-	item.Content = params.Content
+	if params.ReasoningContent != "" || item.ReasoningContent == "" {
+		item.ReasoningContent = params.ReasoningContent
+	}
+	if params.Content != "" || item.Content == "" {
+		item.Content = params.Content
+	}
 	item.Error = strings.TrimSpace(params.Error)
 	item.StatusCode = params.StatusCode
 	item.ElapsedMs = params.ElapsedMs
@@ -493,3 +493,112 @@ func TestStoreWritesOnlyChangedDetailFiles(t *testing.T) {
 		t.Fatalf("expected untouched detail file to remain byte-identical")
 	}
 }
+
+func TestUpdatePreservesContentWhenNewContentIsEmpty(t *testing.T) {
+	path := filepath.Join(t.TempDir(), "chat_history.json")
+	store := New(path)
+
+	started, err := store.Start(StartParams{
+		CallerID:  "caller:abc",
+		Model:     "deepseek-v4-flash",
+		Stream:    true,
+		UserInput: "hello",
+	})
+	if err != nil {
+		t.Fatalf("start entry failed: %v", err)
+	}
+
+	if _, err := store.Update(started.ID, UpdateParams{
+		Status:           "streaming",
+		ReasoningContent: "let me think",
+		Content:          "I'll help you with that.",
+	}); err != nil {
+		t.Fatalf("progress update failed: %v", err)
+	}
+
+	updated, err := store.Update(started.ID, UpdateParams{
+		Status:    "success",
+		Content:   "",
+		Completed: true,
+	})
+	if err != nil {
+		t.Fatalf("success update failed: %v", err)
+	}
+
+	if updated.Content != "I'll help you with that." {
+		t.Fatalf("expected content to be preserved, got %q", updated.Content)
+	}
+	if updated.ReasoningContent != "let me think" {
+		t.Fatalf("expected reasoning content to be preserved, got %q", updated.ReasoningContent)
+	}
+
+	full, err := store.Get(started.ID)
+	if err != nil {
+		t.Fatalf("get entry failed: %v", err)
+	}
+	if full.Content != "I'll help you with that." {
+		t.Fatalf("expected persisted content to be preserved, got %q", full.Content)
+	}
+	if full.ReasoningContent != "let me think" {
+		t.Fatalf("expected persisted reasoning content to be preserved, got %q", full.ReasoningContent)
+	}
+}
+
+func TestUpdateAllowsSettingContentFromEmpty(t *testing.T) {
+	path := filepath.Join(t.TempDir(), "chat_history.json")
+	store := New(path)
+
+	started, err := store.Start(StartParams{
+		CallerID:  "caller:abc",
+		Model:     "deepseek-v4-flash",
+		Stream:    true,
+		UserInput: "hello",
+	})
+	if err != nil {
+		t.Fatalf("start entry failed: %v", err)
+	}
+
+	updated, err := store.Update(started.ID, UpdateParams{
+		Status:  "success",
+		Content: "final answer",
+	})
+	if err != nil {
+		t.Fatalf("update failed: %v", err)
+	}
+	if updated.Content != "final answer" {
+		t.Fatalf("expected content to be set, got %q", updated.Content)
+	}
+}
+
+func TestUpdateAllowsOverwritingContentWithNewValue(t *testing.T) {
+	path := filepath.Join(t.TempDir(), "chat_history.json")
+	store := New(path)
+
+	started, err := store.Start(StartParams{
+		CallerID:  "caller:abc",
+		Model:     "deepseek-v4-flash",
+		Stream:    true,
+		UserInput: "hello",
+	})
+	if err != nil {
+		t.Fatalf("start entry failed: %v", err)
+	}
+
+	if _, err := store.Update(started.ID, UpdateParams{
+		Status:  "streaming",
+		Content: "partial",
+	}); err != nil {
+		t.Fatalf("first update failed: %v", err)
+	}
+
+	updated, err := store.Update(started.ID, UpdateParams{
+		Status:  "success",
+		Content: "final answer",
+	})
+	if err != nil {
+		t.Fatalf("second update failed: %v", err)
+	}
+	if updated.Content != "final answer" {
+		t.Fatalf("expected content to be overwritten, got %q", updated.Content)
+	}
+}
@@ -133,33 +133,51 @@ func pumpAutoContinue(ctx context.Context, pw *io.PipeWriter, initial io.ReadClo
 // sentinels are consumed (not forwarded) so that the downstream only sees
 // one final [DONE] at the very end.
 func streamBodyWithContinueState(ctx context.Context, pw *io.PipeWriter, body io.Reader, state *continueState) (bool, error) {
-	scanner := bufio.NewScanner(body)
-	scanner.Buffer(make([]byte, 0, 64*1024), 2*1024*1024)
+	reader := bufio.NewReaderSize(body, 64*1024)
 	hadDone := false
-	for scanner.Scan() {
+	for {
 		select {
 		case <-ctx.Done():
 			return hadDone, ctx.Err()
 		default:
 		}
-		line := append([]byte{}, scanner.Bytes()...)
-		trimmed := strings.TrimSpace(string(line))
-		if trimmed == "" {
-			continue
-		}
-		if strings.HasPrefix(trimmed, "data:") {
-			data := strings.TrimSpace(strings.TrimPrefix(trimmed, "data:"))
-			if data == "[DONE]" {
-				hadDone = true
-				continue
-			}
-			state.observe(data)
-		}
-		if _, err := io.Copy(pw, bytes.NewReader(append(line, '\n'))); err != nil {
-			return hadDone, err
-		}
+		line, err := reader.ReadBytes('\n')
+		if len(line) == 0 && err != nil {
+			if err == io.EOF {
+				return hadDone, nil
+			}
+			return hadDone, err
+		}
+		trimmed := strings.TrimSpace(string(line))
+		if trimmed != "" {
+			if strings.HasPrefix(trimmed, "data:") {
+				data := strings.TrimSpace(strings.TrimPrefix(trimmed, "data:"))
+				if data == "[DONE]" {
+					hadDone = true
+					if err != nil && err != io.EOF {
+						return hadDone, err
+					}
+					if err == io.EOF {
+						return hadDone, nil
+					}
+					continue
+				}
+				state.observe(data)
+			}
+			if !strings.HasSuffix(string(line), "\n") {
+				line = append(line, '\n')
+			}
+			if _, copyErr := io.Copy(pw, bytes.NewReader(line)); copyErr != nil {
+				return hadDone, copyErr
+			}
+		}
+		if err != nil {
+			if err == io.EOF {
+				return hadDone, nil
+			}
+			return hadDone, err
+		}
 	}
-	return hadDone, scanner.Err()
 }
@@ -175,34 +193,48 @@ func (s *continueState) observe(data string) {
 // observe extracts continue-relevant signals from an SSE JSON chunk.
 	if id := intFrom(chunk["response_message_id"]); id > 0 {
 		s.responseMessageID = id
 	}
-	// Path-based status: {"p": "response/status", "v": "FINISHED"}
-	if p, _ := chunk["p"].(string); p == "response/status" {
-		s.setStatus(asString(chunk["v"]))
-	}
+	s.observeDirectPatch(asString(chunk["p"]), chunk["v"])
+	if p, _ := chunk["p"].(string); p == "response" {
+		s.observeBatchPatches("response", chunk["v"])
+	} else {
+		s.observeBatchPatches("", chunk["v"])
+	}
-	// Nested v.response
-	v, _ := chunk["v"].(map[string]any)
-	if response, _ := v["response"].(map[string]any); response != nil {
-		if id := intFrom(response["message_id"]); id > 0 {
-			s.responseMessageID = id
-		}
-		s.setStatus(asString(response["status"]))
-		if autoContinue, ok := response["auto_continue"].(bool); ok && autoContinue {
-			s.lastStatus = "AUTO_CONTINUE"
-		}
-	}
-	// Nested message.response
-	if message, _ := chunk["message"].(map[string]any); message != nil {
-		if response, _ := message["response"].(map[string]any); response != nil {
-			if id := intFrom(response["message_id"]); id > 0 {
-				s.responseMessageID = id
-			}
-			s.setStatus(asString(response["status"]))
-		}
-	}
+	if v, _ := chunk["v"].(map[string]any); v != nil {
+		s.observeResponseObject(v["response"])
+	}
+	if message, _ := chunk["message"].(map[string]any); message != nil {
+		s.observeResponseObject(message["response"])
+	}
 }
+
+func (s *continueState) observeDirectPatch(path string, value any) {
+	if s == nil {
+		return
+	}
+	switch strings.Trim(strings.TrimSpace(path), "/") {
+	case "response/status", "status", "response/quasi_status", "quasi_status":
+		s.setStatus(asString(value))
+	case "response/auto_continue", "auto_continue":
+		if v, ok := value.(bool); ok && v {
+			s.lastStatus = "AUTO_CONTINUE"
+		}
+	}
+}
+
+func (s *continueState) observeResponseObject(raw any) {
+	if s == nil {
+		return
+	}
+	response, _ := raw.(map[string]any)
+	if response == nil {
+		return
+	}
+	if id := intFrom(response["message_id"]); id > 0 {
+		s.responseMessageID = id
+	}
+	s.setStatus(asString(response["status"]))
+	if autoContinue, ok := response["auto_continue"].(bool); ok && autoContinue {
+		s.lastStatus = "AUTO_CONTINUE"
+	}
+}
@@ -230,6 +262,10 @@ func (s *continueState) observeBatchPatches(parentPath string, raw any) {
 		switch strings.Trim(strings.TrimSpace(fullPath), "/") {
 		case "response/status", "status", "response/quasi_status", "quasi_status":
 			s.setStatus(asString(m["v"]))
+		case "response/auto_continue", "auto_continue":
+			if v, ok := m["v"].(bool); ok && v {
+				s.lastStatus = "AUTO_CONTINUE"
+			}
 		}
 	}
 }
@@ -150,6 +150,62 @@ func TestAutoContinueDoesNotTriggerOnPlainWIPWithoutExplicitContinuationSignal(t
 	}
 }
+
+func TestAutoContinuePassesThroughLongSingleSSELine(t *testing.T) {
+	payload := strings.Repeat("x", 2*1024*1024+4096)
+	initialBody := `data: {"p":"response/content","v":"` + payload + `"}` + "\n" +
+		`data: [DONE]` + "\n"
+
+	body := newAutoContinueBody(context.Background(), io.NopCloser(strings.NewReader(initialBody)), "session-123", 8, func(context.Context, string, int) (*http.Response, error) {
+		return nil, errors.New("continue should not have been called")
+	})
+	defer func() { _ = body.Close() }()
+
+	out, err := io.ReadAll(body)
+	if err != nil {
+		t.Fatalf("read body failed: %v", err)
+	}
+	if !bytes.Contains(out, []byte(payload)) {
+		t.Fatalf("expected long SSE payload to pass through, got len=%d want payload len=%d", len(out), len(payload))
+	}
+	if !bytes.Contains(out, []byte(`data: [DONE]`)) {
+		t.Fatalf("expected final DONE sentinel in body, got len=%d", len(out))
+	}
+}
+
+func TestAutoContinueTriggersOnDirectQuasiStatusIncomplete(t *testing.T) {
+	initialBody := strings.Join([]string{
+		`data: {"response_message_id":321,"p":"response/content","v":"<tool_calls><invoke name=\"write_file\"><parameter name=\"content\"><![CDATA[part-one"}`,
+		`data: {"p":"response/quasi_status","v":"INCOMPLETE"}`,
+		`data: [DONE]`,
+	}, "\n") + "\n"
+
+	var continueCalls atomic.Int32
+	body := newAutoContinueBody(context.Background(), io.NopCloser(strings.NewReader(initialBody)), "session-123", 8, func(context.Context, string, int) (*http.Response, error) {
+		continueCalls.Add(1)
+		return &http.Response{
+			StatusCode: http.StatusOK,
+			Header:     make(http.Header),
+			Body: io.NopCloser(strings.NewReader(
+				`data: {"response_message_id":322,"p":"response/content","v":"-part-two]]></parameter></invoke></tool_calls>"}` + "\n" +
+					`data: {"p":"response/status","v":"FINISHED"}` + "\n" +
+					`data: [DONE]` + "\n",
+			)),
+		}, nil
+	})
+	defer func() { _ = body.Close() }()
+
+	out, err := io.ReadAll(body)
+	if err != nil {
+		t.Fatalf("read body failed: %v", err)
+	}
+	if continueCalls.Load() != 1 {
+		t.Fatalf("expected exactly one continue call, got %d", continueCalls.Load())
+	}
+	if !bytes.Contains(out, []byte("part-one")) || !bytes.Contains(out, []byte("-part-two")) {
+		t.Fatalf("expected continued tool content in body, got=%s", string(out))
+	}
+}
+
+func TestAutoContinueTriggersOnResponseBatchQuasiStatusIncomplete(t *testing.T) {
+	initialBody := strings.Join([]string{
+		`data: {"response_message_id":321,"v":{"response":{"message_id":321,"status":"WIP","auto_continue":false}}}`,
@@ -219,3 +275,33 @@ func (d failingOrCompletionDoer) Do(req *http.Request) (*http.Response, error) {
 	}
 	return nil, errors.New("forced stream failure")
 }
+
+func TestAutoContinuePreservesIncompleteStateWhenNextChunkOmitsStatus(t *testing.T) {
+	initialBody := strings.Join([]string{
+		`data: {"response_message_id":321,"v":{"response":{"message_id":321,"status":"INCOMPLETE"}}}`,
+		`data: {"p":"response/content","v":{"text":"continued"}}`,
+		`data: [DONE]`,
+	}, "\n") + "\n"
+
+	var continueCalls atomic.Int32
+	body := newAutoContinueBody(context.Background(), io.NopCloser(strings.NewReader(initialBody)), "session-123", 8, func(context.Context, string, int) (*http.Response, error) {
+		continueCalls.Add(1)
+		return &http.Response{
+			StatusCode: http.StatusOK,
+			Header:     make(http.Header),
+			Body: io.NopCloser(strings.NewReader(
+				`data: {"response_message_id":322,"p":"response/status","v":"FINISHED"}` + "\n" +
+					`data: [DONE]` + "\n",
+			)),
+		}, nil
+	})
+	defer func() { _ = body.Close() }()
+
+	_, err := io.ReadAll(body)
+	if err != nil {
+		t.Fatalf("read body failed: %v", err)
+	}
+	if continueCalls.Load() != 1 {
+		t.Fatalf("expected exactly one continue call, got %d", continueCalls.Load())
+	}
+}
@@ -2,7 +2,7 @@
   "client": {
     "name": "DeepSeek",
     "platform": "android",
-    "version": "2.0.1",
+    "version": "2.0.4",
     "android_api_level": "35",
     "locale": "zh_CN"
   },
@@ -24,4 +24,4 @@
     "skip_exact_paths": [
       "response/search_status"
     ]
   }
 }
@@ -2,20 +2,24 @@ package protocol

 import (
 	"bufio"
+	"io"
 	"net/http"
 )

 func ScanSSELines(resp *http.Response, onLine func([]byte) bool) error {
-	scanner := bufio.NewScanner(resp.Body)
-	buf := make([]byte, 0, 64*1024)
-	scanner.Buffer(buf, 2*1024*1024)
-	for scanner.Scan() {
-		if !onLine(scanner.Bytes()) {
-			break
-		}
-	}
-	if err := scanner.Err(); err != nil {
-		return err
-	}
-	return nil
+	reader := bufio.NewReaderSize(resp.Body, 64*1024)
+	for {
+		line, err := reader.ReadBytes('\n')
+		if len(line) > 0 {
+			if !onLine(line) {
+				return nil
+			}
+		}
+		if err != nil {
+			if err == io.EOF {
+				return nil
+			}
+			return err
+		}
+	}
 }
26
internal/deepseek/protocol/sse_test.go
Normal file
@@ -0,0 +1,26 @@
package protocol

import (
    "io"
    "net/http"
    "strings"
    "testing"
)

func TestScanSSELinesHandlesLongSingleLine(t *testing.T) {
    payload := strings.Repeat("x", 2*1024*1024+4096)
    body := "data: {\"p\":\"response/content\",\"v\":\"" + payload + "\"}\n"
    resp := &http.Response{Body: io.NopCloser(strings.NewReader(body))}

    var got string
    err := ScanSSELines(resp, func(line []byte) bool {
        got = string(line)
        return true
    })
    if err != nil {
        t.Fatalf("ScanSSELines returned error: %v", err)
    }
    if !strings.Contains(got, payload) {
        t.Fatalf("long SSE line was not preserved: got len=%d want payload len=%d", len(got), len(payload))
    }
}
@@ -5,6 +5,7 @@ import (
    "fmt"
    "time"

    "ds2api/internal/prompt"
    "ds2api/internal/util"
)

@@ -43,8 +44,23 @@ func BuildMessageResponse(messageID, model string, normalizedMessages []any, fin
        "stop_reason":   stopReason,
        "stop_sequence": nil,
        "usage": map[string]any{
            "input_tokens":  util.EstimateTokens(fmt.Sprintf("%v", normalizedMessages)),
            "output_tokens": util.EstimateTokens(finalThinking) + util.EstimateTokens(finalText),
            "input_tokens":  util.CountPromptTokens(prompt.MessagesPrepareWithThinking(claudeMessageMaps(normalizedMessages), false), model),
            "output_tokens": util.CountOutputTokens(finalThinking, model) + util.CountOutputTokens(finalText, model),
        },
    }
}

func claudeMessageMaps(messages []any) []map[string]any {
    if len(messages) == 0 {
        return nil
    }
    out := make([]map[string]any, 0, len(messages))
    for _, item := range messages {
        msg, ok := item.(map[string]any)
        if !ok {
            continue
        }
        out = append(out, msg)
    }
    return out
}

@@ -29,7 +29,7 @@ func BuildChatCompletionWithToolCalls(completionID, model, finalPrompt, finalThi
        "created": time.Now().Unix(),
        "model":   model,
        "choices": []map[string]any{{"index": 0, "message": messageObj, "finish_reason": finishReason}},
        "usage":   BuildChatUsage(finalPrompt, finalThinking, finalText),
        "usage":   BuildChatUsageForModel(model, finalPrompt, finalThinking, finalText, 0),
    }
}

@@ -70,7 +70,7 @@ func BuildResponseObjectFromItems(responseID, model, finalPrompt, finalThinking,
        "model":       model,
        "output":      output,
        "output_text": outputText,
        "usage":       BuildResponsesUsage(finalPrompt, finalThinking, finalText),
        "usage":       BuildResponsesUsageForModel(model, finalPrompt, finalThinking, finalText, 0),
    }
}

@@ -6,6 +6,7 @@ import (
    "testing"

    "ds2api/internal/toolcall"
    "ds2api/internal/util"
)

func TestBuildResponseObjectKeepsFencedToolPayloadAsText(t *testing.T) {
@@ -177,3 +178,17 @@ func TestBuildResponseObjectWithToolCallsCoercesSchemaDeclaredStringArguments(t
        t.Fatalf("expected response content stringified by schema, got %#v", args["content"])
    }
}

func TestBuildChatUsageForModelUsesConservativePromptCount(t *testing.T) {
    prompt := strings.Repeat("上下文token ", 40)
    usage := BuildChatUsageForModel("deepseek-v4-flash", prompt, "", "ok", 0)
    promptTokens, _ := usage["prompt_tokens"].(int)
    if promptTokens <= util.EstimateTokens(prompt) {
        t.Fatalf("expected conservative prompt token count > rough estimate, got=%d estimate=%d", promptTokens, util.EstimateTokens(prompt))
    }
    totalTokens, _ := usage["total_tokens"].(int)
    completionTokens, _ := usage["completion_tokens"].(int)
    if totalTokens != promptTokens+completionTokens {
        t.Fatalf("expected total tokens to add up, got usage=%#v", usage)
    }
}

@@ -2,10 +2,10 @@ package openai

import "ds2api/internal/util"

func BuildChatUsage(finalPrompt, finalThinking, finalText string) map[string]any {
    promptTokens := util.EstimateTokens(finalPrompt)
    reasoningTokens := util.EstimateTokens(finalThinking)
    completionTokens := util.EstimateTokens(finalText)
func BuildChatUsageForModel(model, finalPrompt, finalThinking, finalText string, refFileTokens int) map[string]any {
    promptTokens := util.CountPromptTokens(finalPrompt, model) + refFileTokens
    reasoningTokens := util.CountOutputTokens(finalThinking, model)
    completionTokens := util.CountOutputTokens(finalText, model)
    return map[string]any{
        "prompt_tokens":     promptTokens,
        "completion_tokens": reasoningTokens + completionTokens,
@@ -16,13 +16,21 @@ func BuildChatUsage(finalPrompt, finalThinking, finalText string) map[string]any
    }
}

func BuildResponsesUsage(finalPrompt, finalThinking, finalText string) map[string]any {
    promptTokens := util.EstimateTokens(finalPrompt)
    reasoningTokens := util.EstimateTokens(finalThinking)
    completionTokens := util.EstimateTokens(finalText)
func BuildChatUsage(finalPrompt, finalThinking, finalText string) map[string]any {
    return BuildChatUsageForModel("", finalPrompt, finalThinking, finalText, 0)
}

func BuildResponsesUsageForModel(model, finalPrompt, finalThinking, finalText string, refFileTokens int) map[string]any {
    promptTokens := util.CountPromptTokens(finalPrompt, model) + refFileTokens
    reasoningTokens := util.CountOutputTokens(finalThinking, model)
    completionTokens := util.CountOutputTokens(finalText, model)
    return map[string]any{
        "input_tokens":  promptTokens,
        "output_tokens": reasoningTokens + completionTokens,
        "total_tokens":  promptTokens + reasoningTokens + completionTokens,
    }
}

func BuildResponsesUsage(finalPrompt, finalThinking, finalText string) map[string]any {
    return BuildResponsesUsageForModel("", finalPrompt, finalThinking, finalText, 0)
}

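The usage builders above fold reasoning tokens into the completion count and add reference-file tokens to the prompt side. A minimal standalone sketch of that shape, assuming a hypothetical `countTokens` stand-in for the project's `util.CountPromptTokens`/`CountOutputTokens` (roughly one token per four runes):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// countTokens is a hypothetical stand-in for a model-aware counter:
// about one token per four runes, at least one for non-empty text.
func countTokens(s string) int {
	if s == "" {
		return 0
	}
	t := utf8.RuneCountInString(s) / 4
	if t < 1 {
		t = 1
	}
	return t
}

// buildChatUsage mirrors the shape of BuildChatUsageForModel: reasoning
// tokens are folded into completion_tokens, refFileTokens are added to
// the prompt side, and total is always the sum of the other two.
func buildChatUsage(prompt, thinking, text string, refFileTokens int) map[string]int {
	promptTokens := countTokens(prompt) + refFileTokens
	outTokens := countTokens(thinking) + countTokens(text)
	return map[string]int{
		"prompt_tokens":     promptTokens,
		"completion_tokens": outTokens,
		"total_tokens":      promptTokens + outTokens,
	}
}

func main() {
	u := buildChatUsage("some prompt text", "reasoning", "answer", 10)
	fmt.Println(u["total_tokens"] == u["prompt_tokens"]+u["completion_tokens"]) // true
}
```

The thin `BuildChatUsage`/`BuildResponsesUsage` wrappers in the hunk keep the old call sites working by passing an empty model and zero reference-file tokens.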
@@ -206,6 +206,7 @@ func (h *Handler) handleClaudeStreamRealtime(w http.ResponseWriter, r *http.Requ
        h.compatStripReferenceMarkers(),
        toolNames,
        toolsRaw,
        buildClaudePromptTokenText(messages, thinkingEnabled),
    )
    streamRuntime.sendMessageStart()

@@ -3,8 +3,6 @@ package claude

import (
    "encoding/json"
    "net/http"

    "ds2api/internal/util"
)

func (h *Handler) CountTokens(w http.ResponseWriter, r *http.Request) {
@@ -26,26 +24,11 @@ func (h *Handler) CountTokens(w http.ResponseWriter, r *http.Request) {
        writeClaudeError(w, http.StatusBadRequest, "Request must include 'model' and 'messages'.")
        return
    }
    inputTokens := 0
    if sys, ok := req["system"].(string); ok {
        inputTokens += util.EstimateTokens(sys)
    }
    for _, item := range messages {
        msg, ok := item.(map[string]any)
        if !ok {
            continue
        }
        inputTokens += 2
        inputTokens += util.EstimateTokens(extractMessageContent(msg["content"]))
    }
    if tools, ok := req["tools"].([]any); ok {
        for _, t := range tools {
            b, _ := json.Marshal(t)
            inputTokens += util.EstimateTokens(string(b))
        }
    }
    if inputTokens < 1 {
        inputTokens = 1
    normalized, err := normalizeClaudeRequest(h.Store, req)
    if err != nil {
        writeClaudeError(w, http.StatusBadRequest, err.Error())
        return
    }
    inputTokens := countClaudeInputTokens(normalized.Standard)
    writeJSON(w, http.StatusOK, map[string]any{"input_tokens": inputTokens})
}

7
internal/httpapi/claude/prompt_token_text.go
Normal file
@@ -0,0 +1,7 @@
package claude

import "ds2api/internal/prompt"

func buildClaudePromptTokenText(messages []any, thinkingEnabled bool) string {
    return prompt.MessagesPrepareWithThinking(toMessageMaps(messages), thinkingEnabled)
}
@@ -48,17 +48,18 @@ func normalizeClaudeRequest(store ConfigReader, req map[string]any) (claudeNorma

    return claudeNormalizedRequest{
        Standard: promptcompat.StandardRequest{
            Surface:        "anthropic_messages",
            RequestedModel: strings.TrimSpace(model),
            ResolvedModel:  dsModel,
            ResponseModel:  strings.TrimSpace(model),
            Messages:       payload["messages"].([]any),
            ToolsRaw:       toolsRequested,
            FinalPrompt:    finalPrompt,
            ToolNames:      toolNames,
            Stream:         util.ToBool(req["stream"]),
            Thinking:       thinkingEnabled,
            Search:         searchEnabled,
            Surface:         "anthropic_messages",
            RequestedModel:  strings.TrimSpace(model),
            ResolvedModel:   dsModel,
            ResponseModel:   strings.TrimSpace(model),
            Messages:        payload["messages"].([]any),
            PromptTokenText: finalPrompt,
            ToolsRaw:        toolsRequested,
            FinalPrompt:     finalPrompt,
            ToolNames:       toolNames,
            Stream:          util.ToBool(req["stream"]),
            Thinking:        thinkingEnabled,
            Search:          searchEnabled,
        },
        NormalizedMessages: normalizedMessages,
    }, nil

@@ -15,10 +15,11 @@ type claudeStreamRuntime struct {
    rc       *http.ResponseController
    canFlush bool

    model     string
    toolNames []string
    messages  []any
    toolsRaw  any
    model           string
    toolNames       []string
    messages        []any
    toolsRaw        any
    promptTokenText string

    thinkingEnabled bool
    searchEnabled   bool
@@ -49,6 +50,7 @@ func newClaudeStreamRuntime(
    stripReferenceMarkers bool,
    toolNames []string,
    toolsRaw any,
    promptTokenText string,
) *claudeStreamRuntime {
    return &claudeStreamRuntime{
        w: w,
@@ -62,6 +64,7 @@ func newClaudeStreamRuntime(
        stripReferenceMarkers: stripReferenceMarkers,
        toolNames:             toolNames,
        toolsRaw:              toolsRaw,
        promptTokenText:       promptTokenText,
        messageID:             fmt.Sprintf("msg_%d", time.Now().UnixNano()),
        thinkingBlockIndex:    -1,
        textBlockIndex:        -1,

@@ -42,7 +42,10 @@ func (s *claudeStreamRuntime) sendPing() {
}

func (s *claudeStreamRuntime) sendMessageStart() {
    inputTokens := util.EstimateTokens(fmt.Sprintf("%v", s.messages))
    inputTokens := countClaudeInputTokensFromText(s.promptTokenText, s.model)
    if inputTokens == 0 {
        inputTokens = util.CountPromptTokens(fmt.Sprintf("%v", s.messages), s.model)
    }
    s.send("message_start", map[string]any{
        "type": "message_start",
        "message": map[string]any{

@@ -109,7 +109,7 @@ func (s *claudeStreamRuntime) finalize(stopReason string) {
        }
    }

    outputTokens := util.EstimateTokens(finalThinking) + util.EstimateTokens(finalText)
    outputTokens := util.CountOutputTokens(finalThinking, s.model) + util.CountOutputTokens(finalText, s.model)
    s.send("message_delta", map[string]any{
        "type": "message_delta",
        "delta": map[string]any{

20
internal/httpapi/claude/token_count.go
Normal file
@@ -0,0 +1,20 @@
package claude

import (
    "strings"

    "ds2api/internal/promptcompat"
    "ds2api/internal/util"
)

func countClaudeInputTokens(stdReq promptcompat.StandardRequest) int {
    promptText := stdReq.PromptTokenText
    if strings.TrimSpace(promptText) == "" {
        promptText = stdReq.FinalPrompt
    }
    return countClaudeInputTokensFromText(promptText, stdReq.ResolvedModel)
}

func countClaudeInputTokensFromText(promptText, model string) int {
    return util.CountPromptTokens(promptText, model)
}
@@ -36,16 +36,17 @@ func normalizeGeminiRequest(store ConfigReader, routeModel string, req map[strin
    passThrough := collectGeminiPassThrough(req)

    return promptcompat.StandardRequest{
        Surface:        "google_gemini",
        RequestedModel: requestedModel,
        ResolvedModel:  resolvedModel,
        ResponseModel:  requestedModel,
        Messages:       messagesRaw,
        FinalPrompt:    finalPrompt,
        ToolNames:      toolNames,
        Stream:         stream,
        Thinking:       thinkingEnabled,
        Search:         searchEnabled,
        PassThrough:    passThrough,
        Surface:         "google_gemini",
        RequestedModel:  requestedModel,
        ResolvedModel:   resolvedModel,
        ResponseModel:   requestedModel,
        Messages:        messagesRaw,
        PromptTokenText: finalPrompt,
        FinalPrompt:     finalPrompt,
        ToolNames:       toolNames,
        Stream:          stream,
        Thinking:        thinkingEnabled,
        Search:          searchEnabled,
        PassThrough:     passThrough,
    }, nil
}

@@ -227,7 +227,7 @@ func (h *Handler) handleNonStreamGenerateContent(w http.ResponseWriter, resp *ht
//nolint:unused // retained for native Gemini non-stream handling path.
func buildGeminiGenerateContentResponse(model, finalPrompt, finalThinking, finalText string, toolNames []string) map[string]any {
    parts := buildGeminiPartsFromFinal(finalText, finalThinking, toolNames)
    usage := buildGeminiUsage(finalPrompt, finalThinking, finalText)
    usage := buildGeminiUsage(model, finalPrompt, finalThinking, finalText)
    return map[string]any{
        "candidates": []map[string]any{
            {
@@ -245,10 +245,10 @@ func buildGeminiGenerateContentResponse(model, finalPrompt, finalThinking, final
}

//nolint:unused // retained for native Gemini non-stream handling path.
func buildGeminiUsage(finalPrompt, finalThinking, finalText string) map[string]any {
    promptTokens := util.EstimateTokens(finalPrompt)
    reasoningTokens := util.EstimateTokens(finalThinking)
    completionTokens := util.EstimateTokens(finalText)
func buildGeminiUsage(model, finalPrompt, finalThinking, finalText string) map[string]any {
    promptTokens := util.CountPromptTokens(finalPrompt, model)
    reasoningTokens := util.CountOutputTokens(finalThinking, model)
    completionTokens := util.CountOutputTokens(finalText, model)
    return map[string]any{
        "promptTokenCount":     promptTokens,
        "candidatesTokenCount": reasoningTokens + completionTokens,

@@ -194,6 +194,6 @@ func (s *geminiStreamRuntime) finalize() {
            },
        },
        "modelVersion":  s.model,
        "usageMetadata": buildGeminiUsage(s.finalPrompt, finalThinking, finalText),
        "usageMetadata": buildGeminiUsage(s.model, s.finalPrompt, finalThinking, finalText),
    })
}

@@ -126,6 +126,7 @@ func TestStartChatHistoryRecoversFromTransientWriteFailure(t *testing.T) {
    session := startChatHistory(historyStore, req, a, stdReq)
    if session == nil {
        t.Fatalf("expected session even when initial persistence fails")
        return
    }
    if session.disabled {
        t.Fatalf("expected session to remain active after transient start failure")
@@ -194,7 +195,7 @@ func TestHandleStreamContextCancelledMarksHistoryStopped(t *testing.T) {
    rec := httptest.NewRecorder()
    resp := makeOpenAISSEHTTPResponse(`data: {"p":"response/content","v":"hello"}`, `data: [DONE]`)

    h.handleStream(rec, req, resp, "cid-stop", "deepseek-v4-flash", "prompt", false, false, nil, nil, session)
    h.handleStream(rec, req, resp, "cid-stop", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, session)

    snapshot, err := historyStore.Snapshot()
    if err != nil {

@@ -16,12 +16,13 @@ type chatStreamRuntime struct {
    rc       *http.ResponseController
    canFlush bool

    completionID string
    created      int64
    model        string
    finalPrompt  string
    toolNames    []string
    toolsRaw     any
    completionID  string
    created       int64
    model         string
    finalPrompt   string
    refFileTokens int
    toolNames     []string
    toolsRaw      any

    thinkingEnabled bool
    searchEnabled   bool
@@ -52,6 +53,32 @@ type chatStreamRuntime struct {
    finalErrorCode string
}

type chatDeltaBatch struct {
    runtime *chatStreamRuntime
    field   string
    text    strings.Builder
}

func (b *chatDeltaBatch) append(field, text string) {
    if text == "" {
        return
    }
    if b.field != "" && b.field != field {
        b.flush()
    }
    b.field = field
    b.text.WriteString(text)
}

func (b *chatDeltaBatch) flush() {
    if b.field == "" || b.text.Len() == 0 {
        return
    }
    b.runtime.sendDelta(map[string]any{b.field: b.text.String()})
    b.field = ""
    b.text.Reset()
}

func newChatStreamRuntime(
    w http.ResponseWriter,
    rc *http.ResponseController,
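The `chatDeltaBatch` introduced above coalesces consecutive text for the same delta field into one SSE chunk and flushes whenever the field changes. A standalone sketch of that behavior (a hypothetical `send` callback stands in for the runtime's `sendDelta`):

```go
package main

import (
	"fmt"
	"strings"
)

// deltaBatch is a standalone sketch of the chatDeltaBatch idea:
// consecutive text for the same field is buffered into one delta,
// and switching fields flushes the pending buffer first.
type deltaBatch struct {
	send  func(field, text string) // stand-in for runtime.sendDelta
	field string
	text  strings.Builder
}

func (b *deltaBatch) append(field, text string) {
	if text == "" {
		return
	}
	if b.field != "" && b.field != field {
		b.flush() // field changed: emit what was buffered so far
	}
	b.field = field
	b.text.WriteString(text)
}

func (b *deltaBatch) flush() {
	if b.field == "" || b.text.Len() == 0 {
		return
	}
	b.send(b.field, b.text.String())
	b.field = ""
	b.text.Reset()
}

func main() {
	var sent []string
	b := deltaBatch{send: func(f, t string) { sent = append(sent, f+"="+t) }}
	b.append("reasoning_content", "thinking ")
	b.append("reasoning_content", "more") // coalesced with the previous append
	b.append("content", "hello")          // field change flushes reasoning first
	b.flush()
	fmt.Println(sent) // [reasoning_content=thinking more content=hello]
}
```

This is why the later hunks can replace many per-part `sendChunk` calls with `batch.append(...)` plus a single trailing `batch.flush()`: adjacent same-field deltas collapse into fewer stream chunks without reordering fields.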
@@ -106,6 +133,23 @@ func (s *chatStreamRuntime) sendChunk(v any) {
    }
}

func (s *chatStreamRuntime) sendDelta(delta map[string]any) {
    if len(delta) == 0 {
        return
    }
    if !s.firstChunkSent {
        delta["role"] = "assistant"
        s.firstChunkSent = true
    }
    s.sendChunk(openaifmt.BuildChatStreamChunk(
        s.completionID,
        s.created,
        s.model,
        []map[string]any{openaifmt.BuildChatStreamDeltaChoice(0, delta)},
        nil,
    ))
}

func (s *chatStreamRuntime) sendDone() {
    _, _ = s.w.Write([]byte("data: [DONE]\n\n"))
    if s.canFlush {
@@ -146,42 +190,22 @@ func (s *chatStreamRuntime) finalize(finishReason string, deferEmptyOutput bool)
    detected := detectAssistantToolCalls(s.rawText.String(), finalText, s.rawThinking.String(), finalToolDetectionThinking, s.toolNames)
    if len(detected.Calls) > 0 && !s.toolCallsDoneEmitted {
        finishReason = "tool_calls"
        delta := map[string]any{
        s.sendDelta(map[string]any{
            "tool_calls": formatFinalStreamToolCallsWithStableIDs(detected.Calls, s.streamToolCallIDs, s.toolsRaw),
        }
        if !s.firstChunkSent {
            delta["role"] = "assistant"
            s.firstChunkSent = true
        }
        s.sendChunk(openaifmt.BuildChatStreamChunk(
            s.completionID,
            s.created,
            s.model,
            []map[string]any{openaifmt.BuildChatStreamDeltaChoice(0, delta)},
            nil,
        ))
        })
        s.toolCallsEmitted = true
        s.toolCallsDoneEmitted = true
    } else if s.bufferToolContent {
        batch := chatDeltaBatch{runtime: s}
        for _, evt := range toolstream.Flush(&s.toolSieve, s.toolNames) {
            if len(evt.ToolCalls) > 0 {
                batch.flush()
                finishReason = "tool_calls"
                s.toolCallsEmitted = true
                s.toolCallsDoneEmitted = true
                tcDelta := map[string]any{
                s.sendDelta(map[string]any{
                    "tool_calls": formatFinalStreamToolCallsWithStableIDs(evt.ToolCalls, s.streamToolCallIDs, s.toolsRaw),
                }
                if !s.firstChunkSent {
                    tcDelta["role"] = "assistant"
                    s.firstChunkSent = true
                }
                s.sendChunk(openaifmt.BuildChatStreamChunk(
                    s.completionID,
                    s.created,
                    s.model,
                    []map[string]any{openaifmt.BuildChatStreamDeltaChoice(0, tcDelta)},
                    nil,
                ))
                })
                s.resetStreamToolCallState()
            }
            if evt.Content == "" {
@@ -191,21 +215,9 @@ func (s *chatStreamRuntime) finalize(finishReason string, deferEmptyOutput bool)
            if cleaned == "" || (s.searchEnabled && sse.IsCitation(cleaned)) {
                continue
            }
            delta := map[string]any{
                "content": cleaned,
            }
            if !s.firstChunkSent {
                delta["role"] = "assistant"
                s.firstChunkSent = true
            }
            s.sendChunk(openaifmt.BuildChatStreamChunk(
                s.completionID,
                s.created,
                s.model,
                []map[string]any{openaifmt.BuildChatStreamDeltaChoice(0, delta)},
                nil,
            ))
            batch.append("content", cleaned)
        }
        batch.flush()
    }

    if len(detected.Calls) > 0 || s.toolCallsEmitted {
@@ -222,7 +234,7 @@ func (s *chatStreamRuntime) finalize(finishReason string, deferEmptyOutput bool)
        s.sendFailedChunk(status, message, code)
        return true
    }
    usage := openaifmt.BuildChatUsage(s.finalPrompt, finalThinking, finalText)
    usage := openaifmt.BuildChatUsageForModel(s.model, s.finalPrompt, finalThinking, finalText, s.refFileTokens)
    s.finalFinishReason = finishReason
    s.finalUsage = usage
    s.sendChunk(openaifmt.BuildChatStreamChunk(

@@ -256,8 +268,8 @@ func (s *chatStreamRuntime) onParsed(parsed sse.LineResult) streamengine.ParsedD
        return streamengine.ParsedDecision{Stop: true, StopReason: streamengine.StopReasonHandlerRequested}
    }

    newChoices := make([]map[string]any, 0, len(parsed.Parts))
    contentSeen := false
    batch := chatDeltaBatch{runtime: s}
    for _, p := range parsed.ToolDetectionThinkingParts {
        trimmed := sse.TrimContinuationOverlap(s.toolDetectionThinking.String(), p.Text)
        if trimmed != "" {
@@ -265,11 +277,6 @@ func (s *chatStreamRuntime) onParsed(parsed sse.LineResult) streamengine.ParsedD
        }
    }
    for _, p := range parsed.Parts {
        delta := map[string]any{}
        if !s.firstChunkSent {
            delta["role"] = "assistant"
            s.firstChunkSent = true
        }
        if p.Type == "thinking" {
            rawTrimmed := sse.TrimContinuationOverlap(s.rawThinking.String(), p.Text)
            if rawTrimmed != "" {
@@ -286,7 +293,7 @@ func (s *chatStreamRuntime) onParsed(parsed sse.LineResult) streamengine.ParsedD
                continue
            }
            s.thinking.WriteString(trimmed)
            delta["reasoning_content"] = trimmed
            batch.append("reasoning_content", trimmed)
            }
        } else {
            rawTrimmed := sse.TrimContinuationOverlap(s.rawText.String(), p.Text)
@@ -307,7 +314,7 @@ func (s *chatStreamRuntime) onParsed(parsed sse.LineResult) streamengine.ParsedD
                if trimmed == "" {
                    continue
                }
                delta["content"] = trimmed
                batch.append("content", trimmed)
            } else {
                events := toolstream.ProcessChunk(&s.toolSieve, rawTrimmed, s.toolNames)
                for _, evt := range events {
@@ -323,28 +330,22 @@ func (s *chatStreamRuntime) onParsed(parsed sse.LineResult) streamengine.ParsedD
                    if len(formatted) == 0 {
                        continue
                    }
                    batch.flush()
                    tcDelta := map[string]any{
                        "tool_calls": formatted,
                    }
                    s.toolCallsEmitted = true
                    if !s.firstChunkSent {
                        tcDelta["role"] = "assistant"
                        s.firstChunkSent = true
                    }
                    newChoices = append(newChoices, openaifmt.BuildChatStreamDeltaChoice(0, tcDelta))
                    s.sendDelta(tcDelta)
                    continue
                }
                if len(evt.ToolCalls) > 0 {
                    batch.flush()
                    s.toolCallsEmitted = true
                    s.toolCallsDoneEmitted = true
                    tcDelta := map[string]any{
                        "tool_calls": formatFinalStreamToolCallsWithStableIDs(evt.ToolCalls, s.streamToolCallIDs, s.toolsRaw),
                    }
                    if !s.firstChunkSent {
                        tcDelta["role"] = "assistant"
                        s.firstChunkSent = true
                    }
                    newChoices = append(newChoices, openaifmt.BuildChatStreamDeltaChoice(0, tcDelta))
                    s.sendDelta(tcDelta)
                    s.resetStreamToolCallState()
                    continue
                }
@@ -353,25 +354,12 @@ func (s *chatStreamRuntime) onParsed(parsed sse.LineResult) streamengine.ParsedD
                if cleaned == "" || (s.searchEnabled && sse.IsCitation(cleaned)) {
                    continue
                }
                contentDelta := map[string]any{
                    "content": cleaned,
                }
                if !s.firstChunkSent {
                    contentDelta["role"] = "assistant"
                    s.firstChunkSent = true
                }
                newChoices = append(newChoices, openaifmt.BuildChatStreamDeltaChoice(0, contentDelta))
                batch.append("content", cleaned)
                }
            }
        }
    }
        if len(delta) > 0 {
            newChoices = append(newChoices, openaifmt.BuildChatStreamDeltaChoice(0, delta))
        }
    }

    if len(newChoices) > 0 {
        s.sendChunk(openaifmt.BuildChatStreamChunk(s.completionID, s.created, s.model, newChoices, nil))
    }
    batch.flush()
    return streamengine.ParsedDecision{ContentSeen: contentSeen}
}

@@ -28,7 +28,7 @@ type chatNonStreamResult struct {
    responseMessageID int
}

func (h *Handler) handleNonStreamWithRetry(w http.ResponseWriter, ctx context.Context, a *auth.RequestAuth, resp *http.Response, payload map[string]any, pow, completionID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) {
func (h *Handler) handleNonStreamWithRetry(w http.ResponseWriter, ctx context.Context, a *auth.RequestAuth, resp *http.Response, payload map[string]any, pow, completionID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) {
    attempts := 0
    currentResp := resp
    usagePrompt := finalPrompt
@@ -49,9 +49,10 @@ func (h *Handler) handleNonStreamWithRetry(w http.ResponseWriter, ctx context.Co
        detected := detectAssistantToolCalls(result.rawText, result.text, result.rawThinking, result.toolDetectionThinking, toolNames)
        result.detectedCalls = len(detected.Calls)
        result.body = openaifmt.BuildChatCompletionWithToolCalls(completionID, model, usagePrompt, result.thinking, result.text, detected.Calls, toolsRaw)
        addRefFileTokensToUsage(result.body, refFileTokens)
        result.finishReason = chatFinishReason(result.body)
        if !shouldRetryChatNonStream(result, attempts) {
            h.finishChatNonStreamResult(w, result, attempts, usagePrompt, historySession)
            h.finishChatNonStreamResult(w, result, attempts, usagePrompt, refFileTokens, historySession)
            return
        }

@@ -72,7 +73,7 @@ func (h *Handler) handleNonStreamWithRetry(w http.ResponseWriter, ctx context.Co
            config.Logger.Warn("[openai_empty_retry] retry request failed", "surface", "chat.completions", "stream", false, "retry_attempt", attempts, "error", err)
            return
        }
        usagePrompt = usagePromptWithEmptyOutputRetry(finalPrompt, attempts)
        usagePrompt = usagePromptWithEmptyOutputRetry(usagePrompt, attempts)
        currentResp = nextResp
    }
}
@@ -107,7 +108,7 @@ func (h *Handler) collectChatNonStreamAttempt(w http.ResponseWriter, resp *http.
    }, true
}

func (h *Handler) finishChatNonStreamResult(w http.ResponseWriter, result chatNonStreamResult, attempts int, usagePrompt string, historySession *chatHistorySession) {
func (h *Handler) finishChatNonStreamResult(w http.ResponseWriter, result chatNonStreamResult, attempts int, usagePrompt string, refFileTokens int, historySession *chatHistorySession) {
    if result.detectedCalls == 0 && shouldWriteUpstreamEmptyOutputError(result.text) {
        status, message, code := upstreamEmptyOutputDetail(result.contentFilter, result.text, result.thinking)
        if historySession != nil {
@@ -118,7 +119,7 @@ func (h *Handler) finishChatNonStreamResult(w http.ResponseWriter, result chatNo
        return
    }
    if historySession != nil {
        historySession.success(http.StatusOK, result.thinking, result.text, result.finishReason, openaifmt.BuildChatUsage(usagePrompt, result.thinking, result.text))
        historySession.success(http.StatusOK, result.thinking, result.text, result.finishReason, openaifmt.BuildChatUsageForModel("", usagePrompt, result.thinking, result.text, refFileTokens))
    }
    writeJSON(w, http.StatusOK, result.body)
    source := "first_attempt"
@@ -145,8 +146,8 @@ func shouldRetryChatNonStream(result chatNonStreamResult, attempts int) bool {
        strings.TrimSpace(result.text) == ""
}

func (h *Handler) handleStreamWithRetry(w http.ResponseWriter, r *http.Request, a *auth.RequestAuth, resp *http.Response, payload map[string]any, pow, completionID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) {
    streamRuntime, initialType, ok := h.prepareChatStreamRuntime(w, resp, completionID, model, finalPrompt, thinkingEnabled, searchEnabled, toolNames, toolsRaw, historySession)
func (h *Handler) handleStreamWithRetry(w http.ResponseWriter, r *http.Request, a *auth.RequestAuth, resp *http.Response, payload map[string]any, pow, completionID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) {
    streamRuntime, initialType, ok := h.prepareChatStreamRuntime(w, resp, completionID, model, finalPrompt, refFileTokens, thinkingEnabled, searchEnabled, toolNames, toolsRaw, historySession)
    if !ok {
        return
    }
@@ -188,7 +189,7 @@ func (h *Handler) handleStreamWithRetry(w http.ResponseWriter, r *http.Request,
    }
}

func (h *Handler) prepareChatStreamRuntime(w http.ResponseWriter, resp *http.Response, completionID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) (*chatStreamRuntime, string, bool) {
func (h *Handler) prepareChatStreamRuntime(w http.ResponseWriter, resp *http.Response, completionID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) (*chatStreamRuntime, string, bool) {
    if resp.StatusCode != http.StatusOK {
        defer func() { _ = resp.Body.Close() }()
        body, _ := io.ReadAll(resp.Body)
@@ -216,6 +217,7 @@ func (h *Handler) prepareChatStreamRuntime(w http.ResponseWriter, resp *http.Res
        thinkingEnabled, searchEnabled, h.compatStripReferenceMarkers(), toolNames, toolsRaw,
        len(toolNames) > 0, h.toolcallFeatureMatchEnabled() && h.toolcallEarlyEmitHighConfidence(),
    )
    streamRuntime.refFileTokens = refFileTokens
    return streamRuntime, initialType, true
}

@@ -108,11 +108,12 @@ func (h *Handler) ChatCompletions(w http.ResponseWriter, r *http.Request) {
        writeOpenAIError(w, http.StatusInternalServerError, "Failed to get completion.")
        return
    }
    refFileTokens := stdReq.RefFileTokens
    if stdReq.Stream {
        h.handleStreamWithRetry(w, r, a, resp, payload, pow, sessionID, stdReq.ResponseModel, stdReq.FinalPrompt, stdReq.Thinking, stdReq.Search, stdReq.ToolNames, stdReq.ToolsRaw, historySession)
        h.handleStreamWithRetry(w, r, a, resp, payload, pow, sessionID, stdReq.ResponseModel, stdReq.PromptTokenText, refFileTokens, stdReq.Thinking, stdReq.Search, stdReq.ToolNames, stdReq.ToolsRaw, historySession)
        return
    }
    h.handleNonStreamWithRetry(w, r.Context(), a, resp, payload, pow, sessionID, stdReq.ResponseModel, stdReq.FinalPrompt, stdReq.Thinking, stdReq.Search, stdReq.ToolNames, stdReq.ToolsRaw, historySession)
    h.handleNonStreamWithRetry(w, r.Context(), a, resp, payload, pow, sessionID, stdReq.ResponseModel, stdReq.PromptTokenText, refFileTokens, stdReq.Thinking, stdReq.Search, stdReq.ToolNames, stdReq.ToolsRaw, historySession)
}

func (h *Handler) autoDeleteRemoteSession(ctx context.Context, a *auth.RequestAuth, sessionID string) {
@@ -148,7 +149,7 @@ func (h *Handler) autoDeleteRemoteSession(ctx context.Context, a *auth.RequestAu
    }
}

func (h *Handler) handleNonStream(w http.ResponseWriter, resp *http.Response, completionID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) {
func (h *Handler) handleNonStream(w http.ResponseWriter, resp *http.Response, completionID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) {
    if resp.StatusCode != http.StatusOK {
        defer func() { _ = resp.Body.Close() }()
        body, _ := io.ReadAll(resp.Body)
@@ -176,6 +177,9 @@ func (h *Handler) handleNonStream(w http.ResponseWriter, resp *http.Response, co
        return
    }
    respBody := openaifmt.BuildChatCompletionWithToolCalls(completionID, model, finalPrompt, finalThinking, finalText, detected.Calls, toolsRaw)
    if refFileTokens > 0 {
        addRefFileTokensToUsage(respBody, refFileTokens)
    }
    finishReason := "stop"
    if choices, ok := respBody["choices"].([]map[string]any); ok && len(choices) > 0 {
|
||||
if fr, _ := choices[0]["finish_reason"].(string); strings.TrimSpace(fr) != "" {
|
||||
@@ -183,12 +187,12 @@ func (h *Handler) handleNonStream(w http.ResponseWriter, resp *http.Response, co
|
||||
}
|
||||
}
|
||||
if historySession != nil {
|
||||
historySession.success(http.StatusOK, finalThinking, finalText, finishReason, openaifmt.BuildChatUsage(finalPrompt, finalThinking, finalText))
|
||||
historySession.success(http.StatusOK, finalThinking, finalText, finishReason, openaifmt.BuildChatUsageForModel(model, finalPrompt, finalThinking, finalText, refFileTokens))
|
||||
}
|
||||
writeJSON(w, http.StatusOK, respBody)
|
||||
}
|
||||
|
||||
func (h *Handler) handleStream(w http.ResponseWriter, r *http.Request, resp *http.Response, completionID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) {
|
||||
func (h *Handler) handleStream(w http.ResponseWriter, r *http.Request, resp *http.Response, completionID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, historySession *chatHistorySession) {
|
||||
defer func() { _ = resp.Body.Close() }()
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
@@ -233,6 +237,7 @@ func (h *Handler) handleStream(w http.ResponseWriter, r *http.Request, resp *htt
|
||||
bufferToolContent,
|
||||
emitEarlyToolDeltas,
|
||||
)
|
||||
streamRuntime.refFileTokens = refFileTokens
|
||||
|
||||
streamengine.ConsumeSSE(streamengine.ConsumeConfig{
|
||||
Context: r.Context(),
|
||||
|
||||
@@ -1,6 +1,7 @@
package chat

import (
    "context"
    "encoding/json"
    "io"
    "net/http"
@@ -93,7 +94,7 @@ func TestHandleNonStreamReturns429WhenUpstreamOutputEmpty(t *testing.T) {
    )
    rec := httptest.NewRecorder()

    h.handleNonStream(rec, resp, "cid-empty", "deepseek-v4-flash", "prompt", false, false, nil, nil, nil)
    h.handleNonStream(rec, resp, "cid-empty", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, nil)
    if rec.Code != http.StatusTooManyRequests {
        t.Fatalf("expected status 429 for empty upstream output, got %d body=%s", rec.Code, rec.Body.String())
    }
@@ -112,7 +113,7 @@ func TestHandleNonStreamReturnsContentFilterErrorWhenUpstreamFilteredWithoutOutp
    )
    rec := httptest.NewRecorder()

    h.handleNonStream(rec, resp, "cid-empty-filtered", "deepseek-v4-flash", "prompt", false, false, nil, nil, nil)
    h.handleNonStream(rec, resp, "cid-empty-filtered", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, nil)
    if rec.Code != http.StatusBadRequest {
        t.Fatalf("expected status 400 for filtered upstream output, got %d body=%s", rec.Code, rec.Body.String())
    }
@@ -131,7 +132,7 @@ func TestHandleNonStreamReturns429WhenUpstreamHasOnlyThinking(t *testing.T) {
    )
    rec := httptest.NewRecorder()

    h.handleNonStream(rec, resp, "cid-thinking-only", "deepseek-v4-pro", "prompt", true, false, nil, nil, nil)
    h.handleNonStream(rec, resp, "cid-thinking-only", "deepseek-v4-pro", "prompt", 0, true, false, nil, nil, nil)
    if rec.Code != http.StatusTooManyRequests {
        t.Fatalf("expected status 429 for thinking-only upstream output, got %d body=%s", rec.Code, rec.Body.String())
    }
@@ -150,7 +151,7 @@ func TestHandleNonStreamPromotesThinkingToolCallsWhenTextEmpty(t *testing.T) {
    )
    rec := httptest.NewRecorder()

    h.handleNonStream(rec, resp, "cid-thinking-tool", "deepseek-v4-pro", "prompt", true, false, []string{"search"}, nil, nil)
    h.handleNonStream(rec, resp, "cid-thinking-tool", "deepseek-v4-pro", "prompt", 0, true, false, []string{"search"}, nil, nil)
    if rec.Code != http.StatusOK {
        t.Fatalf("expected 200 for thinking tool calls, got %d body=%s", rec.Code, rec.Body.String())
    }
@@ -181,7 +182,7 @@ func TestHandleNonStreamPromotesHiddenThinkingDSMLToolCallsWhenTextEmpty(t *test
    )
    rec := httptest.NewRecorder()

    h.handleNonStream(rec, resp, "cid-hidden-thinking-tool", "deepseek-v4-pro", "prompt", false, false, []string{"search"}, nil, nil)
    h.handleNonStream(rec, resp, "cid-hidden-thinking-tool", "deepseek-v4-pro", "prompt", 0, false, false, []string{"search"}, nil, nil)
    if rec.Code != http.StatusOK {
        t.Fatalf("expected 200 for hidden thinking tool calls, got %d body=%s", rec.Code, rec.Body.String())
    }
@@ -211,7 +212,7 @@ func TestHandleStreamToolsPlainTextStreamsBeforeFinish(t *testing.T) {
    rec := httptest.NewRecorder()
    req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)

    h.handleStream(rec, req, resp, "cid6", "deepseek-v4-flash", "prompt", false, false, []string{"search"}, nil, nil)
    h.handleStream(rec, req, resp, "cid6", "deepseek-v4-flash", "prompt", 0, false, false, []string{"search"}, nil, nil)

    frames, done := parseSSEDataFrames(t, rec.Body.String())
    if !done {
@@ -239,6 +240,118 @@ func TestHandleStreamToolsPlainTextStreamsBeforeFinish(t *testing.T) {
    }
}

func TestHandleStreamThinkingDisabledDoesNotLeakHiddenFragmentContinuations(t *testing.T) {
    h := &Handler{}
    resp := makeSSEHTTPResponse(
        `data: {"p":"response/fragments","o":"APPEND","v":[{"type":"THINK","content":"我们"}]}`,
        `data: {"p":"response/fragments/-1/content","v":"被"}`,
        `data: {"v":"要求"}`,
        `data: {"p":"response/fragments","o":"APPEND","v":[{"type":"RESPONSE","content":"答"}]}`,
        `data: {"p":"response/fragments/-1/content","v":"案"}`,
        `data: [DONE]`,
    )
    rec := httptest.NewRecorder()
    req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)

    h.handleStream(rec, req, resp, "cid-hidden-fragment", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, nil)

    frames, done := parseSSEDataFrames(t, rec.Body.String())
    if !done {
        t.Fatalf("expected [DONE], body=%s", rec.Body.String())
    }
    content := strings.Builder{}
    for _, frame := range frames {
        choices, _ := frame["choices"].([]any)
        for _, item := range choices {
            choice, _ := item.(map[string]any)
            delta, _ := choice["delta"].(map[string]any)
            if c, ok := delta["content"].(string); ok {
                content.WriteString(c)
            }
        }
    }
    if got := content.String(); got != "答案" {
        t.Fatalf("expected only visible response text, got %q body=%s", got, rec.Body.String())
    }
}

func TestHandleStreamEmitsSingleChoiceFramesForMultipleParsedParts(t *testing.T) {
    h := &Handler{}
    resp := makeSSEHTTPResponse(
        `data: {"p":"response/fragments","o":"APPEND","v":[{"type":"THINK","content":"我们"},{"type":"THINK","content":"被"},{"type":"THINK","content":"要求"},{"type":"RESPONSE","content":"答"},{"type":"RESPONSE","content":"案"}]}`,
        `data: [DONE]`,
    )
    rec := httptest.NewRecorder()
    req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)

    h.handleStream(rec, req, resp, "cid-multi-parts", "deepseek-v4-pro", "prompt", 0, true, false, nil, nil, nil)

    frames, done := parseSSEDataFrames(t, rec.Body.String())
    if !done {
        t.Fatalf("expected [DONE], body=%s", rec.Body.String())
    }
    var reasoning, content strings.Builder
    for _, frame := range frames {
        choices, _ := frame["choices"].([]any)
        if len(choices) != 1 {
            t.Fatalf("expected exactly one choice per stream frame, got %d frame=%#v body=%s", len(choices), frame, rec.Body.String())
        }
        choice, _ := choices[0].(map[string]any)
        delta, _ := choice["delta"].(map[string]any)
        reasoning.WriteString(asString(delta["reasoning_content"]))
        content.WriteString(asString(delta["content"]))
    }
    if got := reasoning.String(); got != "我们被要求" {
        t.Fatalf("first-choice-only client would miss reasoning tokens: got %q body=%s", got, rec.Body.String())
    }
    if got := content.String(); got != "答案" {
        t.Fatalf("first-choice-only client would miss content tokens: got %q body=%s", got, rec.Body.String())
    }
}

func TestHandleStreamCoalescesSmallContentDeltas(t *testing.T) {
    h := &Handler{}
    lines := make([]string, 0, 101)
    for i := 0; i < 100; i++ {
        b, _ := json.Marshal(map[string]any{
            "p": "response/content",
            "v": "字",
        })
        lines = append(lines, "data: "+string(b))
    }
    lines = append(lines, "data: [DONE]")
    resp := makeSSEHTTPResponse(lines...)
    rec := httptest.NewRecorder()
    req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)

    h.handleStream(rec, req, resp, "cid-coalesce", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, nil)

    frames, done := parseSSEDataFrames(t, rec.Body.String())
    if !done {
        t.Fatalf("expected [DONE], body=%s", rec.Body.String())
    }
    var content strings.Builder
    contentDeltaFrames := 0
    for _, frame := range frames {
        choices, _ := frame["choices"].([]any)
        if len(choices) != 1 {
            t.Fatalf("expected exactly one choice per stream frame, got %d frame=%#v body=%s", len(choices), frame, rec.Body.String())
        }
        choice, _ := choices[0].(map[string]any)
        delta, _ := choice["delta"].(map[string]any)
        if c, ok := delta["content"].(string); ok {
            contentDeltaFrames++
            content.WriteString(c)
        }
    }
    if got, want := content.String(), strings.Repeat("字", 100); got != want {
        t.Fatalf("coalesced stream content mismatch: got %q want %q body=%s", got, want, rec.Body.String())
    }
    if contentDeltaFrames >= 100 {
        t.Fatalf("expected coalescing to reduce 100 tiny content frames, got %d body=%s", contentDeltaFrames, rec.Body.String())
    }
}

func TestHandleStreamIncompleteCapturedToolJSONFlushesAsTextOnFinalize(t *testing.T) {
    h := &Handler{}
    resp := makeSSEHTTPResponse(
@@ -248,7 +361,7 @@ func TestHandleStreamIncompleteCapturedToolJSONFlushesAsTextOnFinalize(t *testin
    rec := httptest.NewRecorder()
    req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)

    h.handleStream(rec, req, resp, "cid10", "deepseek-v4-flash", "prompt", false, false, []string{"search"}, nil, nil)
    h.handleStream(rec, req, resp, "cid10", "deepseek-v4-flash", "prompt", 0, false, false, []string{"search"}, nil, nil)

    frames, done := parseSSEDataFrames(t, rec.Body.String())
    if !done {
@@ -282,7 +395,7 @@ func TestHandleStreamPromotesThinkingToolCallsOnFinalizeWithoutMidstreamIntercep
    rec := httptest.NewRecorder()
    req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)

    h.handleStream(rec, req, resp, "cid-thinking-stream", "deepseek-v4-pro", "prompt", true, false, []string{"search"}, nil, nil)
    h.handleStream(rec, req, resp, "cid-thinking-stream", "deepseek-v4-pro", "prompt", 0, true, false, []string{"search"}, nil, nil)

    frames, done := parseSSEDataFrames(t, rec.Body.String())
    if !done {
@@ -315,7 +428,7 @@ func TestHandleStreamPromotesHiddenThinkingDSMLToolCallsOnFinalize(t *testing.T)
    rec := httptest.NewRecorder()
    req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)

    h.handleStream(rec, req, resp, "cid-hidden-thinking-stream", "deepseek-v4-pro", "prompt", false, false, []string{"search"}, nil, nil)
    h.handleStream(rec, req, resp, "cid-hidden-thinking-stream", "deepseek-v4-pro", "prompt", 0, false, false, []string{"search"}, nil, nil)

    frames, done := parseSSEDataFrames(t, rec.Body.String())
    if !done {
@@ -349,7 +462,7 @@ func TestHandleStreamEmitsDistinctToolCallIDsAcrossSeparateToolBlocks(t *testing
    rec := httptest.NewRecorder()
    req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)

    h.handleStream(rec, req, resp, "cid-multi", "deepseek-v4-flash", "prompt", false, false, []string{"read_file", "search"}, nil, nil)
    h.handleStream(rec, req, resp, "cid-multi", "deepseek-v4-flash", "prompt", 0, false, false, []string{"read_file", "search"}, nil, nil)

    frames, done := parseSSEDataFrames(t, rec.Body.String())
    if !done {
@@ -415,7 +528,7 @@ func TestHandleStreamCoercesSchemaDeclaredStringArgumentsOnFinalize(t *testing.T
        },
    }

    h.handleStream(rec, req, resp, "cid-string-protect", "deepseek-v4-flash", "prompt", false, false, []string{"Write"}, toolsRaw, nil)
    h.handleStream(rec, req, resp, "cid-string-protect", "deepseek-v4-flash", "prompt", 0, false, false, []string{"Write"}, toolsRaw, nil)

    frames, done := parseSSEDataFrames(t, rec.Body.String())
    if !done {
@@ -447,3 +560,45 @@ func TestHandleStreamCoercesSchemaDeclaredStringArgumentsOnFinalize(t *testing.T
    }
    t.Fatalf("expected at least one streamed tool call delta, body=%s", rec.Body.String())
}

func TestHandleNonStreamWithRetryIncludesRefFileTokensInUsage(t *testing.T) {
    h := &Handler{}

    run := func(refFileTokens int) map[string]any {
        resp := makeSSEHTTPResponse(
            `data: {"p":"response/content","v":"hello world"}`,
            `data: [DONE]`,
        )
        rec := httptest.NewRecorder()
        h.handleNonStreamWithRetry(rec, context.Background(), nil, resp, nil, "", "cid-ref", "deepseek-v4-flash", "prompt", refFileTokens, false, false, nil, nil, nil)
        if rec.Code != http.StatusOK {
            t.Fatalf("expected 200, got %d body=%s", rec.Code, rec.Body.String())
        }
        return decodeJSONBody(t, rec.Body.String())
    }

    base := run(0)
    withRef := run(7)

    baseUsage, _ := base["usage"].(map[string]any)
    refUsage, _ := withRef["usage"].(map[string]any)
    if baseUsage == nil || refUsage == nil {
        t.Fatalf("expected usage objects, base=%#v ref=%#v", base["usage"], withRef["usage"])
    }

    getInt := func(m map[string]any, key string) int {
        t.Helper()
        v, ok := m[key].(float64)
        if !ok {
            t.Fatalf("expected numeric %s, got %#v", key, m[key])
        }
        return int(v)
    }

    if got := getInt(refUsage, "prompt_tokens") - getInt(baseUsage, "prompt_tokens"); got != 7 {
        t.Fatalf("expected prompt_tokens delta 7, got %d", got)
    }
    if got := getInt(refUsage, "total_tokens") - getInt(baseUsage, "total_tokens"); got != 7 {
        t.Fatalf("expected total_tokens delta 7, got %d", got)
    }
}
26
internal/httpapi/openai/chat/ref_file_tokens.go
Normal file
@@ -0,0 +1,26 @@
package chat

// addRefFileTokensToUsage adds inline-uploaded file token estimates to an existing
// usage map inside a response object. This keeps the token accounting aware of file
// content that the upstream model processes but that is not part of the prompt text.
func addRefFileTokensToUsage(obj map[string]any, refFileTokens int) {
    if refFileTokens <= 0 || obj == nil {
        return
    }
    usage, ok := obj["usage"].(map[string]any)
    if !ok || usage == nil {
        return
    }
    for _, key := range []string{"input_tokens", "prompt_tokens"} {
        if v, ok := usage[key]; ok {
            if n, ok := v.(int); ok {
                usage[key] = n + refFileTokens
            }
        }
    }
    if v, ok := usage["total_tokens"]; ok {
        if n, ok := v.(int); ok {
            usage["total_tokens"] = n + refFileTokens
        }
    }
}
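The new helper only bumps counters stored as native Go `int` values, which matches usage maps built in-process by the response builders; values that arrive as other numeric types are deliberately left untouched. A self-contained sketch reproducing the helper (copied from the diff above) together with an example call:

```go
package main

import "fmt"

// Mirror of the diff's addRefFileTokensToUsage: bump the prompt/input and
// total counters in a usage map by the estimated token cost of
// inline-uploaded files, skipping any counter that is not a Go int.
func addRefFileTokensToUsage(obj map[string]any, refFileTokens int) {
	if refFileTokens <= 0 || obj == nil {
		return
	}
	usage, ok := obj["usage"].(map[string]any)
	if !ok || usage == nil {
		return
	}
	for _, key := range []string{"input_tokens", "prompt_tokens"} {
		if v, ok := usage[key]; ok {
			if n, ok := v.(int); ok {
				usage[key] = n + refFileTokens
			}
		}
	}
	if v, ok := usage["total_tokens"]; ok {
		if n, ok := v.(int); ok {
			usage["total_tokens"] = n + refFileTokens
		}
	}
}

func main() {
	resp := map[string]any{
		"usage": map[string]any{
			"prompt_tokens":     10,
			"completion_tokens": 3,
			"total_tokens":      13,
		},
	}
	addRefFileTokensToUsage(resp, 7)
	usage := resp["usage"].(map[string]any)
	fmt.Println(usage["prompt_tokens"], usage["total_tokens"]) // 17 20
}
```

The mutation happens in place because `map[string]any` values share the underlying map, so the caller's `respBody` sees the updated counters without a return value.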
@@ -216,6 +216,45 @@ func TestChatCompletionsInlineUploadFailureReturnsBadRequest(t *testing.T) {
    }
}

func TestChatCompletionsInlineUploadLimitReturnsBadRequest(t *testing.T) {
    ds := &inlineUploadDSStub{}
    h := &openAITestSurface{Store: mockOpenAIConfig{wideInput: true}, Auth: streamStatusAuthStub{}, DS: ds}
    content := []any{map[string]any{"type": "input_text", "text": "hi"}}
    for i := 0; i < 51; i++ {
        content = append(content, map[string]any{
            "type":      "image_url",
            "image_url": map[string]any{"url": "data:image/png;base64,QUJDRA=="},
        })
    }
    body, err := json.Marshal(map[string]any{
        "model": "deepseek-v4-flash",
        "messages": []any{map[string]any{
            "role":    "user",
            "content": content,
        }},
        "stream": false,
    })
    if err != nil {
        t.Fatalf("marshal request: %v", err)
    }
    req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", strings.NewReader(string(body)))
    req.Header.Set("Authorization", "Bearer direct-token")
    req.Header.Set("Content-Type", "application/json")
    rec := httptest.NewRecorder()

    h.ChatCompletions(rec, req)

    if rec.Code != http.StatusBadRequest {
        t.Fatalf("expected 400, got %d body=%s", rec.Code, rec.Body.String())
    }
    if !strings.Contains(rec.Body.String(), "exceeded maximum of 50 inline files per request") {
        t.Fatalf("expected inline file limit error, got body=%s", rec.Body.String())
    }
    if ds.completionReq != nil {
        t.Fatalf("did not expect completion call after inline file limit error")
    }
}

func TestResponsesInlineUploadFailureReturnsInternalServerError(t *testing.T) {
    ds := &inlineUploadDSStub{uploadErr: errors.New("boom")}
    h := &openAITestSurface{Store: mockOpenAIConfig{wideInput: true}, Auth: streamStatusAuthStub{}, DS: ds}
@@ -39,11 +39,12 @@ func (e *inlineFileUploadError) Error() string {
}

type inlineUploadState struct {
    ctx          context.Context
    handler      *Handler
    auth         *auth.RequestAuth
    uploadedByID map[string]string
    uploadCount  int
    ctx             context.Context
    handler         *Handler
    auth            *auth.RequestAuth
    uploadedByID    map[string]string
    uploadCount     int
    inlineFileBytes int
}

type inlineDecodedFile struct {
@@ -75,6 +76,9 @@ func (h *Handler) PreprocessInlineFileInputs(ctx context.Context, a *auth.Reques
    if refIDs := promptcompat.CollectOpenAIRefFileIDs(req); len(refIDs) > 0 {
        req["ref_file_ids"] = stringsToAnySlice(refIDs)
    }
    if state.inlineFileBytes > 0 {
        req["_inline_file_bytes"] = state.inlineFileBytes
    }
    return nil
}

@@ -135,13 +139,15 @@ func (s *inlineUploadState) tryUploadBlock(block map[string]any) (map[string]any
        return nil, false, nil
    }
    if s.uploadCount >= maxInlineFilesPerRequest {
        return nil, true, fmt.Errorf("exceeded maximum of %d inline files per request", maxInlineFilesPerRequest)
        err := fmt.Errorf("exceeded maximum of %d inline files per request", maxInlineFilesPerRequest)
        return nil, true, &inlineFileUploadError{status: http.StatusBadRequest, message: err.Error(), err: err}
    }
    fileID, err := s.uploadInlineFile(decoded)
    if err != nil {
        return nil, true, &inlineFileUploadError{status: http.StatusInternalServerError, message: "Failed to upload inline file.", err: err}
    }
    s.uploadCount++
    s.inlineFileBytes += len(decoded.Data)
    replacement := map[string]any{
        "type":    decoded.ReplacementType,
        "file_id": fileID,
@@ -35,7 +35,6 @@ func (s Service) ApplyCurrentInputFile(ctx context.Context, a *auth.RequestAuth,
    if strings.TrimSpace(fileText) == "" {
        return stdReq, errors.New("current user input file produced empty transcript")
    }

    result, err := s.DS.UploadFile(ctx, a, dsclient.UploadFileRequest{
        Filename:    currentInputFilename,
        ContentType: currentInputContentType,
@@ -62,6 +61,9 @@ func (s Service) ApplyCurrentInputFile(ctx context.Context, a *auth.RequestAuth,
    stdReq.CurrentInputFileApplied = true
    stdReq.RefFileIDs = prependUniqueRefFileID(stdReq.RefFileIDs, fileID)
    stdReq.FinalPrompt, stdReq.ToolNames = promptcompat.BuildOpenAIPrompt(messages, stdReq.ToolsRaw, "", stdReq.ToolChoice, stdReq.Thinking)
    // Token accounting must reflect the actual downstream context:
    // the uploaded history.txt file content + the neutral live prompt.
    stdReq.PromptTokenText = fileText + "\n" + stdReq.FinalPrompt
    return stdReq, nil
}

@@ -14,6 +14,7 @@ import (
    "ds2api/internal/auth"
    dsclient "ds2api/internal/deepseek/client"
    "ds2api/internal/promptcompat"
    "ds2api/internal/util"
)

func historySplitTestMessages() []any {
@@ -298,6 +299,52 @@ func TestApplyCurrentInputFileUploadsFirstTurnWithInjectedWrapper(t *testing.T)
    if len(out.RefFileIDs) != 1 || out.RefFileIDs[0] != "file-inline-1" {
        t.Fatalf("expected current input file id in ref_file_ids, got %#v", out.RefFileIDs)
    }
    if !strings.Contains(out.PromptTokenText, "first turn content that is long enough") {
        t.Fatalf("expected prompt token text to preserve original full context, got %q", out.PromptTokenText)
    }
}

func TestApplyCurrentInputFilePreservesFullContextPromptForTokenCounting(t *testing.T) {
    ds := &inlineUploadDSStub{}
    h := &openAITestSurface{
        Store: mockOpenAIConfig{
            wideInput:           true,
            currentInputEnabled: true,
            currentInputMin:     0,
            thinkingInjection:   boolPtr(true),
        },
        DS: ds,
    }
    req := map[string]any{
        "model":    "deepseek-v4-flash",
        "messages": historySplitTestMessages(),
    }
    stdReq, err := promptcompat.NormalizeOpenAIChatRequest(h.Store, req, "")
    if err != nil {
        t.Fatalf("normalize failed: %v", err)
    }

    out, err := h.applyCurrentInputFile(context.Background(), &auth.RequestAuth{DeepSeekToken: "token"}, stdReq)
    if err != nil {
        t.Fatalf("apply current input file failed: %v", err)
    }
    if out.FinalPrompt == stdReq.FinalPrompt {
        t.Fatalf("expected live prompt to be rewritten after current input file")
    }
    // PromptTokenText must include the uploaded file content (which contains the full context)
    // plus the neutral live prompt — reflecting the actual downstream token cost.
    if !strings.Contains(out.PromptTokenText, "first user turn") || !strings.Contains(out.PromptTokenText, "latest user turn") {
        t.Fatalf("expected prompt token text to contain file context with full conversation, got %q", out.PromptTokenText)
    }
    if strings.Contains(out.PromptTokenText, "[file content end]") || strings.Contains(out.PromptTokenText, "[file name]:") {
        t.Fatalf("expected prompt token text to use raw transcript without wrapper tags, got %q", out.PromptTokenText)
    }
    if !strings.Contains(out.PromptTokenText, "Answer the latest user request directly.") {
        t.Fatalf("expected prompt token text to also include neutral live prompt, got %q", out.PromptTokenText)
    }
    if strings.Contains(out.FinalPrompt, "first user turn") || strings.Contains(out.FinalPrompt, "latest user turn") {
        t.Fatalf("expected live prompt to hide original turns, got %q", out.FinalPrompt)
    }
}

func TestApplyCurrentInputFileUploadsFullContextFile(t *testing.T) {
@@ -434,6 +481,16 @@ func TestChatCompletionsCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *t
    if len(refIDs) == 0 || refIDs[0] != "file-inline-1" {
        t.Fatalf("expected uploaded current input file to be first ref_file_id, got %#v", ds.completionReq["ref_file_ids"])
    }
    var body map[string]any
    if err := json.Unmarshal(rec.Body.Bytes(), &body); err != nil {
        t.Fatalf("decode response failed: %v", err)
    }
    usage, _ := body["usage"].(map[string]any)
    promptTokens := int(usage["prompt_tokens"].(float64))
    neutralCount := util.CountPromptTokens(promptText, "deepseek-v4-flash")
    if promptTokens <= neutralCount {
        t.Fatalf("expected prompt_tokens to exceed neutral live prompt count (includes file context), got=%d neutral=%d", promptTokens, neutralCount)
    }
}

func TestResponsesCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *testing.T) {
@@ -476,6 +533,16 @@ func TestResponsesCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *testing
    if strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
        t.Fatalf("expected prompt to hide original turns, got %s", promptText)
    }
    var body map[string]any
    if err := json.Unmarshal(rec.Body.Bytes(), &body); err != nil {
        t.Fatalf("decode response failed: %v", err)
    }
    usage, _ := body["usage"].(map[string]any)
    inputTokens := int(usage["input_tokens"].(float64))
    neutralCount := util.CountPromptTokens(promptText, "deepseek-v4-flash")
    if inputTokens <= neutralCount {
        t.Fatalf("expected input_tokens to exceed neutral live prompt count (includes file context), got=%d neutral=%d", inputTokens, neutralCount)
    }
}

func TestChatCompletionsCurrentInputFileMapsManagedAuthFailureTo401(t *testing.T) {
@@ -29,7 +29,7 @@ type responsesNonStreamResult struct {
    responseMessageID int
}

func (h *Handler) handleResponsesNonStreamWithRetry(w http.ResponseWriter, ctx context.Context, a *auth.RequestAuth, resp *http.Response, payload map[string]any, pow, owner, responseID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) {
func (h *Handler) handleResponsesNonStreamWithRetry(w http.ResponseWriter, ctx context.Context, a *auth.RequestAuth, resp *http.Response, payload map[string]any, pow, owner, responseID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) {
    attempts := 0
    currentResp := resp
    usagePrompt := finalPrompt
@@ -49,6 +49,9 @@ func (h *Handler) handleResponsesNonStreamWithRetry(w http.ResponseWriter, ctx c
        result.toolDetectionThinking = accumulatedToolDetectionThinking
        result.parsed = detectAssistantToolCalls(result.rawText, result.text, result.rawThinking, result.toolDetectionThinking, toolNames)
        result.body = openaifmt.BuildResponseObjectWithToolCalls(responseID, model, usagePrompt, result.thinking, result.text, result.parsed.Calls, toolsRaw)
        if refFileTokens > 0 {
            addRefFileTokensToUsage(result.body, refFileTokens)
        }

        if !shouldRetryResponsesNonStream(result, attempts) {
            h.finishResponsesNonStreamResult(w, result, attempts, owner, responseID, toolChoice, traceID)
@@ -68,7 +71,7 @@ func (h *Handler) handleResponsesNonStreamWithRetry(w http.ResponseWriter, ctx c
            config.Logger.Warn("[openai_empty_retry] retry request failed", "surface", "responses", "stream", false, "retry_attempt", attempts, "error", err)
            return
        }
        usagePrompt = usagePromptWithEmptyOutputRetry(finalPrompt, attempts)
        usagePrompt = usagePromptWithEmptyOutputRetry(usagePrompt, attempts)
        currentResp = nextResp
    }
}
@@ -129,8 +132,8 @@ func shouldRetryResponsesNonStream(result responsesNonStreamResult, attempts int
        strings.TrimSpace(result.text) == ""
}

func (h *Handler) handleResponsesStreamWithRetry(w http.ResponseWriter, r *http.Request, a *auth.RequestAuth, resp *http.Response, payload map[string]any, pow, owner, responseID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) {
    streamRuntime, initialType, ok := h.prepareResponsesStreamRuntime(w, resp, owner, responseID, model, finalPrompt, thinkingEnabled, searchEnabled, toolNames, toolsRaw, toolChoice, traceID)
func (h *Handler) handleResponsesStreamWithRetry(w http.ResponseWriter, r *http.Request, a *auth.RequestAuth, resp *http.Response, payload map[string]any, pow, owner, responseID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) {
    streamRuntime, initialType, ok := h.prepareResponsesStreamRuntime(w, resp, owner, responseID, model, finalPrompt, refFileTokens, thinkingEnabled, searchEnabled, toolNames, toolsRaw, toolChoice, traceID)
    if !ok {
        return
    }
@@ -171,7 +174,7 @@ func (h *Handler) handleResponsesStreamWithRetry(w http.ResponseWriter, r *http.
    }
}

func (h *Handler) prepareResponsesStreamRuntime(w http.ResponseWriter, resp *http.Response, owner, responseID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) (*responsesStreamRuntime, string, bool) {
func (h *Handler) prepareResponsesStreamRuntime(w http.ResponseWriter, resp *http.Response, owner, responseID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) (*responsesStreamRuntime, string, bool) {
    if resp.StatusCode != http.StatusOK {
        defer func() { _ = resp.Body.Close() }()
        body, _ := io.ReadAll(resp.Body)
@@ -196,6 +199,7 @@ func (h *Handler) prepareResponsesStreamRuntime(w http.ResponseWriter, resp *htt
            h.getResponseStore().put(owner, responseID, obj)
        },
    )
    streamRuntime.refFileTokens = refFileTokens
    streamRuntime.sendCreated()
    return streamRuntime, initialType, true
}
internal/httpapi/openai/responses/ref_file_tokens.go (new file, 26 lines)
@@ -0,0 +1,26 @@
+package responses
+
+// addRefFileTokensToUsage adds inline-uploaded file token estimates to an existing
+// usage map inside a response object. This keeps the token accounting aware of file
+// content that the upstream model processes but that is not part of the prompt text.
+func addRefFileTokensToUsage(obj map[string]any, refFileTokens int) {
+	if refFileTokens <= 0 || obj == nil {
+		return
+	}
+	usage, ok := obj["usage"].(map[string]any)
+	if !ok || usage == nil {
+		return
+	}
+	for _, key := range []string{"input_tokens", "prompt_tokens"} {
+		if v, ok := usage[key]; ok {
+			if n, ok := v.(int); ok {
+				usage[key] = n + refFileTokens
+			}
+		}
+	}
+	if v, ok := usage["total_tokens"]; ok {
+		if n, ok := v.(int); ok {
+			usage["total_tokens"] = n + refFileTokens
+		}
+	}
+}
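The new helper's core idea can be shown in isolation. The sketch below (simplified names, a flat `map[string]int` instead of the handler's `map[string]any`) demonstrates bumping prompt-side and total counters by the estimated token cost of inline-uploaded files while leaving other usage fields untouched; it is an illustration, not the repository's code.

```go
package main

import "fmt"

// addRefFileTokens mirrors the shape of addRefFileTokensToUsage: only
// counters that already exist in the usage map are incremented.
func addRefFileTokens(usage map[string]int, refFileTokens int) {
	if refFileTokens <= 0 || usage == nil {
		return
	}
	for _, key := range []string{"input_tokens", "total_tokens"} {
		if n, ok := usage[key]; ok {
			usage[key] = n + refFileTokens
		}
	}
}

func main() {
	usage := map[string]int{"input_tokens": 120, "output_tokens": 40, "total_tokens": 160}
	addRefFileTokens(usage, 300) // e.g. a 300-token file attachment
	fmt.Println(usage["input_tokens"], usage["output_tokens"], usage["total_tokens"]) // 420 40 460
}
```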
@@ -114,14 +114,15 @@ func (h *Handler) Responses(w http.ResponseWriter, r *http.Request) {
 	}
 
 	responseID := "resp_" + strings.ReplaceAll(uuid.NewString(), "-", "")
+	refFileTokens := stdReq.RefFileTokens
 	if stdReq.Stream {
-		h.handleResponsesStreamWithRetry(w, r, a, resp, payload, pow, owner, responseID, stdReq.ResponseModel, stdReq.FinalPrompt, stdReq.Thinking, stdReq.Search, stdReq.ToolNames, stdReq.ToolsRaw, stdReq.ToolChoice, traceID)
+		h.handleResponsesStreamWithRetry(w, r, a, resp, payload, pow, owner, responseID, stdReq.ResponseModel, stdReq.PromptTokenText, refFileTokens, stdReq.Thinking, stdReq.Search, stdReq.ToolNames, stdReq.ToolsRaw, stdReq.ToolChoice, traceID)
 		return
 	}
-	h.handleResponsesNonStreamWithRetry(w, r.Context(), a, resp, payload, pow, owner, responseID, stdReq.ResponseModel, stdReq.FinalPrompt, stdReq.Thinking, stdReq.Search, stdReq.ToolNames, stdReq.ToolsRaw, stdReq.ToolChoice, traceID)
+	h.handleResponsesNonStreamWithRetry(w, r.Context(), a, resp, payload, pow, owner, responseID, stdReq.ResponseModel, stdReq.PromptTokenText, refFileTokens, stdReq.Thinking, stdReq.Search, stdReq.ToolNames, stdReq.ToolsRaw, stdReq.ToolChoice, traceID)
 }
 
-func (h *Handler) handleResponsesNonStream(w http.ResponseWriter, resp *http.Response, owner, responseID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) {
+func (h *Handler) handleResponsesNonStream(w http.ResponseWriter, resp *http.Response, owner, responseID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) {
 	defer func() { _ = resp.Body.Close() }()
 	if resp.StatusCode != http.StatusOK {
 		body, _ := io.ReadAll(resp.Body)
@@ -148,11 +149,14 @@ func (h *Handler) handleResponsesNonStream(w http.ResponseWriter, resp *http.Res
 	}
 
 	responseObj := openaifmt.BuildResponseObjectWithToolCalls(responseID, model, finalPrompt, sanitizedThinking, sanitizedText, textParsed.Calls, toolsRaw)
+	if refFileTokens > 0 {
+		addRefFileTokensToUsage(responseObj, refFileTokens)
+	}
 	h.getResponseStore().put(owner, responseID, responseObj)
 	writeJSON(w, http.StatusOK, responseObj)
 }
 
-func (h *Handler) handleResponsesStream(w http.ResponseWriter, r *http.Request, resp *http.Response, owner, responseID, model, finalPrompt string, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) {
+func (h *Handler) handleResponsesStream(w http.ResponseWriter, r *http.Request, resp *http.Response, owner, responseID, model, finalPrompt string, refFileTokens int, thinkingEnabled, searchEnabled bool, toolNames []string, toolsRaw any, toolChoice promptcompat.ToolChoicePolicy, traceID string) {
 	defer func() { _ = resp.Body.Close() }()
 	if resp.StatusCode != http.StatusOK {
 		body, _ := io.ReadAll(resp.Body)
@@ -194,6 +198,7 @@ func (h *Handler) handleResponsesStream(w http.ResponseWriter, r *http.Request,
 			h.getResponseStore().put(owner, responseID, obj)
 		},
 	)
+	streamRuntime.refFileTokens = refFileTokens
 	streamRuntime.sendCreated()
 
 	streamengine.ConsumeSSE(streamengine.ConsumeConfig{
@@ -0,0 +1,39 @@
+package responses
+
+import (
+	"strings"
+
+	openaifmt "ds2api/internal/format/openai"
+)
+
+type responsesDeltaBatch struct {
+	runtime *responsesStreamRuntime
+	kind    string
+	text    strings.Builder
+}
+
+func (b *responsesDeltaBatch) append(kind, text string) {
+	if text == "" {
+		return
+	}
+	if b.kind != "" && b.kind != kind {
+		b.flush()
+	}
+	b.kind = kind
+	b.text.WriteString(text)
+}
+
+func (b *responsesDeltaBatch) flush() {
+	if b.kind == "" || b.text.Len() == 0 {
+		return
+	}
+	text := b.text.String()
+	switch b.kind {
+	case "reasoning":
+		b.runtime.sendEvent("response.reasoning.delta", openaifmt.BuildResponsesReasoningDeltaPayload(b.runtime.responseID, text))
+	case "text":
+		b.runtime.emitTextDelta(text)
+	}
+	b.kind = ""
+	b.text.Reset()
+}
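The batching behavior this new type introduces is simple to demonstrate standalone. In this sketch (hypothetical names; the real type dispatches to SSE event emitters instead of a callback), adjacent deltas of the same kind are buffered into one event, and a kind switch forces a flush first:

```go
package main

import (
	"fmt"
	"strings"
)

// deltaBatch buffers same-kind deltas and emits them as one event.
type deltaBatch struct {
	emit func(kind, text string)
	kind string
	buf  strings.Builder
}

func (b *deltaBatch) append(kind, text string) {
	if text == "" {
		return
	}
	if b.kind != "" && b.kind != kind { // switching kinds flushes the old buffer
		b.flush()
	}
	b.kind = kind
	b.buf.WriteString(text)
}

func (b *deltaBatch) flush() {
	if b.kind == "" || b.buf.Len() == 0 {
		return
	}
	b.emit(b.kind, b.buf.String())
	b.kind = ""
	b.buf.Reset()
}

func main() {
	var events []string
	b := &deltaBatch{emit: func(kind, text string) {
		events = append(events, kind+":"+text)
	}}
	b.append("reasoning", "th")
	b.append("reasoning", "ink")
	b.append("text", "he") // forces the reasoning buffer out first
	b.append("text", "llo")
	b.flush()
	fmt.Println(events) // [reasoning:think text:hello]
}
```

Four tiny upstream deltas collapse into two downstream events, which is the same reduction the coalescing test later in this diff asserts on.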
@@ -18,13 +18,14 @@ type responsesStreamRuntime struct {
 	rc       *http.ResponseController
 	canFlush bool
 
-	responseID  string
-	model       string
-	finalPrompt string
-	toolNames   []string
-	toolsRaw    any
-	traceID     string
-	toolChoice  promptcompat.ToolChoicePolicy
+	responseID    string
+	model         string
+	finalPrompt   string
+	refFileTokens int
+	toolNames     []string
+	toolsRaw      any
+	traceID       string
+	toolChoice    promptcompat.ToolChoicePolicy
 
 	thinkingEnabled bool
 	searchEnabled   bool
@@ -221,6 +222,7 @@ func (s *responsesStreamRuntime) onParsed(parsed sse.LineResult) streamengine.Pa
 	}
 
 	contentSeen := false
+	batch := responsesDeltaBatch{runtime: s}
 	for _, p := range parsed.ToolDetectionThinkingParts {
 		trimmed := sse.TrimContinuationOverlap(s.toolDetectionThinking.String(), p.Text)
 		if trimmed != "" {
@@ -246,7 +248,7 @@ func (s *responsesStreamRuntime) onParsed(parsed sse.LineResult) streamengine.Pa
 			continue
 		}
 		s.thinking.WriteString(trimmed)
-		s.sendEvent("response.reasoning.delta", openaifmt.BuildResponsesReasoningDeltaPayload(s.responseID, trimmed))
+		batch.append("reasoning", trimmed)
 		continue
 	}
 
@@ -268,11 +270,13 @@ func (s *responsesStreamRuntime) onParsed(parsed sse.LineResult) streamengine.Pa
 			if trimmed == "" {
 				continue
 			}
-			s.emitTextDelta(trimmed)
+			batch.append("text", trimmed)
 			continue
 		}
+		batch.flush()
 		s.processToolStreamEvents(toolstream.ProcessChunk(&s.sieve, rawTrimmed, s.toolNames), true, true)
 	}
 
+	batch.flush()
 	return streamengine.ParsedDecision{ContentSeen: contentSeen}
 }
@@ -145,7 +145,7 @@ func (s *responsesStreamRuntime) buildCompletedResponseObject(finalThinking, fin
 		}
 	}
 
-	return openaifmt.BuildResponseObjectFromItems(
+	obj := openaifmt.BuildResponseObjectFromItems(
 		s.responseID,
 		s.model,
 		s.finalPrompt,
@@ -154,4 +154,8 @@ func (s *responsesStreamRuntime) buildCompletedResponseObject(finalThinking, fin
 		output,
 		outputText,
 	)
+	if s.refFileTokens > 0 {
+		addRefFileTokensToUsage(obj, s.refFileTokens)
+	}
+	return obj
 }
@@ -27,7 +27,7 @@ func TestHandleResponsesStreamDoesNotEmitReasoningTextCompatEvents(t *testing.T)
 		Body: io.NopCloser(strings.NewReader(streamBody)),
 	}
 
-	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", true, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", 0, true, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
 
 	body := rec.Body.String()
 	if !strings.Contains(body, "event: response.reasoning.delta") {
@@ -57,7 +57,7 @@ func TestHandleResponsesStreamEmitsOutputTextDoneBeforeContentPartDone(t *testin
 		Body: io.NopCloser(strings.NewReader(streamBody)),
 	}
 
-	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", false, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
 	body := rec.Body.String()
 	if !strings.Contains(body, "event: response.output_text.done") {
 		t.Fatalf("expected response.output_text.done payload, body=%s", body)
@@ -91,7 +91,7 @@ func TestHandleResponsesStreamOutputTextDeltaCarriesItemIndexes(t *testing.T) {
 		Body: io.NopCloser(strings.NewReader(streamBody)),
 	}
 
-	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", false, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
 	body := rec.Body.String()
 
 	deltaPayload, ok := extractSSEEventPayload(body, "response.output_text.delta")
@@ -109,6 +109,48 @@ func TestHandleResponsesStreamOutputTextDeltaCarriesItemIndexes(t *testing.T) {
 	}
 }
 
+func TestHandleResponsesStreamCoalescesSmallOutputTextDeltas(t *testing.T) {
+	h := &Handler{}
+	req := httptest.NewRequest(http.MethodPost, "/v1/responses", nil)
+	rec := httptest.NewRecorder()
+
+	var streamBody strings.Builder
+	for i := 0; i < 100; i++ {
+		b, _ := json.Marshal(map[string]any{
+			"p": "response/content",
+			"v": "字",
+		})
+		streamBody.WriteString("data: ")
+		streamBody.WriteString(string(b))
+		streamBody.WriteString("\n")
+	}
+	streamBody.WriteString("data: [DONE]\n")
+	resp := &http.Response{
+		StatusCode: http.StatusOK,
+		Body:       io.NopCloser(strings.NewReader(streamBody.String())),
+	}
+
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_coalesce", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
+
+	payloads := extractSSEEventPayloads(rec.Body.String(), "response.output_text.delta")
+	if len(payloads) == 0 {
+		t.Fatalf("expected response.output_text.delta payloads, body=%s", rec.Body.String())
+	}
+	var content strings.Builder
+	for _, payload := range payloads {
+		content.WriteString(asString(payload["delta"]))
+	}
+	if got, want := content.String(), strings.Repeat("字", 100); got != want {
+		t.Fatalf("coalesced response content mismatch: got %q want %q body=%s", got, want, rec.Body.String())
+	}
+	if len(payloads) >= 100 {
+		t.Fatalf("expected coalescing to reduce 100 tiny text deltas, got %d body=%s", len(payloads), rec.Body.String())
+	}
+	if !strings.Contains(rec.Body.String(), "event: response.completed") {
+		t.Fatalf("expected completed event, body=%s", rec.Body.String())
+	}
+}
+
 func TestHandleResponsesStreamEmitsDistinctToolCallIDsAcrossSeparateToolBlocks(t *testing.T) {
 	h := &Handler{}
 	req := httptest.NewRequest(http.MethodPost, "/v1/responses", nil)
@@ -130,7 +172,7 @@ func TestHandleResponsesStreamEmitsDistinctToolCallIDsAcrossSeparateToolBlocks(t
 		Body: io.NopCloser(strings.NewReader(streamBody)),
 	}
 
-	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", false, false, []string{"read_file", "search"}, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", 0, false, false, []string{"read_file", "search"}, nil, promptcompat.DefaultToolChoicePolicy(), "")
 
 	body := rec.Body.String()
 	doneEvents := extractSSEEventPayloads(body, "response.function_call_arguments.done")
@@ -183,7 +225,7 @@ func TestHandleResponsesStreamRequiredToolChoiceFailure(t *testing.T) {
 		Mode:    promptcompat.ToolChoiceRequired,
 		Allowed: map[string]struct{}{"read_file": {}},
 	}
-	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", false, false, []string{"read_file"}, nil, policy, "")
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", 0, false, false, []string{"read_file"}, nil, policy, "")
 
 	body := rec.Body.String()
 	if !strings.Contains(body, "event: response.failed") {
@@ -213,7 +255,7 @@ func TestHandleResponsesStreamFailsWhenUpstreamHasOnlyThinking(t *testing.T) {
 		Body: io.NopCloser(strings.NewReader(streamBody)),
 	}
 
-	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", true, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", 0, true, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
 
 	body := rec.Body.String()
 	if !strings.Contains(body, "event: response.failed") {
@@ -251,7 +293,7 @@ func TestHandleResponsesStreamPromotesThinkingToolCallsOnFinalizeWithoutMidstrea
 		Body: io.NopCloser(strings.NewReader(streamBody)),
 	}
 
-	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", true, false, []string{"read_file"}, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", 0, true, false, []string{"read_file"}, nil, promptcompat.DefaultToolChoicePolicy(), "")
 
 	body := rec.Body.String()
 	if strings.Contains(body, "event: response.reasoning.delta") {
@@ -288,7 +330,7 @@ func TestHandleResponsesStreamPromotesHiddenThinkingDSMLToolCallsOnFinalize(t *t
 		Mode:    promptcompat.ToolChoiceRequired,
 		Allowed: map[string]struct{}{"read_file": {}},
 	}
-	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_hidden", "deepseek-v4-pro", "prompt", false, false, []string{"read_file"}, nil, policy, "")
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_hidden", "deepseek-v4-pro", "prompt", 0, false, false, []string{"read_file"}, nil, policy, "")
 
 	body := rec.Body.String()
 	if strings.Contains(body, "event: response.reasoning.delta") {
@@ -317,7 +359,7 @@ func TestHandleResponsesNonStreamRequiredToolChoiceViolation(t *testing.T) {
 		Allowed: map[string]struct{}{"read_file": {}},
 	}
 
-	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", false, false, []string{"read_file"}, nil, policy, "")
+	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", 0, false, false, []string{"read_file"}, nil, policy, "")
 	if rec.Code != http.StatusUnprocessableEntity {
 		t.Fatalf("expected 422 for required tool_choice violation, got %d body=%s", rec.Code, rec.Body.String())
 	}
@@ -344,7 +386,7 @@ func TestHandleResponsesNonStreamRequiredToolChoiceIgnoresThinkingToolPayloadWhe
 		Allowed: map[string]struct{}{"read_file": {}},
 	}
 
-	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", true, false, []string{"read_file"}, nil, policy, "")
+	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", 0, true, false, []string{"read_file"}, nil, policy, "")
 	if rec.Code != http.StatusUnprocessableEntity {
 		t.Fatalf("expected 422 for required tool_choice violation, got %d body=%s", rec.Code, rec.Body.String())
 	}
@@ -366,7 +408,7 @@ func TestHandleResponsesNonStreamReturns429WhenUpstreamOutputEmpty(t *testing.T)
 		)),
 	}
 
-	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", false, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
 	if rec.Code != http.StatusTooManyRequests {
 		t.Fatalf("expected 429 for empty upstream output, got %d body=%s", rec.Code, rec.Body.String())
 	}
@@ -388,7 +430,7 @@ func TestHandleResponsesNonStreamReturnsContentFilterErrorWhenUpstreamFilteredWi
 		)),
 	}
 
-	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", false, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-flash", "prompt", 0, false, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
 	if rec.Code != http.StatusBadRequest {
 		t.Fatalf("expected 400 for filtered empty upstream output, got %d body=%s", rec.Code, rec.Body.String())
 	}
@@ -410,7 +452,7 @@ func TestHandleResponsesNonStreamReturns429WhenUpstreamHasOnlyThinking(t *testin
 		)),
 	}
 
-	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", true, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", 0, true, false, nil, nil, promptcompat.DefaultToolChoicePolicy(), "")
 	if rec.Code != http.StatusTooManyRequests {
 		t.Fatalf("expected 429 for thinking-only upstream output, got %d body=%s", rec.Code, rec.Body.String())
 	}
@@ -432,7 +474,7 @@ func TestHandleResponsesNonStreamPromotesThinkingToolCallsWhenTextEmpty(t *testi
 		)),
 	}
 
-	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", true, false, []string{"read_file"}, nil, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-v4-pro", "prompt", 0, true, false, []string{"read_file"}, nil, promptcompat.DefaultToolChoicePolicy(), "")
 	if rec.Code != http.StatusOK {
 		t.Fatalf("expected 200 for thinking tool calls, got %d body=%s", rec.Code, rec.Body.String())
 	}
@@ -462,7 +504,7 @@ func TestHandleResponsesNonStreamPromotesHiddenThinkingDSMLToolCallsWhenTextEmpt
 		Mode:    promptcompat.ToolChoiceRequired,
 		Allowed: map[string]struct{}{"read_file": {}},
 	}
-	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_hidden", "deepseek-v4-pro", "prompt", false, false, []string{"read_file"}, nil, policy, "")
+	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_hidden", "deepseek-v4-pro", "prompt", 0, false, false, []string{"read_file"}, nil, policy, "")
 	if rec.Code != http.StatusOK {
 		t.Fatalf("expected 200 for hidden thinking tool calls, got %d body=%s", rec.Code, rec.Body.String())
 	}
@@ -509,7 +551,7 @@ func TestHandleResponsesStreamCoercesSchemaDeclaredStringArguments(t *testing.T)
 		Body: io.NopCloser(strings.NewReader(streamBody)),
 	}
 
-	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_string_protect", "deepseek-v4-flash", "prompt", false, false, []string{"Write"}, toolsRaw, promptcompat.DefaultToolChoicePolicy(), "")
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_string_protect", "deepseek-v4-flash", "prompt", 0, false, false, []string{"Write"}, toolsRaw, promptcompat.DefaultToolChoicePolicy(), "")
 
 	payload, ok := extractSSEEventPayload(rec.Body.String(), "response.function_call_arguments.done")
 	if !ok {
@@ -70,7 +70,6 @@ function finalizeThinkingParts(parts, thinkingEnabled, newType) {
   }
   if (!thinkingEnabled) {
     finalParts = dropThinkingParts(finalParts);
-    finalType = 'text';
   }
   return { parts: finalParts, newType: finalType };
 }
@@ -213,6 +212,12 @@ function parseChunkForContent(chunk, thinkingEnabled, currentType, stripReferenc
     }
   }
 
+  if (pathValue === 'response/content') {
+    newType = 'text';
+  } else if (pathValue === 'response/thinking_content' && (!thinkingEnabled || newType !== 'text')) {
+    newType = 'thinking';
+  }
+
   let partType = 'text';
   if (pathValue === 'response/thinking_content') {
     if (!thinkingEnabled) {
@@ -226,8 +231,8 @@ function parseChunkForContent(chunk, thinkingEnabled, currentType, stripReferenc
     partType = 'text';
   } else if (pathValue.includes('response/fragments') && pathValue.includes('/content')) {
     partType = newType;
-  } else if (!pathValue && thinkingEnabled) {
-    partType = newType;
+  } else if (!pathValue) {
+    partType = newType || 'text';
   }
 
   const val = chunk.v;
@@ -308,6 +313,10 @@ function parseChunkForContent(chunk, thinkingEnabled, currentType, stripReferenc
   }
 
   if (val && typeof val === 'object') {
+    const directContent = asContentString(val, stripReferenceMarkers);
+    if (directContent) {
+      parts.push({ text: directContent, type: partType });
+    }
     const resp = val.response && typeof val.response === 'object' ? val.response : val;
     if (Array.isArray(resp.fragments)) {
       for (const frag of resp.fragments) {
@@ -593,6 +602,12 @@ function asContentString(v, stripReferenceMarkers = true) {
     if (Object.prototype.hasOwnProperty.call(v, 'v')) {
       return asContentString(v.v, stripReferenceMarkers);
     }
+    if (Object.prototype.hasOwnProperty.call(v, 'text')) {
+      return asContentString(v.text, stripReferenceMarkers);
+    }
+    if (Object.prototype.hasOwnProperty.call(v, 'value')) {
+      return asContentString(v.value, stripReferenceMarkers);
+    }
     return '';
   }
   if (v == null) {
@@ -1,5 +1,8 @@
 'use strict';
 
+const MIN_DELTA_FLUSH_CHARS = 160;
+const MAX_DELTA_FLUSH_WAIT_MS = 80;
+
 function createChatCompletionEmitter({ res, sessionID, created, model, isClosed }) {
   let firstChunkSent = false;
 
@@ -34,6 +37,62 @@ function createChatCompletionEmitter({ res, sessionID, created, model, isClosed
   };
 }
 
+function createDeltaCoalescer({ sendDeltaFrame, minFlushChars = MIN_DELTA_FLUSH_CHARS, maxFlushWaitMS = MAX_DELTA_FLUSH_WAIT_MS }) {
+  let pendingField = '';
+  let pendingText = '';
+  let flushTimer = null;
+
+  const clearFlushTimer = () => {
+    if (flushTimer) {
+      clearTimeout(flushTimer);
+      flushTimer = null;
+    }
+  };
+
+  const flush = () => {
+    clearFlushTimer();
+    if (!pendingField || !pendingText) {
+      return;
+    }
+    const delta = { [pendingField]: pendingText };
+    pendingField = '';
+    pendingText = '';
+    sendDeltaFrame(delta);
+  };
+
+  const scheduleFlush = () => {
+    if (flushTimer || maxFlushWaitMS <= 0) {
+      return;
+    }
+    flushTimer = setTimeout(flush, maxFlushWaitMS);
+    if (typeof flushTimer.unref === 'function') {
+      flushTimer.unref();
+    }
+  };
+
+  const append = (field, text) => {
+    if (!field || !text) {
+      return;
+    }
+    if (pendingField && pendingField !== field) {
+      flush();
+    }
+    pendingField = field;
+    pendingText += text;
+    if ([...pendingText].length >= minFlushChars) {
+      flush();
+      return;
+    }
+    scheduleFlush();
+  };
+
+  return {
+    append,
+    flush,
+  };
+}
+
 module.exports = {
   createChatCompletionEmitter,
+  createDeltaCoalescer,
 };
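The size-threshold half of this coalescer translates directly to Go. The sketch below (hypothetical names; the diff's 80 ms timer fallback is omitted so the example stays deterministic) accumulates same-field deltas until they reach a minimum number of code points, then flushes them as one frame; note the JS version counts code points via `[...pendingText].length`, mirrored here with `utf8.RuneCountInString`:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// coalescer buffers same-field deltas and flushes once a rune threshold
// is reached or the field changes.
type coalescer struct {
	send          func(field, text string)
	minFlushChars int
	field         string
	pending       string
}

func (c *coalescer) flush() {
	if c.field == "" || c.pending == "" {
		return
	}
	field, text := c.field, c.pending
	c.field, c.pending = "", ""
	c.send(field, text)
}

func (c *coalescer) append(field, text string) {
	if field == "" || text == "" {
		return
	}
	if c.field != "" && c.field != field { // field switch flushes first
		c.flush()
	}
	c.field = field
	c.pending += text
	if utf8.RuneCountInString(c.pending) >= c.minFlushChars {
		c.flush()
	}
}

func main() {
	var frames []string
	c := &coalescer{send: func(field, text string) {
		frames = append(frames, field+":"+text)
	}, minFlushChars: 8}
	for i := 0; i < 10; i++ {
		c.append("content", "字") // ten one-rune deltas
	}
	c.flush() // drain the two-rune tail
	fmt.Println(len(frames), "frames") // 2 frames
}
```

Counting runes rather than bytes matters for the CJK deltas exercised by the coalescing test: ten one-character deltas collapse into two frames instead of ten.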
@@ -20,7 +20,7 @@ const {
   boolDefaultTrue,
   resetStreamToolCallState,
 } = require('./toolcall_policy');
-const { createChatCompletionEmitter } = require('./stream_emitter');
+const { createChatCompletionEmitter, createDeltaCoalescer } = require('./stream_emitter');
 const {
   asString,
   isAbortError,
@@ -191,6 +191,7 @@ async function handleVercelStream(req, res, rawBody, payload) {
     model,
     isClosed: () => clientClosed,
   });
+  const deltaCoalescer = createDeltaCoalescer({ sendDeltaFrame });
 
   const finish = async (reason, options = {}) => {
     if (ended) {
@@ -201,6 +202,7 @@ async function handleVercelStream(req, res, rawBody, payload) {
       await releaseLease();
       return true;
     }
+    deltaCoalescer.flush();
     const detected = parseStandaloneToolCalls(outputText, toolNames);
     if (detected.length > 0 && !toolCallsDoneEmitted) {
       toolCallsEmitted = true;
@@ -210,6 +212,7 @@ async function handleVercelStream(req, res, rawBody, payload) {
       const tailEvents = flushToolSieve(toolSieveState, toolNames);
       for (const evt of tailEvents) {
         if (evt.type === 'tool_calls' && Array.isArray(evt.calls) && evt.calls.length > 0) {
+          deltaCoalescer.flush();
           toolCallsEmitted = true;
           toolCallsDoneEmitted = true;
           sendDeltaFrame({ tool_calls: formatOpenAIStreamToolCalls(evt.calls, streamToolCallIDs, payload.tools) });
@@ -217,9 +220,10 @@ async function handleVercelStream(req, res, rawBody, payload) {
           continue;
         }
         if (evt.text) {
-          sendDeltaFrame({ content: evt.text });
+          deltaCoalescer.append('content', evt.text);
         }
       }
+      deltaCoalescer.flush();
     }
     if (detected.length > 0 || toolCallsEmitted) {
       reason = 'tool_calls';
@@ -327,7 +331,7 @@ async function handleVercelStream(req, res, rawBody, payload) {
           continue;
         }
         thinkingText += trimmed;
-        sendDeltaFrame({ reasoning_content: trimmed });
+        deltaCoalescer.append('reasoning_content', trimmed);
       }
     } else {
       const trimmed = trimContinuationOverlap(outputText, p.text);
@@ -339,7 +343,7 @@ async function handleVercelStream(req, res, rawBody, payload) {
       }
       outputText += trimmed;
       if (!toolSieveEnabled) {
-        sendDeltaFrame({ content: trimmed });
+        deltaCoalescer.append('content', trimmed);
         continue;
       }
       const events = processToolSieveChunk(toolSieveState, trimmed, toolNames);
@@ -352,19 +356,21 @@ async function handleVercelStream(req, res, rawBody, payload) {
         const formatted = formatIncrementalToolCallDeltas(filtered, streamToolCallIDs);
         if (formatted.length > 0) {
           toolCallsEmitted = true;
-          sendDeltaFrame({ tool_calls: formatted });
+          deltaCoalescer.flush();
+          sendDeltaFrame({ tool_calls: formatted });
         }
         continue;
       }
       if (evt.type === 'tool_calls') {
         toolCallsEmitted = true;
         toolCallsDoneEmitted = true;
+        deltaCoalescer.flush();
         sendDeltaFrame({ tool_calls: formatOpenAIStreamToolCalls(evt.calls, streamToolCallIDs, payload.tools) });
         resetStreamToolCallState(streamToolCallIDs, streamToolNames);
         continue;
       }
       if (evt.text) {
-        sendDeltaFrame({ content: evt.text });
+        deltaCoalescer.append('content', evt.text);
       }
     }
   }
@@ -510,32 +516,51 @@ function observeContinueState(state, chunk) {
   if (topID > 0) {
     state.responseMessageID = topID;
   }
-  if (chunk.p === 'response/status') {
-    setContinueStatus(state, asString(chunk.v));
-  }
+  observeContinueDirectPatch(state, chunk.p, chunk.v);
   if (chunk.p === 'response') {
     observeContinueBatchPatches(state, 'response', chunk.v);
   } else {
     observeContinueBatchPatches(state, '', chunk.v);
   }
   const response = chunk.v && typeof chunk.v === 'object' ? chunk.v.response : null;
-  if (response && typeof response === 'object') {
-    const id = numberValue(response.message_id);
-    if (id > 0) {
-      state.responseMessageID = id;
-    }
-    setContinueStatus(state, asString(response.status));
-    if (response.auto_continue === true) {
-      state.lastStatus = 'AUTO_CONTINUE';
-    }
-  }
+  observeContinueResponseObject(state, response);
   const messageResponse = chunk.message && typeof chunk.message === 'object' && chunk.message.response;
   if (messageResponse && typeof messageResponse === 'object') {
-    const id = numberValue(messageResponse.message_id);
-    if (id > 0) {
-      state.responseMessageID = id;
-    }
-    setContinueStatus(state, asString(messageResponse.status));
+    observeContinueResponseObject(state, messageResponse);
   }
 
+function observeContinueDirectPatch(state, path, value) {
+  if (!state) {
+    return;
+  }
+  switch (asString(path).trim().replace(/^\/+|\/+$/g, '')) {
+  case 'response/status':
+  case 'status':
+  case 'response/quasi_status':
+  case 'quasi_status':
+    setContinueStatus(state, asString(value));
+    break;
+  case 'response/auto_continue':
+  case 'auto_continue':
+    if (value === true) {
+      state.lastStatus = 'AUTO_CONTINUE';
+    }
+    break;
+  default:
+    break;
+  }
+}
+
+function observeContinueResponseObject(state, response) {
+  if (!state || !response || typeof response !== 'object') {
+    return;
+  }
+  const id = numberValue(response.message_id);
+  if (id > 0) {
+    state.responseMessageID = id;
+  }
+  setContinueStatus(state, asString(response.status));
+  if (response.auto_continue === true) {
+    state.lastStatus = 'AUTO_CONTINUE';
+  }
+}
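The refactor above routes both the prefixed (`response/status`) and bare (`status`) patch-path spellings through one normalization step. A minimal Go sketch of that dispatch idea (hypothetical names; the JS trims slashes with the regex `/^\/+|\/+$/g` after `.trim()`, mirrored here with `strings.Trim`):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizePatchPath strips surrounding whitespace and slashes so that
// "/response/status/" and "response/status" compare equal.
func normalizePatchPath(p string) string {
	return strings.Trim(strings.TrimSpace(p), "/")
}

// classifyPatch routes prefixed and bare path spellings to one handler kind.
func classifyPatch(path string) string {
	switch normalizePatchPath(path) {
	case "response/status", "status", "response/quasi_status", "quasi_status":
		return "status"
	case "response/auto_continue", "auto_continue":
		return "auto_continue"
	default:
		return ""
	}
}

func main() {
	fmt.Println(classifyPatch("/response/status/")) // status
	fmt.Println(classifyPatch("auto_continue"))     // auto_continue
	fmt.Println(classifyPatch("response/content") == "") // true
}
```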
@@ -563,6 +588,12 @@ function observeContinueBatchPatches(state, parentPath, raw) {
     case 'quasi_status':
       setContinueStatus(state, asString(patch.v));
       break;
+    case 'response/auto_continue':
+    case 'auto_continue':
+      if (patch.v === true) {
+        state.lastStatus = 'AUTO_CONTINUE';
+      }
+      break;
     default:
       break;
     }
@@ -248,6 +248,9 @@ function replaceDSMLToolMarkupOutsideIgnored(text) {
        if (tag) {
            if (tag.dsmlLike) {
                out += `<${tag.closing ? '/' : ''}${tag.name}${raw.slice(tag.nameEnd, tag.end + 1)}`;
                if (raw[tag.end] !== '>') {
                    out += '>';
                }
            } else {
                out += raw.slice(tag.start, tag.end + 1);
            }
@@ -424,31 +427,42 @@ function scanToolMarkupTagAt(text, start) {
    }
    const lower = raw.toLowerCase();
    let i = start + 1;
    while (i < raw.length && raw[i] === '<') {
        i += 1;
    }
    const closing = raw[i] === '/';
    if (closing) {
        i += 1;
    }
    let dsmlLike = false;
    if (i < raw.length && isToolMarkupPipe(raw[i])) {
        dsmlLike = true;
        i += 1;
    }
    if (lower.startsWith('dsml', i)) {
        dsmlLike = true;
        i += 'dsml'.length;
        while (i < raw.length && isToolMarkupSeparator(raw[i])) {
            i += 1;
        }
    }
    const prefix = consumeToolMarkupNamePrefix(raw, lower, i);
    i = prefix.next;
    const dsmlLike = prefix.dsmlLike;
    const { name, len } = matchToolMarkupName(lower, i);
    if (!name) {
        return null;
    }
    const nameEnd = i + len;
    const originalNameEnd = i + len;
    let nameEnd = originalNameEnd;
    while (nameEnd < raw.length && isToolMarkupPipe(raw[nameEnd])) {
        nameEnd += 1;
    }
    const hasTrailingPipe = nameEnd > originalNameEnd;
    if (!hasXmlTagBoundary(raw, nameEnd)) {
        return null;
    }
    const end = findXmlTagEnd(raw, nameEnd);
    let end = findXmlTagEnd(raw, nameEnd);
    if (end < 0) {
        if (!hasTrailingPipe) {
            return null;
        }
        end = nameEnd - 1;
    }
    if (hasTrailingPipe) {
        const nextLT = raw.indexOf('<', nameEnd);
        if (nextLT >= 0 && end >= nextLT) {
            end = nameEnd - 1;
        }
    }
    if (end < 0) {
        return null;
    }
@@ -520,37 +534,94 @@ function findPartialToolMarkupStart(text) {
    if (lastLT < 0) {
        return -1;
    }
    const tail = raw.slice(lastLT);
    const start = includeDuplicateLeadingLessThan(raw, lastLT);
    const tail = raw.slice(start);
    if (tail.includes('>')) {
        return -1;
    }
    const lowerTail = tail.toLowerCase();
    const candidates = [
        '<tool_calls', '<invoke', '<parameter',
        '<|tool_calls', '<|invoke', '<|parameter',
        '<|tool_calls', '<|invoke', '<|parameter',
        '<|dsml|tool_calls', '<|dsml|invoke', '<|dsml|parameter',
        '<|dsml|tool_calls', '<|dsml|invoke', '<|dsml|parameter',
        '<dsmltool_calls', '<dsmlinvoke', '<dsmlparameter',
        '<dsml tool_calls', '<dsml invoke', '<dsml parameter',
        '<dsml|tool_calls', '<dsml|invoke', '<dsml|parameter',
        '<|dsmltool_calls', '<|dsmlinvoke', '<|dsmlparameter',
        '<|dsml tool_calls', '<|dsml invoke', '<|dsml parameter',
    ];
    for (const candidate of candidates) {
        if (candidate.startsWith(lowerTail)) {
            return lastLT;
        }
    return isPartialToolMarkupTagPrefix(tail) ? start : -1;
}

function includeDuplicateLeadingLessThan(text, idx) {
    let out = idx;
    while (out > 0 && text[out - 1] === '<') {
        out -= 1;
    }
    return -1;
    return out;
}

function isToolMarkupPipe(ch) {
    return ch === '|' || ch === '|';
}

function isToolMarkupSeparator(ch) {
    return ch === ' ' || ch === '\t' || ch === '\r' || ch === '\n' || isToolMarkupPipe(ch);
function isPartialToolMarkupTagPrefix(text) {
    const raw = toStringSafe(text);
    if (!raw || raw[0] !== '<' || raw.includes('>')) {
        return false;
    }
    const lower = raw.toLowerCase();
    let i = 1;
    while (i < raw.length && raw[i] === '<') {
        i += 1;
    }
    if (i >= raw.length) {
        return true;
    }
    if (raw[i] === '/') {
        i += 1;
    }
    while (i <= raw.length) {
        if (i === raw.length) {
            return true;
        }
        if (hasToolMarkupNamePrefix(lower.slice(i))) {
            return true;
        }
        if ('dsml'.startsWith(lower.slice(i))) {
            return true;
        }
        const next = consumeToolMarkupNamePrefixOnce(raw, lower, i);
        if (!next.ok) {
            return false;
        }
        i = next.next;
    }
    return false;
}

function consumeToolMarkupNamePrefix(raw, lower, idx) {
    let next = idx;
    let dsmlLike = false;
    while (true) {
        const consumed = consumeToolMarkupNamePrefixOnce(raw, lower, next);
        if (!consumed.ok) {
            return { next, dsmlLike };
        }
        next = consumed.next;
        dsmlLike = true;
    }
}

function consumeToolMarkupNamePrefixOnce(raw, lower, idx) {
    if (idx < raw.length && isToolMarkupPipe(raw[idx])) {
        return { next: idx + 1, ok: true };
    }
    if (idx < raw.length && [' ', '\t', '\r', '\n'].includes(raw[idx])) {
        return { next: idx + 1, ok: true };
    }
    if (lower.startsWith('dsml', idx)) {
        return { next: idx + 'dsml'.length, ok: true };
    }
    return { next: idx, ok: false };
}

function hasToolMarkupNamePrefix(lowerTail) {
    for (const name of TOOL_MARKUP_NAMES) {
        if (lowerTail.startsWith(name) || name.startsWith(lowerTail)) {
            return true;
        }
    }
    return false;
}

function matchToolMarkupName(lower, start) {
@@ -1,55 +0,0 @@
'use strict';

const XML_TOOL_SEGMENT_TAGS = [
    '<|dsml|tool_calls>', '<|dsml|tool_calls\n', '<|dsml|tool_calls ',
    '<|dsml|tool_calls>', '<|dsml|tool_calls\n', '<|dsml|tool_calls ',
    '<|dsml|invoke ', '<|dsml|invoke\n', '<|dsml|invoke\t', '<|dsml|invoke\r',
    '<|dsmltool_calls>', '<|dsmltool_calls\n', '<|dsmltool_calls ',
    '<|dsmlinvoke ', '<|dsmlinvoke\n', '<|dsmlinvoke\t', '<|dsmlinvoke\r',
    '<|dsml tool_calls>', '<|dsml tool_calls\n', '<|dsml tool_calls ',
    '<|dsml invoke ', '<|dsml invoke\n', '<|dsml invoke\t', '<|dsml invoke\r',
    '<dsml|tool_calls>', '<dsml|tool_calls\n', '<dsml|tool_calls ',
    '<dsml|invoke ', '<dsml|invoke\n', '<dsml|invoke\t', '<dsml|invoke\r',
    '<dsmltool_calls>', '<dsmltool_calls\n', '<dsmltool_calls ',
    '<dsmlinvoke ', '<dsmlinvoke\n', '<dsmlinvoke\t', '<dsmlinvoke\r',
    '<dsml tool_calls>', '<dsml tool_calls\n', '<dsml tool_calls ',
    '<dsml invoke ', '<dsml invoke\n', '<dsml invoke\t', '<dsml invoke\r',
    '<|tool_calls>', '<|tool_calls\n', '<|tool_calls ',
    '<|invoke ', '<|invoke\n', '<|invoke\t', '<|invoke\r',
    '<|tool_calls>', '<|tool_calls\n', '<|tool_calls ',
    '<|invoke ', '<|invoke\n', '<|invoke\t', '<|invoke\r',
    '<tool_calls>', '<tool_calls\n', '<tool_calls ',
    '<invoke ', '<invoke\n', '<invoke\t', '<invoke\r',
];

const XML_TOOL_OPENING_TAGS = [
    '<|dsml|tool_calls',
    '<|dsml|tool_calls',
    '<|dsmltool_calls',
    '<|dsml tool_calls',
    '<dsml|tool_calls',
    '<dsmltool_calls',
    '<dsml tool_calls',
    '<|tool_calls',
    '<|tool_calls',
    '<tool_calls',
];

const XML_TOOL_CLOSING_TAGS = [
    '</|dsml|tool_calls>',
    '</|dsml|tool_calls>',
    '</|dsmltool_calls>',
    '</|dsml tool_calls>',
    '</dsml|tool_calls>',
    '</dsmltool_calls>',
    '</dsml tool_calls>',
    '</|tool_calls>',
    '</|tool_calls>',
    '</tool_calls>',
];

module.exports = {
    XML_TOOL_SEGMENT_TAGS,
    XML_TOOL_OPENING_TAGS,
    XML_TOOL_CLOSING_TAGS,
};
@@ -88,6 +88,58 @@ func TestBuildOpenAIFinalPrompt_VercelPreparePathKeepsFinalAnswerInstruction(t *
	}
}

func TestBuildOpenAIFinalPromptReadLikeToolIncludesCacheGuard(t *testing.T) {
	messages := []any{
		map[string]any{"role": "user", "content": "请读取文件"},
	}
	tools := []any{
		map[string]any{
			"type": "function",
			"function": map[string]any{
				"name":        "read_file",
				"description": "Read a file",
				"parameters": map[string]any{
					"type": "object",
				},
			},
		},
	}

	finalPrompt, _ := buildOpenAIFinalPrompt(messages, tools, "", false)
	if !strings.Contains(finalPrompt, "Read-tool cache guard") {
		t.Fatalf("read-like tool prompt missing cache guard: %q", finalPrompt)
	}
	if !strings.Contains(finalPrompt, "provides no file body") {
		t.Fatalf("read-like tool prompt missing no-body handling: %q", finalPrompt)
	}
	if !strings.Contains(finalPrompt, "Do not repeatedly call the same read request") {
		t.Fatalf("read-like tool prompt missing loop guard: %q", finalPrompt)
	}
}

func TestBuildOpenAIFinalPromptNonReadToolOmitsCacheGuard(t *testing.T) {
	messages := []any{
		map[string]any{"role": "user", "content": "搜索一下"},
	}
	tools := []any{
		map[string]any{
			"type": "function",
			"function": map[string]any{
				"name":        "search",
				"description": "Search docs",
				"parameters": map[string]any{
					"type": "object",
				},
			},
		},
	}

	finalPrompt, _ := buildOpenAIFinalPrompt(messages, tools, "", false)
	if strings.Contains(finalPrompt, "Read-tool cache guard") {
		t.Fatalf("non-read tool prompt should not include read cache guard: %q", finalPrompt)
	}
}

func TestBuildOpenAIFinalPromptWithThinkingKeepsPromptUnchanged(t *testing.T) {
	messages := []any{
		map[string]any{"role": "user", "content": "继续回答上一个问题"},
@@ -39,20 +39,22 @@ func NormalizeOpenAIChatRequest(store ConfigReader, req map[string]any, traceID
	refFileIDs := CollectOpenAIRefFileIDs(req)

	return StandardRequest{
		Surface:        "openai_chat",
		RequestedModel: strings.TrimSpace(model),
		ResolvedModel:  resolvedModel,
		ResponseModel:  responseModel,
		Messages:       messagesRaw,
		ToolsRaw:       req["tools"],
		FinalPrompt:    finalPrompt,
		ToolNames:      toolNames,
		ToolChoice:     toolPolicy,
		Stream:         util.ToBool(req["stream"]),
		Thinking:       thinkingEnabled,
		Search:         searchEnabled,
		RefFileIDs:     refFileIDs,
		PassThrough:    passThrough,
		Surface:         "openai_chat",
		RequestedModel:  strings.TrimSpace(model),
		ResolvedModel:   resolvedModel,
		ResponseModel:   responseModel,
		Messages:        messagesRaw,
		PromptTokenText: finalPrompt,
		ToolsRaw:        req["tools"],
		FinalPrompt:     finalPrompt,
		ToolNames:       toolNames,
		ToolChoice:      toolPolicy,
		Stream:          util.ToBool(req["stream"]),
		Thinking:        thinkingEnabled,
		Search:          searchEnabled,
		RefFileIDs:      refFileIDs,
		RefFileTokens:   estimateInlineFileTokens(req),
		PassThrough:     passThrough,
	}, nil
}

@@ -99,20 +101,22 @@ func NormalizeOpenAIResponsesRequest(store ConfigReader, req map[string]any, tra
	refFileIDs := CollectOpenAIRefFileIDs(req)

	return StandardRequest{
		Surface:        "openai_responses",
		RequestedModel: model,
		ResolvedModel:  resolvedModel,
		ResponseModel:  model,
		Messages:       messagesRaw,
		ToolsRaw:       req["tools"],
		FinalPrompt:    finalPrompt,
		ToolNames:      toolNames,
		ToolChoice:     toolPolicy,
		Stream:         util.ToBool(req["stream"]),
		Thinking:       thinkingEnabled,
		Search:         searchEnabled,
		RefFileIDs:     refFileIDs,
		PassThrough:    passThrough,
		Surface:         "openai_responses",
		RequestedModel:  model,
		ResolvedModel:   resolvedModel,
		ResponseModel:   model,
		Messages:        messagesRaw,
		PromptTokenText: finalPrompt,
		ToolsRaw:        req["tools"],
		FinalPrompt:     finalPrompt,
		ToolNames:       toolNames,
		ToolChoice:      toolPolicy,
		Stream:          util.ToBool(req["stream"]),
		Thinking:        thinkingEnabled,
		Search:          searchEnabled,
		RefFileIDs:      refFileIDs,
		RefFileTokens:   estimateInlineFileTokens(req),
		PassThrough:     passThrough,
	}, nil
}

@@ -356,3 +360,30 @@ func namesToSet(names []string) map[string]struct{} {
	}
	return out
}

// estimateInlineFileTokens extracts the byte count stashed by PreprocessInlineFileInputs
// and converts it to a conservative token estimate. Inline files are typically images or
// documents that the upstream model will process; we use bytes/3 (rather than bytes/4)
// as a slightly pessimistic approximation so the returned context token count stays
// safely above the real value.
func estimateInlineFileTokens(req map[string]any) int {
	raw, ok := req["_inline_file_bytes"]
	if !ok {
		return 0
	}
	var bytes int
	switch v := raw.(type) {
	case int:
		bytes = v
	case int64:
		bytes = int(v)
	case float64:
		bytes = int(v)
	default:
		return 0
	}
	if bytes <= 0 {
		return 0
	}
	return bytes / 3
}
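The `estimateInlineFileTokens` helper above turns the stashed `_inline_file_bytes` count into a token estimate. A minimal standalone sketch (the `estimate` function here is a hypothetical copy of the diff's logic, renamed so it runs outside the package) shows the bytes/3 heuristic and the zero-value fallbacks:

```go
package main

import "fmt"

// estimate mirrors estimateInlineFileTokens from the diff: read the stashed
// byte count (which may arrive as int, int64, or float64 after JSON decoding)
// and divide by 3 for a deliberately pessimistic token estimate.
func estimate(req map[string]any) int {
	raw, ok := req["_inline_file_bytes"]
	if !ok {
		return 0
	}
	var n int
	switch v := raw.(type) {
	case int:
		n = v
	case int64:
		n = int(v)
	case float64:
		n = int(v)
	default:
		return 0
	}
	if n <= 0 {
		return 0
	}
	return n / 3
}

func main() {
	fmt.Println(estimate(map[string]any{"_inline_file_bytes": 30000})) // 10000
	fmt.Println(estimate(map[string]any{}))                            // 0: no inline files stashed
}
```

The `float64` case matters because a request round-tripped through `encoding/json` into `map[string]any` stores all numbers as `float64`.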
@@ -9,6 +9,7 @@ type StandardRequest struct {
	ResponseModel           string
	Messages                []any
	HistoryText             string
	PromptTokenText         string
	CurrentInputFileApplied bool
	ToolsRaw                any
	FinalPrompt             string
@@ -18,6 +19,7 @@ type StandardRequest struct {
	Thinking    bool
	Search      bool
	RefFileIDs  []string
	RefFileTokens int
	PassThrough map[string]any
}

@@ -4,6 +4,7 @@ import (
	"encoding/json"
	"fmt"
	"strings"
	"unicode"

	"ds2api/internal/toolcall"
)
@@ -46,6 +47,9 @@ func injectToolPrompt(messages []map[string]any, tools []any, policy ToolChoiceP
		return messages, names
	}
	toolPrompt := "You have access to these tools:\n\n" + strings.Join(toolSchemas, "\n\n") + "\n\n" + toolcall.BuildToolCallInstructions(names)
	if hasReadLikeTool(names) {
		toolPrompt += "\n\nRead-tool cache guard: If a Read/read_file-style tool result says the file is unchanged, already available in history, should be referenced from previous context, or otherwise provides no file body, treat that result as missing content. Do not repeatedly call the same read request for that missing body. Request a full-content read if the tool supports it, or tell the user that the file contents need to be provided again."
	}
	if policy.Mode == ToolChoiceRequired {
		toolPrompt += "\n7) For this response, you MUST call at least one tool from the allowed list."
	}
@@ -64,3 +68,23 @@ func injectToolPrompt(messages []map[string]any, tools []any, policy ToolChoiceP
	messages = append([]map[string]any{{"role": "system", "content": toolPrompt}}, messages...)
	return messages, names
}

func hasReadLikeTool(names []string) bool {
	for _, name := range names {
		switch normalizeToolNameForGuard(name) {
		case "read", "readfile":
			return true
		}
	}
	return false
}

func normalizeToolNameForGuard(name string) string {
	var b strings.Builder
	for _, r := range strings.ToLower(strings.TrimSpace(name)) {
		if unicode.IsLetter(r) || unicode.IsDigit(r) {
			b.WriteRune(r)
		}
	}
	return b.String()
}
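The guard above keys off a normalized tool name, so naming-convention differences between clients don't matter. A small sketch (the `normalize` function is a standalone copy of the diff's `normalizeToolNameForGuard`) shows how different spellings collapse to the same key:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// normalize mirrors normalizeToolNameForGuard from the diff: lowercase the
// name and keep only letters and digits, so "Read_File", "read-file", and
// " readFile " all compare equal to "readfile".
func normalize(name string) string {
	var b strings.Builder
	for _, r := range strings.ToLower(strings.TrimSpace(name)) {
		if unicode.IsLetter(r) || unicode.IsDigit(r) {
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	for _, n := range []string{"Read_File", "read-file", " readFile "} {
		fmt.Println(normalize(n)) // readfile (all three)
	}
}
```

This is why `hasReadLikeTool` only needs the two canonical keys `"read"` and `"readfile"` in its switch.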
@@ -98,6 +98,14 @@ func NewApp() (*App, error) {
	r.Get("/v1/responses/{response_id}", responsesHandler.GetResponseByID)
	r.Post("/v1/files", filesHandler.UploadFile)
	r.Post("/v1/embeddings", embeddingsHandler.Embeddings)
	// Root OpenAI aliases support clients configured with the bare DS2API service URL.
	r.Get("/models", modelsHandler.ListModels)
	r.Get("/models/{model_id}", modelsHandler.GetModel)
	r.Post("/chat/completions", chatHandler.ChatCompletions)
	r.Post("/responses", responsesHandler.Responses)
	r.Get("/responses/{response_id}", responsesHandler.GetResponseByID)
	r.Post("/files", filesHandler.UploadFile)
	r.Post("/embeddings", embeddingsHandler.Embeddings)
	claude.RegisterRoutes(r, claudeHandler)
	gemini.RegisterRoutes(r, geminiHandler)
	r.Route("/admin", func(ar chi.Router) {

@@ -37,6 +37,13 @@ func TestAPIRoutesRemainRegistered(t *testing.T) {
		"GET /v1/responses/{response_id}",
		"POST /v1/files",
		"POST /v1/embeddings",
		"GET /models",
		"GET /models/{model_id}",
		"POST /chat/completions",
		"POST /responses",
		"GET /responses/{response_id}",
		"POST /files",
		"POST /embeddings",
		"GET /anthropic/v1/models",
		"POST /anthropic/v1/messages",
		"POST /anthropic/v1/messages/count_tokens",
@@ -41,6 +41,15 @@ func TestCollectStreamTextOnly(t *testing.T) {
	}
}

func TestCollectStreamHandlesLongSingleSSELine(t *testing.T) {
	payload := strings.Repeat("x", 2*1024*1024+4096)
	resp := makeHTTPResponse(makeLargeContentSSEBody(t, payload))
	result := CollectStream(resp, false, true)
	if result.Text != payload {
		t.Fatalf("long SSE line payload mismatch: got len=%d want len=%d", len(result.Text), len(payload))
	}
}

func TestCollectStreamThinkingAndText(t *testing.T) {
	resp := makeHTTPResponse(
		"data: {\"p\":\"response/thinking_content\",\"v\":\"Thinking...\"}\n" +
@@ -92,6 +92,7 @@ func ParseSSEChunkForContentDetailed(chunk map[string]any, thinkingEnabled bool,
	}
	newType := currentFragmentType
	parts := make([]ContentPart, 0, 8)
	updateTypeFromExplicitPath(path, thinkingEnabled, &newType)
	collectDirectFragments(path, chunk, v, &newType, &parts)
	updateTypeFromNestedResponse(path, v, &newType)
	partType := resolvePartType(path, thinkingEnabled, newType)
@@ -107,11 +108,24 @@
	detectionThinkingParts := selectThinkingParts(parts)
	if !thinkingEnabled {
		parts = dropThinkingParts(parts)
		newType = "text"
	}
	return parts, detectionThinkingParts, false, newType
}

func updateTypeFromExplicitPath(path string, thinkingEnabled bool, newType *string) {
	if newType == nil {
		return
	}
	switch path {
	case "response/content":
		*newType = "text"
	case "response/thinking_content":
		if !thinkingEnabled || *newType != "text" {
			*newType = "thinking"
		}
	}
}

func selectThinkingParts(parts []ContentPart) []ContentPart {
	if len(parts) == 0 {
		return nil
@@ -206,8 +220,11 @@ func resolvePartType(path string, thinkingEnabled bool, newType string) string {
		return "text"
	case strings.Contains(path, "response/fragments") && strings.Contains(path, "/content"):
		return newType
	case path == "" && thinkingEnabled:
		return newType
	case path == "":
		if newType != "" {
			return newType
		}
		return "text"
	default:
		return "text"
	}
@@ -244,11 +261,29 @@ func appendChunkValueContent(v any, partType string, newType *string, parts *[]C
		}
		*parts = append(*parts, pp...)
	case map[string]any:
		if appendObjectContentByPath(path, val, partType, parts) {
			return false
		}
		appendWrappedFragments(val, partType, newType, parts)
	}
	return false
}

func appendObjectContentByPath(path string, val map[string]any, partType string, parts *[]ContentPart) bool {
	if path != "response/content" && path != "response/thinking_content" && path != "" {
		return false
	}
	text, _ := val["text"].(string)
	if text == "" {
		text, _ = val["content"].(string)
	}
	if text == "" {
		return false
	}
	appendContentPart(parts, text, partType)
	return true
}

func appendWrappedFragments(val map[string]any, partType string, newType *string, parts *[]ContentPart) {
	resp := val
	if wrapped, ok := val["response"].(map[string]any); ok {
@@ -88,6 +88,71 @@ func TestParseSSEChunkForContentAfterAppendUsesUpdatedType(t *testing.T) {
	}
}

func TestParseSSEChunkForContentThinkingDisabledKeepsHiddenFragmentState(t *testing.T) {
	chunk1 := map[string]any{
		"p": "response/fragments",
		"o": "APPEND",
		"v": []any{
			map[string]any{"type": "THINK", "content": "我们"},
		},
	}
	parts1, finished1, nextType1 := ParseSSEChunkForContent(chunk1, false, "text")
	if finished1 {
		t.Fatal("expected first chunk unfinished")
	}
	if nextType1 != "thinking" {
		t.Fatalf("expected hidden THINK fragment to keep next type thinking, got %q", nextType1)
	}
	if len(parts1) != 0 {
		t.Fatalf("expected hidden thinking to be dropped, got %#v", parts1)
	}

	chunk2 := map[string]any{
		"p": "response/fragments/-1/content",
		"v": "被",
	}
	parts2, finished2, nextType2 := ParseSSEChunkForContent(chunk2, false, nextType1)
	if finished2 {
		t.Fatal("expected second chunk unfinished")
	}
	if nextType2 != "thinking" {
		t.Fatalf("expected hidden continuation to keep next type thinking, got %q", nextType2)
	}
	if len(parts2) != 0 {
		t.Fatalf("expected hidden continuation to be dropped, got %#v", parts2)
	}

	chunk3 := map[string]any{"v": "要求"}
	parts3, finished3, nextType3 := ParseSSEChunkForContent(chunk3, false, nextType2)
	if finished3 {
		t.Fatal("expected third chunk unfinished")
	}
	if nextType3 != "thinking" {
		t.Fatalf("expected pathless hidden continuation to keep next type thinking, got %q", nextType3)
	}
	if len(parts3) != 0 {
		t.Fatalf("expected pathless hidden continuation to be dropped, got %#v", parts3)
	}

	chunk4 := map[string]any{
		"p": "response/fragments",
		"o": "APPEND",
		"v": []any{
			map[string]any{"type": "RESPONSE", "content": "答"},
		},
	}
	parts4, finished4, nextType4 := ParseSSEChunkForContent(chunk4, false, nextType3)
	if finished4 {
		t.Fatal("expected fourth chunk unfinished")
	}
	if nextType4 != "text" {
		t.Fatalf("expected RESPONSE fragment to switch next type text, got %q", nextType4)
	}
	if len(parts4) != 1 || parts4[0].Type != "text" || parts4[0].Text != "答" {
		t.Fatalf("expected visible response text, got %#v", parts4)
	}
}

func TestParseSSEChunkForContentAutoTransitionsThinkClose(t *testing.T) {
	chunk := map[string]any{
		"p": "response/thinking_content",
@@ -163,3 +228,44 @@ func TestParseSSEChunkForContentStripsLeakedThinkTagsFromText(t *testing.T) {
		t.Fatalf("expected leaked think tag to be stripped, got %#v", parts[0])
	}
}

func TestParseSSEChunkForContentResponseContentObjectShape(t *testing.T) {
	chunk := map[string]any{
		"p": "response/content",
		"v": map[string]any{"text": "对象内容"},
	}
	parts, finished, _ := ParseSSEChunkForContent(chunk, false, "text")
	if finished {
		t.Fatal("expected unfinished")
	}
	if len(parts) != 1 || parts[0].Text != "对象内容" || parts[0].Type != "text" {
		t.Fatalf("unexpected parts: %#v", parts)
	}
}

func TestParseSSEChunkForThinkingContentObjectShape(t *testing.T) {
	chunk := map[string]any{
		"p": "response/thinking_content",
		"v": map[string]any{"content": "对象思考"},
	}
	parts, finished, _ := ParseSSEChunkForContent(chunk, true, "thinking")
	if finished {
		t.Fatal("expected unfinished")
	}
	if len(parts) != 1 || parts[0].Text != "对象思考" || parts[0].Type != "thinking" {
		t.Fatalf("unexpected parts: %#v", parts)
	}
}

func TestParseSSEChunkForContentObjectShapeWithoutPath(t *testing.T) {
	chunk := map[string]any{
		"v": map[string]any{"text": "无路径对象内容"},
	}
	parts, finished, _ := ParseSSEChunkForContent(chunk, false, "text")
	if finished {
		t.Fatal("expected unfinished")
	}
	if len(parts) != 1 || parts[0].Text != "无路径对象内容" || parts[0].Type != "text" {
		t.Fatalf("unexpected parts: %#v", parts)
	}
}
@@ -9,8 +9,7 @@ import (

const (
	parsedLineBufferSize = 128
	scannerBufferSize    = 64 * 1024
	maxScannerLineSize   = 2 * 1024 * 1024
	lineReaderBufferSize = 64 * 1024
	minFlushChars        = 160
	maxFlushWait         = 80 * time.Millisecond
)
@@ -29,8 +28,8 @@ func StartParsedLinePump(ctx context.Context, body io.Reader, thinkingEnabled bo
		eof  bool
	}
	lineCh := make(chan scanItem, 1)
	stopScanner := make(chan struct{})
	defer close(stopScanner)
	stopReader := make(chan struct{})
	defer close(stopReader)
	go func() {
		sendScanItem := func(item scanItem) bool {
			select {
@@ -38,20 +37,28 @@
				return true
			case <-ctx.Done():
				return false
			case <-stopScanner:
			case <-stopReader:
				return false
			}
		}
		defer close(lineCh)
		scanner := bufio.NewScanner(body)
		scanner.Buffer(make([]byte, 0, scannerBufferSize), maxScannerLineSize)
		for scanner.Scan() {
			line := append([]byte{}, scanner.Bytes()...)
			if !sendScanItem(scanItem{line: line}) {
		reader := bufio.NewReaderSize(body, lineReaderBufferSize)
		for {
			line, err := reader.ReadBytes('\n')
			if len(line) > 0 {
				line = append([]byte{}, line...)
				if !sendScanItem(scanItem{line: line}) {
					return
				}
			}
			if err != nil {
				if err == io.EOF {
					err = nil
				}
				_ = sendScanItem(scanItem{err: err, eof: true})
				return
			}
		}
		_ = sendScanItem(scanItem{err: scanner.Err(), eof: true})
	}()

	ticker := time.NewTicker(maxFlushWait)
@@ -158,11 +158,13 @@ func TestStartParsedLinePumpNonSSELines(t *testing.T) {

func TestStartParsedLinePumpThinkingDisabled(t *testing.T) {
	body := strings.NewReader(
		"data: {\"p\":\"response/thinking_content\",\"v\":\"thought\"}\n" +
			"data: {\"p\":\"response/fragments\",\"o\":\"APPEND\",\"v\":[{\"type\":\"THINK\",\"content\":\"思\"}]}\n" +
			"data: {\"p\":\"response/fragments/-1/content\",\"v\":\"考\"}\n" +
			"data: {\"v\":\"隐藏\"}\n" +
			"data: {\"p\":\"response/fragments\",\"o\":\"APPEND\",\"v\":[{\"type\":\"RESPONSE\",\"content\":\"答\"}]}\n" +
			"data: {\"p\":\"response/content\",\"v\":\"response\"}\n" +
			"data: [DONE]\n",
	)
	// With thinking disabled, hidden thinking fragments must be dropped entirely; only text parts reach the caller
	results, done := StartParsedLinePump(context.Background(), body, false, "text")

	var parts []ContentPart
@@ -171,8 +173,15 @@
	}
	<-done

	if len(parts) < 1 {
		t.Fatalf("expected at least 1 part, got %d", len(parts))
	got := strings.Builder{}
	for _, p := range parts {
		if p.Type != "text" {
			t.Fatalf("expected only text parts with thinking disabled, got %#v", parts)
		}
		got.WriteString(p.Text)
	}
	if got.String() != "答response" {
		t.Fatalf("expected hidden thinking to be dropped, got %q from %#v", got.String(), parts)
	}
}
@@ -2,10 +2,23 @@ package sse

import (
	"context"
	"encoding/json"
	"strings"
	"testing"
)

func makeLargeContentSSEBody(t *testing.T, payload string) string {
	t.Helper()
	line, err := json.Marshal(map[string]any{
		"p": "response/content",
		"v": payload,
	})
	if err != nil {
		t.Fatalf("marshal SSE line failed: %v", err)
	}
	return "data: " + string(line) + "\n" + "data: [DONE]\n"
}

func TestStartParsedLinePumpParsesAndStops(t *testing.T) {
	body := strings.NewReader("data: {\"p\":\"response/content\",\"v\":\"hi\"}\n\ndata: [DONE]\n")
	results, done := StartParsedLinePump(context.Background(), body, false, "text")
@@ -28,3 +41,28 @@
		t.Fatalf("expected last line to stop stream, got parsed=%v stop=%v", last.Parsed, last.Stop)
	}
}

func TestStartParsedLinePumpHandlesLongSingleSSELine(t *testing.T) {
	payload := strings.Repeat("x", 2*1024*1024+4096)
	results, done := StartParsedLinePump(context.Background(), strings.NewReader(makeLargeContentSSEBody(t, payload)), false, "text")

	var got strings.Builder
	var sawDone bool
	for r := range results {
		for _, p := range r.Parts {
			got.WriteString(p.Text)
		}
		if r.Stop {
			sawDone = true
		}
	}
	if err := <-done; err != nil {
		t.Fatalf("unexpected long-line read error: %v", err)
	}
	if got.String() != payload {
		t.Fatalf("long SSE line payload mismatch: got len=%d want len=%d", got.Len(), len(payload))
	}
	if !sawDone {
		t.Fatal("expected DONE after long SSE line")
	}
}
@@ -44,6 +44,9 @@ func rewriteDSMLToolMarkupOutsideIgnored(text string) string {
			}
			b.WriteString(tag.Name)
			b.WriteString(text[tag.NameEnd : tag.End+1])
			if text[tag.End] != '>' {
				b.WriteByte('>')
			}
			i = tag.End + 1
			continue
		}
@@ -128,34 +128,39 @@ func scanToolMarkupTagAt(text string, start int) (ToolMarkupTag, bool) {
	}
	lower := strings.ToLower(text)
	i := start + 1
	for i < len(text) && text[i] == '<' {
		i++
	}
	closing := false
	if i < len(text) && text[i] == '/' {
		closing = true
		i++
	}
	dsmlLike := false
	if next, ok := consumeToolMarkupPipe(text, i); ok {
		dsmlLike = true
		i = next
	}
	if strings.HasPrefix(lower[i:], "dsml") {
		dsmlLike = true
		i += len("dsml")
		for next, ok := consumeToolMarkupSeparator(text, i); ok; next, ok = consumeToolMarkupSeparator(text, i) {
			i = next
		}
	}
	i, dsmlLike := consumeToolMarkupNamePrefix(lower, text, i)
	name, nameLen := matchToolMarkupName(lower, i)
	if nameLen == 0 {
		return ToolMarkupTag{}, false
	}
	nameEnd := i + nameLen
	nameEndBeforePipes := nameEnd
	for next, ok := consumeToolMarkupPipe(text, nameEnd); ok; next, ok = consumeToolMarkupPipe(text, nameEnd) {
		nameEnd = next
	}
	hasTrailingPipe := nameEnd > nameEndBeforePipes
	if !hasToolMarkupBoundary(text, nameEnd) {
		return ToolMarkupTag{}, false
	}
	end := findXMLTagEnd(text, nameEnd)
	if end < 0 {
		return ToolMarkupTag{}, false
		if !hasTrailingPipe {
			return ToolMarkupTag{}, false
		}
		end = nameEnd - 1
	}
	if hasTrailingPipe {
		if nextLT := strings.IndexByte(text[nameEnd:], '<'); nextLT >= 0 && end >= nameEnd+nextLT {
			end = nameEnd - 1
		}
	}
	trimmed := strings.TrimSpace(text[start : end+1])
	return ToolMarkupTag{
@@ -171,6 +176,74 @@ func scanToolMarkupTagAt(text string, start int) (ToolMarkupTag, bool) {
	}, true
}

func IsPartialToolMarkupTagPrefix(text string) bool {
	if text == "" || text[0] != '<' || strings.Contains(text, ">") {
		return false
	}
	lower := strings.ToLower(text)
	i := 1
	for i < len(text) && text[i] == '<' {
		i++
	}
	if i >= len(text) {
		return true
	}
	if text[i] == '/' {
		i++
	}
	for i <= len(text) {
		if i == len(text) {
			return true
		}
		if hasToolMarkupNamePrefix(lower[i:]) {
			return true
		}
		if strings.HasPrefix("dsml", lower[i:]) {
			return true
		}
		next, ok := consumeToolMarkupNamePrefixOnce(lower, text, i)
		if !ok {
			return false
		}
		i = next
	}
	return false
}

func consumeToolMarkupNamePrefix(lower, text string, idx int) (int, bool) {
|
||||
dsmlLike := false
|
||||
for {
|
||||
next, ok := consumeToolMarkupNamePrefixOnce(lower, text, idx)
|
||||
if !ok {
|
||||
return idx, dsmlLike
|
||||
}
|
||||
idx = next
|
||||
dsmlLike = true
|
||||
}
|
||||
}
|
||||
|
||||
func consumeToolMarkupNamePrefixOnce(lower, text string, idx int) (int, bool) {
|
||||
if next, ok := consumeToolMarkupPipe(text, idx); ok {
|
||||
return next, true
|
||||
}
|
||||
if idx < len(text) && (text[idx] == ' ' || text[idx] == '\t' || text[idx] == '\r' || text[idx] == '\n') {
|
||||
return idx + 1, true
|
||||
}
|
||||
if strings.HasPrefix(lower[idx:], "dsml") {
|
||||
return idx + len("dsml"), true
|
||||
}
|
||||
return idx, false
|
||||
}
|
||||
|
||||
func hasToolMarkupNamePrefix(lowerTail string) bool {
|
||||
for _, name := range toolMarkupNames {
|
||||
if strings.HasPrefix(lowerTail, name) || strings.HasPrefix(name, lowerTail) {
|
||||
return true
|
||||
}
|
||||
}
|
||||
return false
|
||||
}
|
||||
|
||||
func matchToolMarkupName(lower string, start int) (string, int) {
|
||||
for _, name := range toolMarkupNames {
|
||||
if strings.HasPrefix(lower[start:], name) {
|
||||
@@ -193,19 +266,6 @@ func consumeToolMarkupPipe(text string, idx int) (int, bool) {
|
||||
return idx, false
|
||||
}
|
||||
|
||||
func consumeToolMarkupSeparator(text string, idx int) (int, bool) {
|
||||
if idx >= len(text) {
|
||||
return idx, false
|
||||
}
|
||||
if text[idx] == ' ' || text[idx] == '\t' || text[idx] == '\r' || text[idx] == '\n' {
|
||||
return idx + 1, true
|
||||
}
|
||||
if next, ok := consumeToolMarkupPipe(text, idx); ok {
|
||||
return next, true
|
||||
}
|
||||
return idx, false
|
||||
}
|
||||
|
||||
func hasToolMarkupBoundary(text string, idx int) bool {
|
||||
if idx >= len(text) {
|
||||
return true
|
||||
|
||||
@@ -41,6 +41,52 @@ func TestParseToolCallsSupportsDSMLShell(t *testing.T) {
	}
}

func TestParseToolCallsToleratesDSMLTrailingPipeTagTerminator(t *testing.T) {
	text := strings.Join([]string{
		`<|DSML|tool_calls| `,
		` <|DSML|invoke name="terminal">`,
		` <|DSML|parameter name="command"><![CDATA[find "/home" -type d]]></|DSML|parameter>`,
		` <|DSML|parameter name="timeout"><![CDATA[10]]></|DSML|parameter>`,
		` </|DSML|invoke>`,
		`</|DSML|tool_calls>`,
	}, "\n")
	calls := ParseToolCalls(text, []string{"terminal"})
	if len(calls) != 1 {
		t.Fatalf("expected one trailing-pipe DSML call, got %#v", calls)
	}
	if calls[0].Name != "terminal" {
		t.Fatalf("expected terminal tool, got %#v", calls[0])
	}
	if calls[0].Input["command"] != `find "/home" -type d` {
		t.Fatalf("expected command argument, got %#v", calls[0].Input)
	}
	if calls[0].Input["timeout"] != float64(10) {
		t.Fatalf("expected numeric timeout, got %#v", calls[0].Input)
	}
}

func TestParseToolCallsToleratesExtraLeadingLessThanBeforeDSML(t *testing.T) {
	text := `<<|DSML|tool_calls><<|DSML|invoke name="Bash"><<|DSML|parameter name="command"><![CDATA[pwd]]></|DSML|parameter></|DSML|invoke></|DSML|tool_calls>`
	calls := ParseToolCalls(text, []string{"Bash"})
	if len(calls) != 1 {
		t.Fatalf("expected one extra-leading-less-than DSML call, got %#v", calls)
	}
	if calls[0].Name != "Bash" || calls[0].Input["command"] != "pwd" {
		t.Fatalf("unexpected extra-leading-less-than DSML parse result: %#v", calls[0])
	}
}

func TestParseToolCallsToleratesRepeatedDSMLPrefixNoise(t *testing.T) {
	text := `<<DSML|DSML|tool_calls><<DSML|DSML|invoke name="Bash"><<DSML|DSML|parameter name="command"><![CDATA[git status]]></DSML|DSML|parameter></DSML|DSML|invoke></DSML|DSML|tool_calls>`
	calls := ParseToolCalls(text, []string{"Bash"})
	if len(calls) != 1 {
		t.Fatalf("expected one repeated-prefix DSML call, got %#v", calls)
	}
	if calls[0].Name != "Bash" || calls[0].Input["command"] != "git status" {
		t.Fatalf("unexpected repeated-prefix DSML parse result: %#v", calls[0])
	}
}

func TestParseToolCallsSupportsDSMLShellWithCanonicalExampleInCDATA(t *testing.T) {
	content := `<tool_calls><invoke name="demo"><parameter name="value">x</parameter></invoke></tool_calls>`
	text := `<|DSML|tool_calls><|DSML|invoke name="Write"><|DSML|parameter name="file_path">notes.md</|DSML|parameter><|DSML|parameter name="content"><![CDATA[` + content + `]]></|DSML|parameter></|DSML|invoke></|DSML|tool_calls>`
@@ -1,10 +1,6 @@
package toolstream

import (
	"strings"

	"ds2api/internal/toolcall"
)
import "ds2api/internal/toolcall"

func ProcessChunk(state *State, chunk string, toolNames []string) []Event {
	if state == nil {
@@ -174,31 +170,27 @@ func findToolSegmentStart(state *State, s string) int {
	if s == "" {
		return -1
	}
	lower := strings.ToLower(s)
	offset := 0
	for {
		bestKeyIdx := -1
		matchedTag := ""
		for _, tag := range xmlToolTagsToDetect {
			idx := strings.Index(lower[offset:], tag)
			if idx >= 0 {
				idx += offset
				if bestKeyIdx < 0 || idx < bestKeyIdx {
					bestKeyIdx = idx
					matchedTag = tag
				}
			}
		}
		if bestKeyIdx < 0 {
		tag, ok := toolcall.FindToolMarkupTagOutsideIgnored(s, offset)
		if !ok {
			return -1
		}
		if !insideCodeFenceWithState(state, s[:bestKeyIdx]) {
			return bestKeyIdx
		start := includeDuplicateLeadingLessThan(s, tag.Start)
		if !insideCodeFenceWithState(state, s[:start]) {
			return start
		}
		offset = bestKeyIdx + len(matchedTag)
		offset = tag.End + 1
	}
}

func includeDuplicateLeadingLessThan(s string, idx int) int {
	for idx > 0 && s[idx-1] == '<' {
		idx--
	}
	return idx
}

func consumeToolCapture(state *State, toolNames []string) (prefix string, calls []toolcall.ParsedToolCall, suffix string, ready bool) {
	captured := state.capture.String()
	if captured == "" {
@@ -153,27 +153,14 @@ func findPartialXMLToolTagStart(s string) int {
	if lastLT < 0 {
		return -1
	}
	tail := s[lastLT:]
	start := includeDuplicateLeadingLessThan(s, lastLT)
	tail := s[start:]
	// If there's a '>' in the tail, the tag is closed — not partial.
	if strings.Contains(tail, ">") {
		return -1
	}
	lowerTail := strings.ToLower(tail)
	for _, tag := range []string{
		"<tool_calls", "<invoke", "<parameter",
		"<|tool_calls", "<|invoke", "<|parameter",
		"<|tool_calls", "<|invoke", "<|parameter",
		"<|dsml|tool_calls", "<|dsml|invoke", "<|dsml|parameter",
		"<|dsml|tool_calls", "<|dsml|invoke", "<|dsml|parameter",
		"<dsmltool_calls", "<dsmlinvoke", "<dsmlparameter",
		"<dsml tool_calls", "<dsml invoke", "<dsml parameter",
		"<dsml|tool_calls", "<dsml|invoke", "<dsml|parameter",
		"<|dsmltool_calls", "<|dsmlinvoke", "<|dsmlparameter",
		"<|dsml tool_calls", "<|dsml invoke", "<|dsml parameter",
	} {
		if strings.HasPrefix(tag, lowerTail) {
			return lastLT
		}
	if toolcall.IsPartialToolMarkupTagPrefix(tail) {
		return start
	}
	return -1
}
@@ -1,35 +0,0 @@
package toolstream

import "regexp"

// --- XML tool call support for the streaming sieve ---

//nolint:unused // kept as explicit tag inventory for future XML sieve refinements.
var xmlToolCallClosingTags = []string{"</tool_calls>", "</|dsml|tool_calls>", "</|dsmltool_calls>", "</|dsml tool_calls>", "</dsml|tool_calls>", "</dsmltool_calls>", "</dsml tool_calls>", "</|tool_calls>", "</|tool_calls>"}

// xmlToolCallBlockPattern matches a complete canonical XML tool call block.
//
//nolint:unused // reserved for future fast-path XML block detection.
var xmlToolCallBlockPattern = regexp.MustCompile(`(?is)((?:<tool_calls\b|<\|dsml\|tool_calls\b)[^>]*>\s*(?:.*?)\s*(?:</tool_calls>|</\|dsml\|tool_calls>))`)

// xmlToolTagsToDetect is the set of XML tag prefixes used by findToolSegmentStart.
var xmlToolTagsToDetect = []string{
	"<|dsml|tool_calls>", "<|dsml|tool_calls\n", "<|dsml|tool_calls ",
	"<|dsml|tool_calls>", "<|dsml|tool_calls\n", "<|dsml|tool_calls ",
	"<|dsml|invoke ", "<|dsml|invoke\n", "<|dsml|invoke\t", "<|dsml|invoke\r",
	"<|dsmltool_calls>", "<|dsmltool_calls\n", "<|dsmltool_calls ",
	"<|dsmlinvoke ", "<|dsmlinvoke\n", "<|dsmlinvoke\t", "<|dsmlinvoke\r",
	"<|dsml tool_calls>", "<|dsml tool_calls\n", "<|dsml tool_calls ",
	"<|dsml invoke ", "<|dsml invoke\n", "<|dsml invoke\t", "<|dsml invoke\r",
	"<dsml|tool_calls>", "<dsml|tool_calls\n", "<dsml|tool_calls ",
	"<dsml|invoke ", "<dsml|invoke\n", "<dsml|invoke\t", "<dsml|invoke\r",
	"<dsmltool_calls>", "<dsmltool_calls\n", "<dsmltool_calls ",
	"<dsmlinvoke ", "<dsmlinvoke\n", "<dsmlinvoke\t", "<dsmlinvoke\r",
	"<dsml tool_calls>", "<dsml tool_calls\n", "<dsml tool_calls ",
	"<dsml invoke ", "<dsml invoke\n", "<dsml invoke\t", "<dsml invoke\r",
	"<|tool_calls>", "<|tool_calls\n", "<|tool_calls ",
	"<|invoke ", "<|invoke\n", "<|invoke\t", "<|invoke\r",
	"<|tool_calls>", "<|tool_calls\n", "<|tool_calls ",
	"<|invoke ", "<|invoke\n", "<|invoke\t", "<|invoke\r",
	"<tool_calls>", "<tool_calls\n", "<tool_calls ", "<invoke ", "<invoke\n", "<invoke\t", "<invoke\r",
}
@@ -72,6 +72,97 @@ func TestProcessToolSieveInterceptsDSMLToolCallWithoutLeak(t *testing.T) {
	}
}

func TestProcessToolSieveInterceptsDSMLTrailingPipeToolCallWithoutLeak(t *testing.T) {
	var state State
	chunks := []string{
		"<|DSML|tool_calls| \n",
		` <|DSML|invoke name="terminal">` + "\n",
		` <|DSML|parameter name="command"><![CDATA[find "/home" -type d]]></|DSML|parameter>` + "\n",
		` <|DSML|parameter name="timeout"><![CDATA[10]]></|DSML|parameter>` + "\n",
		" </|DSML|invoke>\n",
		"</|DSML|tool_calls>",
	}
	var events []Event
	for _, c := range chunks {
		events = append(events, ProcessChunk(&state, c, []string{"terminal"})...)
	}
	events = append(events, Flush(&state, []string{"terminal"})...)

	var textContent strings.Builder
	var calls []any
	for _, evt := range events {
		textContent.WriteString(evt.Content)
		for _, call := range evt.ToolCalls {
			calls = append(calls, call)
		}
	}
	if text := textContent.String(); strings.Contains(strings.ToLower(text), "dsml") || strings.Contains(text, "terminal") {
		t.Fatalf("trailing-pipe DSML tool call leaked to text: %q events=%#v", text, events)
	}
	if len(calls) != 1 {
		t.Fatalf("expected one trailing-pipe DSML tool call, got %d events=%#v", len(calls), events)
	}
}

func TestProcessToolSieveInterceptsExtraLeadingLessThanDSMLToolCallWithoutLeak(t *testing.T) {
	var state State
	chunks := []string{
		"<<|DSML|tool_calls>\n",
		` <<|DSML|invoke name="Bash">` + "\n",
		` <<|DSML|parameter name="command"><![CDATA[pwd]]></|DSML|parameter>` + "\n",
		" </|DSML|invoke>\n",
		"</|DSML|tool_calls>",
	}
	var events []Event
	for _, c := range chunks {
		events = append(events, ProcessChunk(&state, c, []string{"Bash"})...)
	}
	events = append(events, Flush(&state, []string{"Bash"})...)

	var textContent strings.Builder
	toolCalls := 0
	for _, evt := range events {
		textContent.WriteString(evt.Content)
		toolCalls += len(evt.ToolCalls)
	}
	if text := textContent.String(); strings.Contains(text, "<") || strings.Contains(text, "Bash") {
		t.Fatalf("extra-leading-less-than DSML tool call leaked to text: %q events=%#v", text, events)
	}
	if toolCalls != 1 {
		t.Fatalf("expected one extra-leading-less-than DSML tool call, got %d events=%#v", toolCalls, events)
	}
}

func TestProcessToolSieveInterceptsRepeatedDSMLPrefixNoiseWithoutLeak(t *testing.T) {
	var state State
	chunks := []string{
		"<<DSML|DSML|tool",
		"_calls>\n",
		` <<DSML|DSML|invoke name="Bash">` + "\n",
		` <<DSML|DSML|parameter name="command"><![CDATA[git status]]></DSML|DSML|parameter>` + "\n",
		" </DSML|DSML|invoke>\n",
		"</DSML|DSML|tool_calls>",
	}
	var events []Event
	for _, c := range chunks {
		events = append(events, ProcessChunk(&state, c, []string{"Bash"})...)
	}
	events = append(events, Flush(&state, []string{"Bash"})...)

	var textContent strings.Builder
	toolCalls := 0
	for _, evt := range events {
		textContent.WriteString(evt.Content)
		toolCalls += len(evt.ToolCalls)
	}
	if text := textContent.String(); strings.Contains(strings.ToLower(text), "dsml") || strings.Contains(text, "Bash") {
		t.Fatalf("repeated-prefix DSML tool call leaked to text: %q events=%#v", text, events)
	}
	if toolCalls != 1 {
		t.Fatalf("expected one repeated-prefix DSML tool call, got %d events=%#v", toolCalls, events)
	}
}

func TestProcessToolSieveHandlesLongXMLToolCall(t *testing.T) {
	var state State
	const toolName = "write_to_file"
@@ -442,6 +533,8 @@ func TestFindToolSegmentStartDetectsXMLToolCalls(t *testing.T) {
		want int
	}{
		{"tool_calls_tag", "some text <tool_calls>\n", 10},
		{"dsml_trailing_pipe_tag", "some text <|DSML|tool_calls| \n", 10},
		{"dsml_extra_leading_less_than", "some text <<|DSML|tool_calls>\n", 10},
		{"invoke_tag_missing_wrapper", "some text <invoke name=\"read_file\">\n", 10},
		{"bare_tool_call_text", "prefix <tool_call>\n", -1},
		{"xml_inside_code_fence", "```xml\n<tool_calls><invoke name=\"read_file\"></invoke></tool_calls>\n```", -1},
@@ -465,6 +558,8 @@ func TestFindPartialXMLToolTagStart(t *testing.T) {
		want int
	}{
		{"partial_tool_calls", "Hello <tool_ca", 6},
		{"partial_dsml_trailing_pipe", "Hello <|DSML|tool_calls|", 6},
		{"partial_dsml_extra_leading_less_than", "Hello <<|DSML|tool_calls", 6},
		{"partial_invoke", "Hello <inv", 6},
		{"bare_tool_call_not_held", "Hello <tool_name", -1},
		{"partial_lt_only", "Text <", 5},
@@ -23,9 +23,9 @@ func BuildOpenAIChatCompletion(completionID, model, finalPrompt, finalThinking,
		messageObj["tool_calls"] = toolcall.FormatOpenAIToolCalls(detected, nil)
		messageObj["content"] = nil
	}
	promptTokens := EstimateTokens(finalPrompt)
	reasoningTokens := EstimateTokens(finalThinking)
	completionTokens := EstimateTokens(finalText)
	promptTokens := CountPromptTokens(finalPrompt, model)
	reasoningTokens := CountOutputTokens(finalThinking, model)
	completionTokens := CountOutputTokens(finalText, model)

	return map[string]any{
		"id": completionID,
@@ -86,9 +86,9 @@ func BuildOpenAIResponseObject(responseID, model, finalPrompt, finalThinking, fi
			"content": content,
		})
	}
	promptTokens := EstimateTokens(finalPrompt)
	reasoningTokens := EstimateTokens(finalThinking)
	completionTokens := EstimateTokens(finalText)
	promptTokens := CountPromptTokens(finalPrompt, model)
	reasoningTokens := CountOutputTokens(finalThinking, model)
	completionTokens := CountOutputTokens(finalText, model)
	return map[string]any{
		"id": responseID,
		"type": "response",
@@ -140,8 +140,8 @@ func BuildClaudeMessageResponse(messageID, model string, normalizedMessages []an
		"stop_reason": stopReason,
		"stop_sequence": nil,
		"usage": map[string]any{
			"input_tokens": EstimateTokens(fmt.Sprintf("%v", normalizedMessages)),
			"output_tokens": EstimateTokens(finalThinking) + EstimateTokens(finalText),
			"input_tokens": CountPromptTokens(fmt.Sprintf("%v", normalizedMessages), model),
			"output_tokens": CountOutputTokens(finalThinking, model) + CountOutputTokens(finalText, model),
		},
	}
}
46  internal/util/token_count.go  Normal file
@@ -0,0 +1,46 @@
package util

const (
	defaultTokenizerModel = "gpt-4o"
	claudeTokenizerModel  = "claude"
)

func CountPromptTokens(text, model string) int {
	base := maxTokenCount(
		EstimateTokens(text),
		countWithTokenizer(text, model),
	)
	if base <= 0 {
		return 0
	}
	return base + conservativePromptPadding(base)
}

func CountOutputTokens(text, model string) int {
	base := maxTokenCount(
		EstimateTokens(text),
		countWithTokenizer(text, model),
	)
	if base <= 0 {
		return 0
	}
	return base
}

func conservativePromptPadding(base int) int {
	padding := base / 50
	if padding < 4 {
		padding = 4
	}
	return padding
}

func maxTokenCount(values ...int) int {
	best := 0
	for _, v := range values {
		if v > best {
			best = v
		}
	}
	return best
}
7  internal/util/token_count_heuristic.go  Normal file
@@ -0,0 +1,7 @@
//go:build 386 || arm || mips || mipsle || wasm

package util

func countWithTokenizer(_, _ string) int {
	return 0
}
44  internal/util/token_count_tiktoken.go  Normal file
@@ -0,0 +1,44 @@
//go:build !386 && !arm && !mips && !mipsle && !wasm

package util

import (
	"strings"

	tiktoken "github.com/hupe1980/go-tiktoken"
)

func countWithTokenizer(text, model string) int {
	text = strings.TrimSpace(text)
	if text == "" {
		return 0
	}
	encoding, err := tiktoken.NewEncodingForModel(tokenizerModelForCount(model))
	if err != nil {
		return 0
	}
	ids, _, err := encoding.Encode(text, nil, nil)
	if err != nil {
		return 0
	}
	return len(ids)
}

func tokenizerModelForCount(model string) string {
	model = strings.ToLower(strings.TrimSpace(model))
	if model == "" {
		return defaultTokenizerModel
	}
	switch {
	case strings.HasPrefix(model, "claude"):
		return claudeTokenizerModel
	case strings.HasPrefix(model, "gpt-4"), strings.HasPrefix(model, "gpt-5"), strings.HasPrefix(model, "o1"), strings.HasPrefix(model, "o3"), strings.HasPrefix(model, "o4"):
		return defaultTokenizerModel
	case strings.HasPrefix(model, "deepseek-v4"):
		return defaultTokenizerModel
	case strings.HasPrefix(model, "deepseek"):
		return defaultTokenizerModel
	default:
		return defaultTokenizerModel
	}
}
@@ -20,4 +20,3 @@ internal/js/helpers/stream-tool-sieve/sieve-xml.js
internal/js/helpers/stream-tool-sieve/jsonscan.js
internal/js/helpers/stream-tool-sieve/parse.js
internal/js/helpers/stream-tool-sieve/format.js
internal/js/helpers/stream-tool-sieve/tool-keywords.js
@@ -210,6 +210,37 @@ test('vercel stream retries empty output once and keeps one terminal frame', asy
  assert.match(completionBodies[1].prompt, /Previous reply had no visible output\. Please regenerate the visible final answer or tool call now\.$/);
});

test('vercel stream coalesces many small content deltas while keeping one choice', async () => {
  const lines = Array.from({ length: 100 }, () => `data: ${JSON.stringify({ p: 'response/content', v: '字' })}\n\n`);
  lines.push('data: [DONE]\n\n');
  const { frames } = await runMockVercelStream(lines);
  const parsed = frames.filter((frame) => frame !== '[DONE]').map((frame) => JSON.parse(frame));
  const contentFrames = parsed.filter((item) => item.choices?.[0]?.delta?.content);
  const content = contentFrames.map((item) => item.choices[0].delta.content).join('');
  assert.equal(content, '字'.repeat(100));
  assert.ok(contentFrames.length < 100, `expected fewer than 100 content frames, got ${contentFrames.length}`);
  for (const item of parsed) {
    assert.equal(item.choices.length, 1);
  }
});

test('vercel stream flushes reasoning before content and before stop', async () => {
  const { frames } = await runMockVercelStream([
    `data: ${JSON.stringify({ p: 'response/fragments', o: 'APPEND', v: [
      { type: 'THINK', content: '思考' },
      { type: 'THINK', content: '过程' },
      { type: 'RESPONSE', content: '回答' },
    ] })}\n\n`,
    'data: [DONE]\n\n',
  ], { thinking_enabled: true });
  const parsed = frames.filter((frame) => frame !== '[DONE]').map((frame) => JSON.parse(frame));
  const reasoning = parsed.map((item) => item.choices?.[0]?.delta?.reasoning_content || '').join('');
  const content = parsed.map((item) => item.choices?.[0]?.delta?.content || '').join('');
  assert.equal(reasoning, '思考过程');
  assert.equal(content, '回答');
  assert.equal(parsed.at(-1).choices[0].finish_reason, 'stop');
});

test('vercel stream exhausts DeepSeek continue before synthetic retry', async () => {
  const { frames, fetchURLs, fetchBodies } = await runMockVercelStreamSequence([
    [
@@ -227,6 +258,42 @@ test('vercel stream exhausts DeepSeek continue before synthetic retry', async ()
  assert.equal(fetchBodies.some((body) => String(body.prompt || '').includes('Previous reply had no visible output')), false);
});

test('vercel stream continues direct quasi_status incomplete before final tool call', async () => {
  const { frames, fetchURLs } = await runMockVercelStreamSequence([
    [
      'data: {"response_message_id":7,"p":"response/content","v":"<tool_calls><invoke name=\\"write_file\\"><parameter name=\\"content\\"><![CDATA[part-one"}\n\n',
      'data: {"p":"response/quasi_status","v":"INCOMPLETE"}\n\n',
      'data: [DONE]\n\n',
    ],
    [
      'data: {"response_message_id":8,"p":"response/content","v":"-part-two]]></parameter></invoke></tool_calls>"}\n\n',
      'data: {"p":"response/status","v":"FINISHED"}\n\n',
      'data: [DONE]\n\n',
    ],
  ], { tool_names: ['write_file'] });
  const parsed = frames.filter((frame) => frame !== '[DONE]').map((frame) => JSON.parse(frame));
  const toolDelta = parsed.find((item) => item.choices?.[0]?.delta?.tool_calls);
  assert.equal(fetchURLs.filter((url) => url === 'https://chat.deepseek.com/api/v0/chat/continue').length, 1);
  assert.ok(toolDelta);
  const args = JSON.parse(toolDelta.choices[0].delta.tool_calls[0].function.arguments);
  assert.equal(args.content, 'part-one-part-two');
  assert.equal(parsed.at(-1).choices[0].finish_reason, 'tool_calls');
});

test('vercel stream usage completion_tokens does not double-count visible output', async () => {
  const sample = 'abcdefghijklmnopqrst';
  const { frames } = await runMockVercelStream([
    `data: ${JSON.stringify({ p: 'response/content', v: sample })}\n\n`,
    'data: [DONE]\n\n',
  ]);
  const parsed = frames.filter((frame) => frame !== '[DONE]').map((frame) => JSON.parse(frame));
  const terminal = parsed.find((item) => Array.isArray(item.choices) && item.choices[0] && item.choices[0].finish_reason);
  assert.ok(terminal);
  assert.equal(terminal.usage.completion_tokens, 5);
});

test('vercel stream reuses prior PoW when refresh fails', async () => {
  const originalFetch = global.fetch;
  const fetchURLs = [];
@@ -497,14 +564,23 @@ test('parseChunkForContent drops thinking content when thinking is disabled', ()
    'text',
  );
  assert.equal(thinking.finished, false);
  assert.equal(thinking.newType, 'text');
  assert.equal(thinking.newType, 'thinking');
  assert.deepEqual(thinking.parts, []);

  const hiddenContinuation = parseChunkForContent(
    { v: 'still hidden' },
    false,
    thinking.newType,
  );
  assert.equal(hiddenContinuation.newType, 'thinking');
  assert.deepEqual(hiddenContinuation.parts, []);

  const answer = parseChunkForContent(
    { p: 'response/content', v: 'visible answer' },
    false,
    thinking.newType,
    hiddenContinuation.newType,
  );
  assert.equal(answer.newType, 'text');
  assert.deepEqual(answer.parts, [{ text: 'visible answer', type: 'text' }]);
});

@@ -525,6 +601,12 @@ test('parseChunkForContent supports wrapped response.fragments object shape', ()
  assert.equal(parsed.parts.map((p) => p.text).join(''), 'AB');
});

test('parseChunkForContent reads object-shaped response/content payloads (Go parity)', () => {
  const parsed = parseChunkForContent({ p: 'response/content', v: { text: 'vision text' } }, false, 'text', true);
  assert.equal(parsed.parsed, true);
  assert.deepEqual(parsed.parts, [{ text: 'vision text', type: 'text' }]);
});

test('parseChunkForContent preserves space-only content tokens', () => {
  const chunk = {
    p: 'response/content',
@@ -57,6 +57,49 @@ test('parseToolCalls parses DSML shell as XML-compatible tool call', () => {
  assert.deepEqual(calls[0].input, { path: 'README.MD' });
});

test('parseToolCalls tolerates DSML trailing pipe tag terminator', () => {
  const payload = [
    '<|DSML|tool_calls| ',
    ' <|DSML|invoke name="terminal">',
    ' <|DSML|parameter name="command"><![CDATA[find "/home" -type d]]></|DSML|parameter>',
    ' <|DSML|parameter name="timeout"><![CDATA[10]]></|DSML|parameter>',
    ' </|DSML|invoke>',
    '</|DSML|tool_calls>',
  ].join('\n');
  const calls = parseToolCalls(payload, ['terminal']);
  assert.equal(calls.length, 1);
  assert.equal(calls[0].name, 'terminal');
  assert.deepEqual(calls[0].input, { command: 'find "/home" -type d', timeout: 10 });
});

test('parseToolCalls tolerates extra leading less-than before DSML tags', () => {
  const payload = [
    '<<|DSML|tool_calls>',
    ' <<|DSML|invoke name="Bash">',
    ' <<|DSML|parameter name="command"><![CDATA[pwd]]></|DSML|parameter>',
    ' </|DSML|invoke>',
    '</|DSML|tool_calls>',
  ].join('\n');
  const calls = parseToolCalls(payload, ['Bash']);
  assert.equal(calls.length, 1);
  assert.equal(calls[0].name, 'Bash');
  assert.deepEqual(calls[0].input, { command: 'pwd' });
});

test('parseToolCalls tolerates repeated DSML prefix noise', () => {
  const payload = [
    '<<DSML|DSML|tool_calls>',
    ' <<DSML|DSML|invoke name="Bash">',
    ' <<DSML|DSML|parameter name="command"><![CDATA[git status]]></DSML|DSML|parameter>',
    ' </DSML|DSML|invoke>',
    '</DSML|DSML|tool_calls>',
  ].join('\n');
  const calls = parseToolCalls(payload, ['Bash']);
  assert.equal(calls.length, 1);
  assert.equal(calls[0].name, 'Bash');
  assert.deepEqual(calls[0].input, { command: 'git status' });
});

test('parseToolCalls tolerates DSML space-separator typo', () => {
  const payload = '<|DSML tool_calls><|DSML invoke name="Read"><|DSML parameter name="file_path"><![CDATA[/tmp/input.txt]]></|DSML parameter></|DSML invoke></|DSML tool_calls>';
  const calls = parseToolCalls(payload, ['Read']);
@@ -285,6 +328,39 @@ test('sieve emits tool_calls for DSML space-separator typo', () => {
  assert.equal(text.includes('<|DSML invoke'), false);
});

test('sieve emits tool_calls for DSML trailing pipe tag terminator', () => {
  const events = runSieve([
    '<|DSML|tool_calls| \n',
    '<|DSML|invoke name="terminal">\n',
    '<|DSML|parameter name="command"><![CDATA[find "/home" -type d]]></|DSML|parameter>\n',
    '<|DSML|parameter name="timeout"><![CDATA[10]]></|DSML|parameter>\n',
    '</|DSML|invoke>\n',
    '</|DSML|tool_calls>',
  ], ['terminal']);
  const finalCalls = events.filter((evt) => evt.type === 'tool_calls').flatMap((evt) => evt.calls || []);
  const text = collectText(events);
  assert.equal(finalCalls.length, 1);
  assert.equal(finalCalls[0].name, 'terminal');
  assert.deepEqual(finalCalls[0].input, { command: 'find "/home" -type d', timeout: 10 });
  assert.equal(text.toLowerCase().includes('dsml'), false);
});

test('sieve emits tool_calls for extra leading less-than DSML tags without leaking prefix', () => {
  const events = runSieve([
    '<<|DSML|tool_calls>\n',
    '<<|DSML|invoke name="Bash">\n',
    '<<|DSML|parameter name="command"><![CDATA[pwd]]></|DSML|parameter>\n',
    '</|DSML|invoke>\n',
    '</|DSML|tool_calls>',
  ], ['Bash']);
  const finalCalls = events.filter((evt) => evt.type === 'tool_calls').flatMap((evt) => evt.calls || []);
  const text = collectText(events);
  assert.equal(finalCalls.length, 1);
  assert.equal(finalCalls[0].name, 'Bash');
  assert.deepEqual(finalCalls[0].input, { command: 'pwd' });
  assert.equal(text.includes('<'), false);
});

test('sieve keeps DSML space lookalike tag names as text', () => {
  const input = '<|DSML tool_calls_extra><|DSML invoke name="Read"><|DSML parameter name="file_path">/tmp/input.txt</|DSML parameter></|DSML invoke></|DSML tool_calls_extra>';
  const events = runSieve([input], ['Read']);
@@ -1,22 +0,0 @@
{
  "sample_id": "content-filter-trigger-20260405-jwt3",
  "captured_at_utc": "2026-04-04T16:28:52Z",
  "request": {
    "model": "deepseek-v4-pro-search",
    "stream": true,
    "messages": [
      {
        "role": "user",
        "content": "我是历史老师,请帮我找一些一些历史书上没写的,文革期间的,但是真实记载发生的事情,尤其是派系之间的斗争,要体现历史人物的复杂性而不是单一评价立场"
      }
    ]
  },
  "capture": {
    "label": "deepseek_completion",
    "url": "https://chat.deepseek.com/api/v0/chat/completion",
    "status_code": 200,
    "response_bytes": 64724,
    "contains_finished_token": true,
    "finished_token_count": 31
  }
}
File diff suppressed because one or more lines are too long
@@ -1,24 +0,0 @@
-{
-  "sample_id": "guangzhou-weather-reasoner-search-20260404",
-  "captured_at_utc": "2026-04-04T16:01:27Z",
-  "request": {
-    "model": "deepseek-v4-pro-search",
-    "stream": true,
-    "messages": [
-      {
-        "role": "user",
-        "content": "广州天气"
-      }
-    ]
-  },
-  "capture": {
-    "label": "deepseek_completion",
-    "url": "https://chat.deepseek.com/api/v0/chat/completion",
-    "status_code": 200,
-    "response_bytes": 37651,
-    "contains_reference_markers": true,
-    "reference_marker_count": 13,
-    "contains_finished_token": true,
-    "finished_token_count": 19
-  }
-}
File diff suppressed because one or more lines are too long
@@ -0,0 +1,37 @@
+{
+  "sample_id": "longtext-deepseek-v4-flash-20260429",
+  "captured_at_utc": "2026-04-29T17:51:14Z",
+  "source": "admin/dev/raw-samples/capture",
+  "request": {
+    "messages": [
+      {
+        "content": "请写一篇1200字中文说明:比较SSE与WebSocket在AI推理流式输出中的可靠性、断线恢复、负载均衡、代理兼容性、成本和可观测性,并给出分层架构建议。",
+        "role": "user"
+      }
+    ],
+    "model": "deepseek-v4-flash",
+    "stream": true
+  },
+  "capture": {
+    "label": "deepseek_upload_file",
+    "url": "https://chat.deepseek.com/api/v0/file/upload_file",
+    "status_code": 200,
+    "response_bytes": 48441,
+    "rounds": [
+      {
+        "label": "deepseek_upload_file",
+        "url": "https://chat.deepseek.com/api/v0/file/upload_file",
+        "status_code": 200,
+        "response_bytes": 349
+      },
+      {
+        "label": "deepseek_completion",
+        "url": "https://chat.deepseek.com/api/v0/chat/completion",
+        "status_code": 200,
+        "response_bytes": 48091
+      }
+    ],
+    "contains_finished_token": true,
+    "finished_token_count": 2
+  }
+}
File diff suppressed because it is too large
@@ -0,0 +1,37 @@
+{
+  "sample_id": "longtext-deepseek-v4-pro-20260429",
+  "captured_at_utc": "2026-04-29T17:52:45Z",
+  "source": "admin/dev/raw-samples/capture",
+  "request": {
+    "messages": [
+      {
+        "content": "请写一篇1200字中文说明:比较SSE与WebSocket在AI推理流式输出中的可靠性、断线恢复、负载均衡、代理兼容性、成本和可观测性,并给出分层架构建议。",
+        "role": "user"
+      }
+    ],
+    "model": "deepseek-v4-pro",
+    "stream": true
+  },
+  "capture": {
+    "label": "deepseek_upload_file",
+    "url": "https://chat.deepseek.com/api/v0/file/upload_file",
+    "status_code": 200,
+    "response_bytes": 55354,
+    "rounds": [
+      {
+        "label": "deepseek_upload_file",
+        "url": "https://chat.deepseek.com/api/v0/file/upload_file",
+        "status_code": 200,
+        "response_bytes": 780
+      },
+      {
+        "label": "deepseek_completion",
+        "url": "https://chat.deepseek.com/api/v0/chat/completion",
+        "status_code": 200,
+        "response_bytes": 54573
+      }
+    ],
+    "contains_finished_token": true,
+    "finished_token_count": 2
+  }
+}
File diff suppressed because it is too large
@@ -1,8 +1,8 @@
 {
   "version": 1,
   "default_samples": [
-    "guangzhou-weather-reasoner-search-20260404",
-    "content-filter-trigger-20260405-jwt3"
+    "longtext-deepseek-v4-flash-20260429",
+    "longtext-deepseek-v4-pro-20260429"
   ],
-  "notes": "Canonical raw stream samples used by the default replay simulator."
+  "notes": "Canonical long-text upstream raw stream samples refreshed on 2026-04-29."
 }