Mirror of https://github.com/CJackHwang/ds2api.git
Synced 2026-05-02 07:25:26 +08:00

Compare commits: 2671298439 ... dev (21 commits)
Commits in range:

- 049e40e5f1
- b3c54fcf3d
- 1c38709d32
- 28e800c670
- 4389e02b29
- e2756f800d
- 55abf64717
- 76ee2faa12
- 0bca6e2cee
- 934b40e572
- dd5a0c5213
- 43402e7a26
- 6373c001f5
- 3430322e81
- df1cfac9bc
- 706e68de23
- 83b4c7bcad
- 445c95a4f2
- 0a6ef8e3f2
- fd0ec29991
- 94c1acace5
.github/workflows/release-artifacts.yml (vendored, 1 line changed)
@@ -47,7 +47,6 @@ jobs:
       - name: Release Blocking Gates
         run: |
-          ./tests/scripts/check-stage6-manual-smoke.sh
           ./tests/scripts/check-refactor-line-gate.sh
           ./tests/scripts/run-unit-all.sh
.gitignore (vendored, 1 line changed)
@@ -29,6 +29,7 @@ yarn.lock
 pnpm-lock.yaml

 # Build artifacts
 dist/
 *.tsbuildinfo
 .cache/
 .parcel-cache/
API.en.md (26 lines changed)
@@ -33,6 +33,8 @@ Docs: [Overview](README.en.md) / [Architecture](docs/ARCHITECTURE.en.md) / [Depl
 | Health probes | `GET /healthz`, `GET /readyz` |
 | CORS | Enabled (uniformly covers `/v1/*`, `/anthropic/*`, `/v1beta/models/*`, and `/admin/*`; echoes the browser `Origin` when present, otherwise `*`; default allow-list includes `Content-Type`, `Authorization`, `X-API-Key`, `X-Ds2-Target-Account`, `X-Ds2-Source`, `X-Vercel-Protection-Bypass`, `X-Goog-Api-Key`, `Anthropic-Version`, `Anthropic-Beta`, and also accepts third-party preflight-requested headers such as `x-stainless-*`; `/v1/chat/completions` on Vercel Node Runtime matches the same behavior; internal-only `X-Ds2-Internal-Token` remains blocked) |
+
+- All JSON request bodies must be valid UTF-8; malformed byte sequences are rejected on ingress with `400 invalid json`.

 ### 3.0 Adapter-Layer Notes

 - OpenAI / Claude / Gemini protocols are now mounted on one shared `chi` router tree assembled in `internal/server/router.go`.
@@ -81,7 +83,7 @@ Two header formats accepted:
 - Token is in `config.keys` → **Managed account mode**: DS2API auto-selects an account via rotation
 - Token is not in `config.keys` → **Direct token mode**: treated as a DeepSeek token directly

-**Optional header**: `X-Ds2-Target-Account: <email_or_mobile>` — Pin a specific managed account.
+**Optional header**: `X-Ds2-Target-Account: <email_or_mobile>` — Pin a specific managed account; if the target account does not exist or the managed-account queue is exhausted, the request returns `429`, and current responses do not include `Retry-After`. If the account exists but login/refresh fails, the request returns the underlying `401` or upstream error.

 Gemini-compatible clients can also send `x-goog-api-key`, `?key=`, or `?api_key=` as the caller credential source.
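Since a `429` from an exhausted managed-account queue carries no `Retry-After`, the caller has to supply its own retry schedule. A minimal client-side sketch, assuming nothing about DS2API beyond the documented status code; the function names and backoff values are illustrative, not part of the project:

```python
# Client-side retry for 429 responses that carry no Retry-After header.
# The backoff schedule is the caller's own choice; values here are examples.
import time

def backoff_delays(max_attempts=4, base=0.5, cap=8.0):
    """Exponential backoff schedule in seconds."""
    return [min(cap, base * (2 ** i)) for i in range(max_attempts)]

def call_with_retry(send, max_attempts=4, base=0.5):
    """send() performs one request and returns an HTTP status code.

    Retries only on 429; any other status is returned immediately.
    """
    for delay in backoff_delays(max_attempts, base=base):
        status = send()
        if status != 429:
            return status
        time.sleep(delay)
    return send()  # final attempt after the schedule is exhausted
```

Swapping in jittered backoff is straightforward; the point is only that the pause must come from the client, not the response headers.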
 ### Admin Endpoints (`/admin/*`)
@@ -198,10 +200,15 @@ No auth required. Returns the currently supported DeepSeek native model list.
   "object": "list",
   "data": [
     {"id": "deepseek-v4-flash", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
     {"id": "deepseek-v4-flash-nothinking", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
     {"id": "deepseek-v4-pro", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
     {"id": "deepseek-v4-pro-nothinking", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
     {"id": "deepseek-v4-flash-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
     {"id": "deepseek-v4-flash-search-nothinking", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
     {"id": "deepseek-v4-pro-search", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
-    {"id": "deepseek-v4-vision", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []}
+    {"id": "deepseek-v4-pro-search-nothinking", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
+    {"id": "deepseek-v4-vision", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []},
+    {"id": "deepseek-v4-vision-nothinking", "object": "model", "created": 1677610602, "owned_by": "deepseek", "permission": []}
   ]
 }
 ```
@@ -300,7 +307,7 @@ data: [DONE]
 - When thinking is enabled, the stream may emit `delta.reasoning_content`
 - Text emits `delta.content`
 - Last chunk includes `finish_reason` and `usage`
-- Token counting prefers pass-through from upstream DeepSeek SSE (`accumulated_token_usage` / `token_usage`), and only falls back to local estimation when upstream usage is absent
+- Token counting prefers pass-through from upstream DeepSeek SSE (`accumulated_token_usage` / `token_usage`), and only falls back to local estimation when upstream usage is absent. Failed/interrupted endings (for example `response.failed`) may not include `usage`
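The usage-precedence rule above can be sketched as follows. This is an illustrative assumption of the flow, not DS2API's actual code: prefer the upstream counters when any SSE event carried them, and fall back to a deliberately crude local estimate otherwise (for example after a `response.failed` ending without `usage`):

```python
# Prefer upstream usage counters from DeepSeek SSE events; fall back to a
# rough local estimate (~4 characters per token, an arbitrary assumption)
# only when no event carried usage.

def resolve_usage(sse_events, prompt_text, completion_text):
    for event in reversed(sse_events):  # latest event wins
        usage = event.get("accumulated_token_usage") or event.get("token_usage")
        if usage:
            return dict(usage)  # pass through upstream counts unchanged
    return {
        "prompt_tokens": max(1, len(prompt_text) // 4),
        "completion_tokens": max(1, len(completion_text) // 4),
    }
```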
 #### Tool Calls
@@ -416,7 +423,7 @@ Business auth required. Returns OpenAI-compatible embeddings shape.
 | `model` | string | ✅ | Supports native models + alias mapping |
 | `input` | string/array | ✅ | Supports string, string array, token array |

-> Requires `embeddings.provider`. Current supported values: `mock` / `deterministic` / `builtin`. If missing/unsupported, returns standard error shape with HTTP 501.
+> Requires `embeddings.provider`. Current supported values: `mock` / `deterministic` / `builtin` (all three use the same local deterministic implementation). If missing/unsupported, returns standard error shape with HTTP 501.

 ### `POST /v1/files`
@@ -430,7 +437,7 @@ Business auth required. OpenAI Files-compatible upload endpoint; currently only
 Constraints and behavior:

 - `Content-Type` must be `multipart/form-data` (otherwise `400`).
-- Total request size limit is `100 MiB` (over-limit returns `413`).
+- Total request size limit is **100 MiB** (over-limit returns `413`).
 - Success returns an OpenAI `file` object (`id/object/bytes/filename/purpose/status`, etc.) and includes `account_id` for source-account tracing.
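The two documented checks (content type and the 100 MiB cap) can be mirrored client-side before sending anything. A minimal sketch; the helper name and the `(ok, status)` return shape are assumptions for illustration, and this does not perform a real upload:

```python
# Pre-flight validation mirroring the documented /v1/files constraints:
# multipart/form-data only, total request size capped at 100 MiB (413 when
# exceeded). Validates locally; does not talk to the server.
MAX_UPLOAD_BYTES = 100 * 1024 * 1024  # 100 MiB

def validate_upload(content_type: str, body_size: int):
    """Return (ok, expected_status) matching the documented server behavior."""
    if not content_type.startswith("multipart/form-data"):
        return (False, 400)
    if body_size > MAX_UPLOAD_BYTES:
        return (False, 413)
    return (True, 200)
```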
 ---
@@ -484,6 +491,13 @@ anthropic-version: 2023-06-01
 | `stream` | boolean | ❌ | Default `false` |
 | `system` | string | ❌ | Optional system prompt |
 | `tools` | array | ❌ | Claude tool schema |
+| `thinking` | object | ❌ | Anthropic thinking config; translated into downstream reasoning control, and ignored by `-nothinking` models |
+| `temperature` | number | ❌ | Passed through to the downstream bridge; if `temperature` and `top_p` are both present, `temperature` wins |
+| `top_p` | number | ❌ | Passed through when `temperature` is absent |
+| `stop_sequences` | array | ❌ | Passed through as downstream stop sequences |
+| `tool_choice` | string/object | ❌ | Supports `auto` / `none` / `required` / `{"type":"function","name":"..."}` and is translated to downstream tool choice |
+
+> Note: `thinking`, `temperature`, `top_p`, `stop_sequences`, and `tool_choice` are translated through the compatibility bridge. Final behavior still depends on the selected model and upstream support. When both `temperature` and `top_p` are present, `temperature` takes precedence.
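The documented precedence (`temperature` wins over `top_p`; `top_p` passes through only when `temperature` is absent) can be sketched as a tiny selection function. The output dictionary shape is an illustrative assumption, not the real bridge payload:

```python
# Sampling-parameter selection per the documented bridge rule:
# temperature takes precedence; top_p is forwarded only without temperature.

def bridge_sampling(params: dict) -> dict:
    out = {}
    if "temperature" in params:
        out["temperature"] = params["temperature"]
    elif "top_p" in params:
        out["top_p"] = params["top_p"]
    if "stop_sequences" in params:
        out["stop"] = list(params["stop_sequences"])  # pass-through stop list
    return out
```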
 #### Non-Stream Response
@@ -1212,7 +1226,7 @@ Clients should handle HTTP status code plus `error` / `detail` fields.
 | Code | Meaning |
 | --- | --- |
 | `401` | Authentication failed (invalid key/token, or expired admin JWT) |
-| `429` | Too many requests (exceeded inflight + queue capacity) |
+| `429` | Too many requests (exceeded inflight + queue capacity; current responses do not include `Retry-After`) |
 | `503` | Model unavailable or upstream error |
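Since clients are expected to handle the HTTP status plus the `error` / `detail` fields, a small parsing sketch may help; the exact body shapes are assumptions based on the "standard error shape" mentioned above:

```python
# Turn an HTTP status plus a (possibly malformed) JSON error body into a
# readable message, checking `error` first and then `detail`.
import json

def describe_error(status: int, body: str) -> str:
    try:
        payload = json.loads(body)
    except (ValueError, TypeError):
        payload = {}
    if not isinstance(payload, dict):
        payload = {}
    detail = payload.get("error") or payload.get("detail") or "unknown error"
    if isinstance(detail, dict):  # nested {"error": {"message": ...}} shapes
        detail = detail.get("message", str(detail))
    return f"HTTP {status}: {detail}"
```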
 ---
API.md (19 lines changed)
@@ -33,6 +33,8 @@
 | Health probes | `GET /healthz`, `GET /readyz` |
 | CORS | Enabled (uniformly covers `/v1/*`, `/anthropic/*`, `/v1beta/models/*`, and `/admin/*`; echoes the browser `Origin` when present, otherwise `*`; the default allow-list includes `Content-Type`, `Authorization`, `X-API-Key`, `X-Ds2-Target-Account`, `X-Ds2-Source`, `X-Vercel-Protection-Bypass`, `X-Goog-Api-Key`, `Anthropic-Version`, `Anthropic-Beta`, and third-party preflight-requested headers such as `x-stainless-*` are also allowed; `/v1/chat/completions` on the Vercel Node Runtime matches the same behavior; the internal-only `X-Ds2-Internal-Token` header remains blocked) |
+
+- All JSON request bodies must be valid UTF-8; malformed byte sequences are rejected on ingress with `400 invalid json`.

 ### 3.0 Adapter-Layer Notes

 - The OpenAI / Claude / Gemini protocols are mounted on one shared `chi` router tree, assembled in `internal/server/router.go`.
@@ -81,7 +83,7 @@ For Vercel one-click deploys you can fill in only `DS2API_ADMIN_KEY` first, then after deployment use the `/admin`
 - Token in `config.keys` → **Managed account mode**: an account is auto-selected via rotation
 - Token not in `config.keys` → **Direct token mode**: used directly as a DeepSeek token

-**Optional header**: `X-Ds2-Target-Account: <email_or_mobile>` — pin a specific managed account.
+**Optional header**: `X-Ds2-Target-Account: <email_or_mobile>` — pin a specific managed account; if the target account does not exist, or the managed-account queue is exhausted, the related business request returns `429`, and current responses do not carry a `Retry-After` header. If the account exists but login/refresh fails, the corresponding `401` or upstream error is returned.

 Gemini-compatible clients can also use `x-goog-api-key`, `?key=`, or `?api_key=` as the credential source.

 ### Admin Endpoints (`/admin/*`)
@@ -307,7 +309,7 @@ data: [DONE]
 - When thinking is enabled, the stream emits `delta.reasoning_content`
 - Plain text emits `delta.content`
 - The last chunk includes `finish_reason` and `usage`
-- Token counting prefers pass-through from upstream DeepSeek SSE (e.g. `accumulated_token_usage` / `token_usage`); local estimation is used only when upstream usage is absent
+- Token counting prefers pass-through from upstream DeepSeek SSE (e.g. `accumulated_token_usage` / `token_usage`); local estimation is used only when upstream usage is absent. Failed or interrupted endings (for example `response.failed`) may not carry `usage`

 #### Tool Calls
@@ -424,7 +426,7 @@ data: [DONE]
 | `model` | string | ✅ | Supports native models + automatic alias mapping |
 | `input` | string/array | ✅ | Supports string, string array, token array |

-> Requires `embeddings.provider`. Currently supported: `mock` / `deterministic` / `builtin`. If unset or unsupported, returns the standard error structure (HTTP 501).
+> Requires `embeddings.provider`. Currently supported: `mock` / `deterministic` / `builtin` (all three use the same local deterministic implementation). If unset or unsupported, returns the standard error structure (HTTP 501).

 ### `POST /v1/files`
@@ -438,7 +440,7 @@ data: [DONE]
 Constraints and behavior:

 - The request must be `multipart/form-data`, otherwise `400` is returned.
-- Total request body size limit is `100 MiB` (over-limit returns `413`).
+- Total request body size limit is **100 MiB** (over-limit returns `413`).
 - Success returns an OpenAI `file` object (fields like `id/object/bytes/filename/purpose/status`), plus `account_id` for locating the source account.

 ---
@@ -495,6 +497,13 @@ anthropic-version: 2023-06-01
 | `stream` | boolean | ❌ | Default `false` |
 | `system` | string | ❌ | Optional system prompt |
 | `tools` | array | ❌ | Claude tool definitions |
+| `thinking` | object | ❌ | Anthropic thinking config; translated into downstream reasoning control and ignored by `-nothinking` models |
+| `temperature` | number | ❌ | Passed through downstream; if `top_p` is also provided, `temperature` wins |
+| `top_p` | number | ❌ | Passed through downstream when `temperature` is absent |
+| `stop_sequences` | array | ❌ | Passed through as downstream stop sequences |
+| `tool_choice` | string/object | ❌ | Supports `auto` / `none` / `required` / `{"type":"function","name":"..."}` and is translated to the downstream tool choice |
+
+> Note: `thinking`, `temperature`, `top_p`, `stop_sequences`, and `tool_choice` all go through the compatibility-layer translation; whether they take effect still depends on the current model and upstream capability. When `temperature` and `top_p` are both present, `temperature` takes precedence.

 #### Non-Stream Response
@@ -1226,7 +1235,7 @@ Gemini routes use Google-style error structures:
 | Code | Meaning |
 | --- | --- |
 | `401` | Authentication failed (invalid key/token, or expired Admin JWT) |
-| `429` | Too many requests (exceeded the concurrency cap + wait queue) |
+| `429` | Too many requests (exceeded the concurrency cap + wait queue; current responses do not carry a `Retry-After` header) |
 | `503` | Model unavailable or upstream error |

 ---
@@ -29,7 +29,7 @@ WORKDIR /app
 RUN apt-get update \
     && apt-get install -y --no-install-recommends ca-certificates \
     && groupadd -r ds2api && useradd -r -g ds2api -d /app -s /sbin/nologin ds2api \
-    && mkdir -p /app/data && chown -R ds2api:ds2api /app \
+    && mkdir -p /app/data /data && chown -R ds2api:ds2api /app /data \
     && rm -rf /var/lib/apt/lists/*
 COPY --from=busybox-tools /bin/busybox /usr/local/bin/busybox
 EXPOSE 5001
README.MD (18 lines changed)
@@ -17,7 +17,7 @@
 语言 / Language: [中文](README.MD) | [English](README.en.md)

-Converts DeepSeek Web chat capability into OpenAI-, Claude-, and Gemini-compatible APIs. The backend is a **full Go implementation**, with a React WebUI admin console (source in `webui/`, auto-built to `static/admin` at deploy time).
+Converts DeepSeek Web chat capability into OpenAI-, Claude-, and Gemini-compatible APIs. The core backend is implemented in **Go**, with a small amount of Node Runtime for the Vercel streaming bridge, plus a React WebUI admin console (source in `webui/`, auto-built to `static/admin` at deploy time).

 Docs entry: [Docs Index](docs/README.md) / [Architecture](docs/ARCHITECTURE.md) / [API Reference](API.md)
@@ -247,6 +247,7 @@ docker-compose logs -f
 The default `docker-compose.yml` maps host port `6011` to container port `5001`. To expose `5001` directly, set `DS2API_HOST_PORT=5001` (or adjust the `ports` mapping manually).
 It also mounts `./config.json` to `/data/config.json` in the container by default and sets `DS2API_CONFIG_PATH=/data/config.json`, avoiding runtime token persistence failures caused by a read-only `/app`.
+The image pre-creates `/data` and grants it to the non-root `ds2api` user; if you bind-mount a single file, make sure the host `config.json` is readable and writable by the container user, for example `chmod 644 config.json`.

 Update the image: `docker-compose up -d --build`
@@ -284,7 +285,7 @@ base64 < config.json | tr -d '\n'
 ### Option 4: Run from Source

-**Prerequisites**: Go 1.26+, Node.js `20.19+` or `22.12+` (only when building the WebUI)
+**Prerequisites**: Go 1.26+, Node.js `20.19+` or `22.12+` (only when building the WebUI); also make sure `npm` is available, `npm 10+` recommended

 ```bash
 # 1. Clone the repository
@@ -303,7 +304,7 @@ go run ./cmd/ds2api
 The service binds to `0.0.0.0:5001`, so devices on the same LAN can usually reach it via your internal IP.

-> **WebUI auto-build**: on first local start, if `static/admin` does not exist, the server automatically tries `npm ci` (only when dependencies are missing) and `npm run build -- --outDir static/admin --emptyOutDir` (requires Node.js on the machine). You can also build manually: `./scripts/build-webui.sh`
+> **WebUI auto-build**: on first local start, if `static/admin` does not exist, the server automatically tries `npm ci` (only when dependencies are missing) and `npm run build -- --outDir static/admin --emptyOutDir` (requires Node.js and npm on the machine). You can also build manually: `./scripts/build-webui.sh`

 ## Configuration
@@ -317,7 +318,7 @@ go run ./cmd/ds2api
 - `runtime`: account concurrency, queueing, and token refresh strategy; hot-reloadable via Admin Settings.
 - `auto_delete.mode`: remote session cleanup strategy after each request, supporting `none` / `single` / `all`.
 - `history_split`: legacy turn-split field, deprecated and ignored; kept only for backward-compatible configs.
-- `current_input_file`: the only active split strategy; enabled by default with a threshold of `0`; when triggered, the full context is merged and uploaded as a `history.txt` context file.
+- `current_input_file`: the only active split strategy; enabled by default with a threshold of `0`; when triggered, the full context is merged and uploaded as a `DS2API_HISTORY.txt` context file.
 - If `current_input_file` is disabled, requests pass through directly without uploading a split context file.
 - `thinking_injection`: enabled by default; appends a thinking-enhancement prompt to the end of the latest user message to improve thinking stability for heavy reasoning and pre-tool-call steps; when `prompt` is empty, the built-in default prompt is used.
@@ -333,6 +334,7 @@ go run ./cmd/ds2api
 | **Direct token mode** | When the incoming token is not in `config.keys`, it is used directly as a DeepSeek token |

 Optional header `X-Ds2-Target-Account`: pin a specific managed account (value is an email or mobile number).
+If the specified account does not exist, or the current managed-account queue is full, the request returns `429`; current `429` responses do not carry a `Retry-After` header. If the account exists but login/refresh fails, the corresponding auth error is returned.
 Gemini routes can also use `x-goog-api-key`, or `?key=` / `?api_key=` when no auth header is present, as the caller credential.

 ## Concurrency Model
@@ -345,7 +347,7 @@ Gemini routes can also use `x-goog-api-key`, or when no auth header is present, `
 ```

 - When in-flight slots are full, requests enter the wait queue and are **not immediately rejected with 429**
-- `429 Too Many Requests` is returned only after the total capacity cap is exceeded
+- `429 Too Many Requests` is returned only after the total capacity cap is exceeded; current responses do not carry `Retry-After`
 - `GET /admin/queue/status` returns real-time concurrency status
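The admission model described above (in-flight slots, then a bounded wait queue, then `429`) can be sketched as a counter against total capacity. This is an illustrative simplification; the real limits come from the `runtime` config, and the class and method names here are hypothetical:

```python
# Admission gate sketch: requests are admitted (possibly after queueing)
# while in-flight + queued work is below total capacity; only beyond that
# does the gate answer 429.

class AdmissionGate:
    def __init__(self, inflight_slots: int, queue_slots: int):
        self.capacity = inflight_slots + queue_slots
        self.active = 0  # requests either executing or waiting in the queue

    def try_admit(self) -> int:
        """Return 200 if the request is admitted, 429 if capacity is exhausted."""
        if self.active >= self.capacity:
            return 429
        self.active += 1
        return 200

    def release(self) -> None:
        """Call when a request finishes, freeing one slot."""
        self.active -= 1
```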
 ## Tool Call Adaptation
@@ -423,10 +425,10 @@ npm run build --prefix webui
 Workflow file: `.github/workflows/release-artifacts.yml`

-- **Trigger**: only on GitHub Release `published` (normal pushes do not trigger)
-- **Outputs**: multi-platform binary archives (`linux/amd64`, `linux/arm64`, `linux/armv7`, `darwin/amd64`, `darwin/arm64`, `windows/amd64`, `windows/arm64`) + `sha256sums.txt`
+- **Trigger**: by default only on GitHub Release `published`; it can also be run manually from the Actions page via `workflow_dispatch`, filling in `release_tag` to rerun or backfill
+- **Outputs**: multi-platform binary archives (`linux/amd64`, `linux/arm64`, `linux/armv7`, `darwin/amd64`, `darwin/arm64`, `windows/amd64`, `windows/arm64`), Linux Docker image export tarballs + `sha256sums.txt`
 - **Container publishing**: pushed to GHCR only (`ghcr.io/cjackhwang/ds2api`)
-- **Each archive includes**: the `ds2api` executable, `static/admin`, WASM files (with built-in fallback support), a `config.example.json` config sample, README, LICENSE
+- **Each binary archive includes**: the `ds2api` executable, `static/admin`, `config.example.json`, `.env.example`, `README.MD`, `README.en.md`, `LICENSE`

 ## Disclaimer
README.en.md (10 lines changed)
@@ -16,7 +16,7 @@
 Language: [中文](README.MD) | [English](README.en.md)

-DS2API converts DeepSeek Web chat capability into OpenAI-compatible, Claude-compatible, and Gemini-compatible APIs. The backend is a **pure Go implementation**, with a React WebUI admin panel (source in `webui/`, build output auto-generated to `static/admin` during deployment).
+DS2API converts DeepSeek Web chat capability into OpenAI-compatible, Claude-compatible, and Gemini-compatible APIs. The core backend is Go-based, with a small Node Runtime bridge used for Vercel streaming, and the React WebUI admin panel lives in `webui/` (build output auto-generated to `static/admin` during deployment).

 Documentation entry: [Docs Index](docs/README.md) / [Architecture](docs/ARCHITECTURE.en.md) / [API Reference](API.en.md)
@@ -306,7 +306,7 @@ Common fields:
 - `runtime`: account concurrency, queueing, and token refresh behavior, hot-reloadable via Admin Settings.
 - `auto_delete.mode`: remote session cleanup after each request, supporting `none` / `single` / `all`.
 - `history_split`: legacy multi-turn history split field, now ignored and kept only for backward-compatible config loading.
-- `current_input_file`: the only active split mode; it is enabled by default and uploads the full context as a `history.txt` context file once the character threshold is reached.
+- `current_input_file`: the only active split mode; it is enabled by default and uploads the full context as a `DS2API_HISTORY.txt` context file once the character threshold is reached.
 - If you turn off `current_input_file`, requests pass through directly without uploading any split context file.

 For the full environment variable list, see [docs/DEPLOY.en.md](docs/DEPLOY.en.md). For auth behavior, see [API.en.md](API.en.md#authentication).
@@ -409,10 +409,10 @@ npm run build --prefix webui
 Workflow: `.github/workflows/release-artifacts.yml`

-- **Trigger**: only on GitHub Release `published` (normal pushes do not trigger builds)
-- **Outputs**: multi-platform archives (`linux/amd64`, `linux/arm64`, `linux/armv7`, `darwin/amd64`, `darwin/arm64`, `windows/amd64`, `windows/arm64`) + `sha256sums.txt`
+- **Trigger**: by default only on GitHub Release `published`; you can also run it manually via `workflow_dispatch` and pass `release_tag` to rerun / backfill
+- **Outputs**: multi-platform binary archives (`linux/amd64`, `linux/arm64`, `linux/armv7`, `darwin/amd64`, `darwin/arm64`, `windows/amd64`, `windows/arm64`), Linux Docker image export tarballs, and `sha256sums.txt`
 - **Container publishing**: GHCR only (`ghcr.io/cjackhwang/ds2api`)
-- **Each archive includes**: `ds2api` executable, `static/admin`, WASM file (with embedded fallback support), `config.example.json`-based config template, README, LICENSE
+- **Each binary archive includes**: the `ds2api` executable, `static/admin`, `config.example.json`, `.env.example`, `README.MD`, `README.en.md`, and `LICENSE`

 ## Disclaimer
@@ -36,7 +36,7 @@ go run ./cmd/ds2api
 cd webui

 # 2. Install dependencies
-npm install
+npm ci

 # 3. Start dev server (hot reload)
 npm run dev
@@ -36,7 +36,7 @@ go run ./cmd/ds2api
 cd webui

 # 2. Install dependencies
-npm install
+npm ci

 # 3. Start the dev server (hot reload)
 npm run dev
@@ -64,8 +64,8 @@ Use `config.json` as the single source of truth:
 Built-in GitHub Actions workflow: `.github/workflows/release-artifacts.yml`

-- **Trigger**: only on Release `published` (no build on normal push)
-- **Outputs**: multi-platform binary archives + `sha256sums.txt`
+- **Trigger**: by default only on Release `published`; you can also run it manually via `workflow_dispatch` and pass `release_tag` to rerun / backfill
+- **Outputs**: multi-platform binary archives, Linux Docker image export tarballs, and `sha256sums.txt`
 - **Container publishing**: GHCR only (`ghcr.io/cjackhwang/ds2api`)

 | Platform | Architecture | Format |
@@ -131,6 +131,7 @@ docker-compose logs -f
 The default `docker-compose.yml` directly uses `ghcr.io/cjackhwang/ds2api:latest` and maps host port `6011` to container port `5001`. If you want `5001` exposed directly, set `DS2API_HOST_PORT=5001` (or adjust the `ports` mapping).
 The compose template also defaults to `DS2API_CONFIG_PATH=/data/config.json` with `./config.json:/data/config.json` mounted, so deployments avoid read-only `/app` persistence issues by default.
 The image pre-creates `/data` and grants it to the non-root `ds2api` user. If you bind-mount a single host file, make sure `config.json` is readable/writable by the container user, for example with `chmod 644 config.json`; otherwise Linux UID/GID mismatches can still cause `open /data/config.json: permission denied`.
+Compatibility note: when `DS2API_CONFIG_PATH` is unset and runtime base dir is `/app`, newer versions prefer `/data/config.json`; if that file is missing but legacy `/app/config.json` exists, DS2API automatically falls back to the legacy path to avoid post-upgrade config loss.
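The config-path compatibility rule described above (explicit env var wins; under `/app` prefer `/data/config.json` but fall back to a legacy `/app/config.json` when the preferred file is absent) can be sketched as a small resolver. This is an illustrative reconstruction, not DS2API's Go code; the `exists` callback is injected so the logic is testable without touching the filesystem:

```python
# Resolve the effective config path per the documented compatibility rule.
import os

def resolve_config_path(env: dict, base_dir: str, exists) -> str:
    explicit = env.get("DS2API_CONFIG_PATH")
    if explicit:
        return explicit  # an explicit setting always wins
    if base_dir == "/app":
        preferred = "/data/config.json"
        legacy = "/app/config.json"
        # fall back only when the preferred file is missing but legacy exists
        if not exists(preferred) and exists(legacy):
            return legacy
        return preferred
    return os.path.join(base_dir, "config.json")
```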
 If you want a pinned version instead of `latest`, you can also pull a specific tag directly:
@@ -270,6 +271,7 @@ VERCEL_TEAM_ID=team_xxxxxxxxxxxx # optional for personal accounts
 | `VERCEL_TOKEN` | Vercel sync token | — |
 | `VERCEL_PROJECT_ID` | Vercel project ID | — |
 | `VERCEL_TEAM_ID` | Vercel team ID | — |
+| `DS2API_CHAT_HISTORY_PATH` | Chat history storage path (must be set to `/tmp/chat_history.json` on Vercel, otherwise unavailable due to read-only filesystem) | `data/chat_history.json` |
 | `DS2API_VERCEL_PROTECTION_BYPASS` | Deployment protection bypass for internal Node→Go calls | — |

 ### 3.4 Vercel Architecture
@@ -359,6 +361,22 @@ If API responses return Vercel HTML `Authentication Required`:
 - **Option B**: Add `x-vercel-protection-bypass` header to requests
 - **Option C**: Set `VERCEL_AUTOMATION_BYPASS_SECRET` (or `DS2API_VERCEL_PROTECTION_BYPASS`) for internal Node→Go calls

+#### Chat History Unavailable (read-only file system)
+
+```text
+create chat history dir: mkdir /var/task/data: read-only file system
+```
+
+**Cause**: Vercel Serverless functions have a read-only filesystem (`/var/task`). Chat history fails because it cannot create directories there.
+
+**Fix**: Add the following in Vercel Project Settings → Environment Variables:
+
+```text
+DS2API_CHAT_HISTORY_PATH=/tmp/chat_history.json
+```
+
+`/tmp` is the only writable directory in Vercel Serverless. Data is ephemeral (not persisted across cold starts), but the feature works within a single instance lifetime.
+
 ### 3.6 Build Artifacts Not Committed

 - `static/admin` directory is not in Git
@@ -401,7 +419,7 @@ Or step by step:
 ```bash
 cd webui
-npm install
+npm ci
 npm run build
 # Output goes to static/admin/
 ```
@@ -64,8 +64,8 @@ cp config.example.json config.json
 The repository ships a built-in GitHub Actions workflow: `.github/workflows/release-artifacts.yml`

-- **Trigger**: only on Release `published` (normal pushes do not build)
-- **Outputs**: multi-platform binary archives + `sha256sums.txt`
+- **Trigger**: by default only on Release `published`; it can also be run manually from the Actions page via `workflow_dispatch`, filling in `release_tag` to rerun or backfill
+- **Outputs**: multi-platform binary archives, Linux Docker image export tarballs + `sha256sums.txt`
 - **Container publishing**: GHCR only (`ghcr.io/cjackhwang/ds2api`)

 | Platform | Architecture | Format |
@@ -131,6 +131,7 @@ docker-compose logs -f
 The default `docker-compose.yml` uses `ghcr.io/cjackhwang/ds2api:latest` directly and maps host port `6011` to container port `5001`. To expose `5001` directly, set `DS2API_HOST_PORT=5001` (or adjust the `ports` mapping manually).
 The compose template also defaults to `DS2API_CONFIG_PATH=/data/config.json` and mounts `./config.json:/data/config.json`, avoiding config persistence issues caused by a read-only `/app`.
 The image pre-creates `/data` and grants it to the non-root `ds2api` user; if you bind-mount a single file, make sure the host `config.json` is at least readable/writable by the container user, for example `chmod 644 config.json`, otherwise a Linux UID/GID mismatch can still cause `open /data/config.json: permission denied`.
+Compatibility note: if `DS2API_CONFIG_PATH` is unset and the runtime directory is `/app`, newer versions prefer `/data/config.json`; when that file does not exist but a legacy `/app/config.json` is detected, the legacy path is read as a fallback to avoid "lost config" after upgrades.

 To pin a version, you can also pull a specific tag directly:
@@ -270,6 +271,7 @@ VERCEL_TEAM_ID=team_xxxxxxxxxxxx # optional for personal accounts
 | `VERCEL_TOKEN` | Vercel sync token | — |
 | `VERCEL_PROJECT_ID` | Vercel project ID | — |
 | `VERCEL_TEAM_ID` | Vercel team ID | — |
+| `DS2API_CHAT_HISTORY_PATH` | Chat history storage path (must be `/tmp/chat_history.json` on Vercel, otherwise unavailable because the filesystem is read-only) | `data/chat_history.json` |
 | `DS2API_VERCEL_PROTECTION_BYPASS` | Deployment protection bypass secret (internal Node→Go calls) | — |

 ### 3.3 Runtime Behavior Configuration (set via the Admin API)
@@ -369,6 +371,22 @@ No Output Directory named "public" found after the Build completed.
 - **Option B**: add the `x-vercel-protection-bypass` header to the request
 - **Option C**: set `VERCEL_AUTOMATION_BYPASS_SECRET` (or `DS2API_VERCEL_PROTECTION_BYPASS`); affects internal Node→Go calls only

+#### Chat History Unavailable (read-only file system)
+
+```text
+create chat history dir: mkdir /var/task/data: read-only file system
+```
+
+**Cause**: the Vercel Serverless filesystem (`/var/task`) is read-only, so chat history fails when it tries to create a directory there.
+
+**Fix**: add the following in Vercel Project Settings → Environment Variables:
+
+```text
+DS2API_CHAT_HISTORY_PATH=/tmp/chat_history.json
+```
+
+`/tmp` is the only writable directory in the Vercel Serverless environment. Data is not persisted across cold starts (ephemeral), but the feature works within a single instance lifetime.
+
 ### 3.6 No Build Artifacts Committed to the Repository

 - The `static/admin` directory is not in Git
@@ -411,7 +429,7 @@ go run ./cmd/ds2api
 ```bash
 cd webui
-npm install
+npm ci
 npm run build
 # Output goes to static/admin/
 ```
@@ -22,7 +22,7 @@
 ### Documentation maintenance conventions

-- Documentation updates must be grounded in the actual implementation: root routing assembly is in `internal/server/router.go`, protocol/resource routes in `internal/httpapi/*/**/routes.go` and `internal/httpapi/admin/handler.go`, config defaults in `internal/config/*`, models/aliases in `internal/config/models.go`, and the prompt compatibility pipeline in the code entrypoints listed by `docs/prompt-compatibility.md`.
+- Documentation updates must be grounded in the actual implementation: root routing assembly is in `internal/server/router.go`, protocol/resource routes in `internal/httpapi/**/handler*.go` and `internal/httpapi/admin/handler.go`, config defaults in `internal/config/*`, models/aliases in `internal/config/models.go`, and the prompt compatibility pipeline in the code entrypoints listed by `docs/prompt-compatibility.md`.
 - `README.MD` / `README.en.md`: for first-time users; keep "what it is + how to run it quickly".
 - `docs/ARCHITECTURE*.md`: for developers; central home for project structure, module responsibilities, and call chains.
 - `API*.md`: for client integrators; focused on endpoint behavior, auth, and examples.
@@ -53,7 +53,7 @@ Recommended reading order:
 ### Maintenance conventions

-- Documentation updates must be grounded in the actual implementation: root routing lives in `internal/server/router.go`, protocol/resource routes live in `internal/httpapi/*/**/routes.go` and `internal/httpapi/admin/handler.go`, config defaults in `internal/config/*`, models/aliases in `internal/config/models.go`, and the prompt compatibility pipeline in the code entrypoints listed by `docs/prompt-compatibility.md`.
+- Documentation updates must be grounded in the actual implementation: root routing lives in `internal/server/router.go`, protocol/resource routes live in `internal/httpapi/**/handler*.go` and `internal/httpapi/admin/handler.go`, config defaults in `internal/config/*`, models/aliases in `internal/config/models.go`, and the prompt compatibility pipeline in the code entrypoints listed by `docs/prompt-compatibility.md`.
 - `README.MD` / `README.en.md`: onboarding-oriented (“what + quick start”).
 - `docs/ARCHITECTURE*.md`: developer-oriented source of truth for module boundaries and execution flow.
 - `API*.md`: integration-oriented behavior/contracts.
@@ -60,11 +60,10 @@ npm run build --prefix webui
 ./tests/scripts/check-refactor-line-gate.sh
 ./tests/scripts/check-node-split-syntax.sh
 ./tests/scripts/check-cross-build.sh
-
-# Historical stage gate: stage 6 manual smoke sign-off check (reads plans/stage6-manual-smoke.md by default)
-./tests/scripts/check-stage6-manual-smoke.sh
 ```

+Note: `plans/stage6-manual-smoke.md` has been removed; the stage 6 manual smoke test is no longer a current CI or release gate.
+
 ### End-to-End Tests

 ```bash
@@ -117,6 +117,11 @@ After normalization and before current input file, OpenAI Chat / Responses will by default
 - Normal requests appear directly at the end of the latest user block in the final `prompt`.
 - If current input file is triggered, it goes into the full-context file.

+In addition, `MessagesPrepareWithThinking` prepends a fixed system-level "output integrity guard" at the very front of the final prompt:
+
+- If the upstream context, tool output, or parsed text contains garbled, corrupted, partially parsed, duplicated, or otherwise malformed fragments, do not imitate or echo them; output only the correct content for the user.
+- This guard sits before the normal system / tool prompts, so it is the highest-priority leading instruction in the current final prompt.
+
 ### 5.1 Role Markers

 The final prompt uses DeepSeek-style role markers:
@@ -155,7 +160,8 @@ OpenAI Chat / Responses 在标准化后、current input file 之前,会默认
|
||||
|
||||
工具调用正例现在优先示范官方 DSML 风格:`<|DSML|tool_calls>` → `<|DSML|invoke name="...">` → `<|DSML|parameter name="...">`。
|
||||
The compatibility layer still accepts the legacy plain `<tool_calls>` wrapper, but the prompt asks the model to prefer the official DSML tags and stresses that it must not emit only the closing wrapper while dropping the opening tag. Note that this is "a DSML-compatible shell with XML parsing semantics inside", not a native end-to-end DSML implementation; DSML tags are normalized back to the existing XML tags at the parsing entry point and then flow through the same parser.

Array parameters are expressed with `<item>...</item>` child nodes; when a parameter body contains only item children, the Go / Node parsers restore it to an array, so parameters that a schema requires to be arrays, such as `questions` / `options`, are not misparsed into an `{ "item": ... }` object. If the model mistakenly wraps a fully structured XML fragment in CDATA, the compatibility layer tries to restore CDATA XML fragments in non-verbatim fields back into objects / arrays while protecting verbatim fields such as `content` / `command`. However, if the CDATA is just a single flat XML/HTML tag, for example inline markup like `<b>urgent</b>`, the compatibility layer keeps the raw string and does not force it into an object / array; only CDATA fragments that clearly express structure, for example multiple sibling nodes, nested children, or `item` lists, trigger structured recovery.

Array parameters are expressed with `<item>...</item>` child nodes; when a parameter body contains only item children, the Go / Node parsers restore it to an array, so parameters that a schema requires to be arrays, such as `questions` / `options`, are not misparsed into an `{ "item": ... }` object. Beyond that, the parser also recovers looser list spellings, for example JSON array literals or comma-separated sequences of JSON items, as long as they are unambiguous enough; `<item>` remains the preferred form. If the model mistakenly wraps a fully structured XML fragment in CDATA, the compatibility layer tries to restore CDATA XML fragments in non-verbatim fields back into objects / arrays while protecting verbatim fields such as `content` / `command`. However, if the CDATA is just a single flat XML/HTML tag, for example inline markup like `<b>urgent</b>`, the compatibility layer keeps the raw string and does not force it into an object / array; only CDATA fragments that clearly express structure, for example multiple sibling nodes, nested children, or `item` lists, trigger structured recovery.

When reading DeepSeek SSE, the Go side no longer depends on `bufio.Scanner`'s fixed 2 MiB single-line cap; when a file-writing tool returns a very long `content` inside a single `data:` line, non-streaming collection, streaming parsing, and auto-continue passthrough all preserve the full line before entering the shared tool parsing and serialization flow.

At the assistant's final response stage, if a tool parameter is explicitly declared as `string` in its schema, the compatibility layer recursively converts numbers / bools / objects / arrays on that path into strings before re-serializing the parsed `tool_calls` / `function_call` into parameters visible to OpenAI / Responses / Claude; objects / arrays are compacted into JSON strings. This guard applies only to paths the schema explicitly declares as string and does not rewrite parameters that genuinely are `number` / `boolean` / `object` / `array`. It covers the case where DeepSeek emits a structured fragment but the upstream client's tool schema strictly requires a string parameter (for example `content`, `prompt`, `path`, `taskId`).

The authoritative source for a tool schema is always **the schema actually carried by the current request**, not the default impression of a same-named tool in another runtime (Claude Code / OpenCode / Codex, etc.). The compatibility layer now accepts OpenAI-style `function.parameters`, `parameters` / `input_schema` on a bare tool object, and camelCase `inputSchema` / `schema`, and at the final output stage uses this in-request schema to decide whether to keep an array/object or stringify only paths explicitly declared as `string`. The same rule applies to Claude's streaming finalization and the Vercel Node streaming tool-call formatter, preventing same-named tool parameter types from drifting across runtimes because of schema shape differences.

Tool names in positive examples come only from tools actually declared by the current request; if the current request lacks enough known tool shapes, the corresponding single-tool, multi-tool, or nested examples are omitted, so unavailable tool names never enter the prompt.
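A minimal sketch of the schema-guided stringification described above; `coerceToString` is a hypothetical helper for illustration, not the repo's actual function, and it assumes values decoded from JSON (numbers arrive as `float64`):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// coerceToString compacts a parsed argument value into the string form
// required when the declared schema type for its path is "string".
func coerceToString(v any) string {
	switch t := v.(type) {
	case string:
		return t
	case float64, bool:
		return fmt.Sprintf("%v", t)
	default:
		// Objects and arrays are compacted into JSON strings.
		b, _ := json.Marshal(t)
		return string(b)
	}
}

func main() {
	fmt.Println(coerceToString(map[string]any{"a": 1})) // {"a":1}
	fmt.Println(coerceToString(true))                   // true
}
```

In the real pipeline this conversion would run recursively and only on paths the in-request schema marks as `string`; parameters declared as other types pass through untouched.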
@@ -237,6 +243,14 @@ OpenAI file-related implementation:

- File ID collection:
  [internal/promptcompat/file_refs.go](../internal/promptcompat/file_refs.go)

OpenAI file upload is no longer a generic "upload only the file body" path: it now first resolves the DeepSeek upload type from the request's `model` and passes it through as `x-model-type` on the upload call. The currently visible upload types are `default` / `expert` / `vision`; when a vision request uploads images it must carry `vision`, otherwise the downstream easily falls back to text-only or OCR semantics. This model type is used for all of the following:

- standalone file upload entry points such as `/v1/files`
- inline image and attachment uploads in Chat / Responses
- the `DS2API_HISTORY.txt` context file generated when current input file triggers

In other words, file uploads and completion requests now agree on `model_type`: the completion payload still carries `model_type`, and the uploaded file carries the same model-type information at the DeepSeek upload stage.

Conclusions:

- the "systemprompt text" lives in the prompt
@@ -248,7 +262,7 @@ OpenAI file-related implementation:

The compatibility layer now keeps only `current_input_file` as the split mechanism; the old `history_split` is deprecated and retained only as a field for legacy-config compatibility, no longer participating in request processing.

- `current_input_file` is enabled by default; it merges the "full context" into a `history.txt` context file. When the plain-text length of the latest user turn reaches `current_input_file.min_chars` (default `0`), the compatibility layer uploads a context file named `history.txt` and keeps only a neutral user message in the live prompt asking the model to answer the latest request directly, without exposing the filename or asking the model to read a local file.
- `current_input_file` is enabled by default; it merges the "full context" into a `DS2API_HISTORY.txt` context file. When the plain-text length of the latest user turn reaches `current_input_file.min_chars` (default `0`), the compatibility layer uploads a context file named `DS2API_HISTORY.txt`. The file content is first normalized as OpenAI messages, then serialized into a turn-numbered `DS2API_HISTORY.txt`-style transcript with a `# DS2API_HISTORY.txt` title and `=== N. ROLE ===` sections; the live prompt carries a continuation-toned user message guiding the model to continue from the latest state of `DS2API_HISTORY.txt` and answer the latest request directly, instead of pulling the task back to the start.
- If `current_input_file.enabled=false`, the request is passed through directly and no split context file is uploaded.
- The old `history_split.enabled` / `history_split.trigger_after_turns` are still read into the config object for compatibility, but they no longer trigger split uploads and do not affect `current_input_file` being enabled by default.
- Even when the live prompt is shortened after `current_input_file` triggers, the context token accounting in the client response still follows the **pre-split full prompt semantics** rather than the shortened placeholder prompt; otherwise the real context would be counted significantly smaller.
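The model-to-upload-type resolution described above can be sketched as follows; `resolveUploadModelType` and the model-name suffix patterns are illustrative assumptions, not the repo's actual code (the tests only confirm that an omitted model maps to `default` and a vision model to `vision`):

```go
package main

import (
	"fmt"
	"strings"
)

// resolveUploadModelType maps a requested model name onto one of the
// visible DeepSeek upload types: default / expert / vision.
// The substring conventions below are assumptions for illustration.
func resolveUploadModelType(model string) string {
	m := strings.ToLower(strings.TrimSpace(model))
	switch {
	case strings.Contains(m, "vision"):
		return "vision" // image uploads must carry this type
	case strings.Contains(m, "pro"):
		return "expert"
	default:
		return "default" // also covers requests that omit model
	}
}

func main() {
	for _, m := range []string{"deepseek-v4-vision", "deepseek-v4-pro", ""} {
		fmt.Printf("%q -> %s\n", m, resolveUploadModelType(m))
	}
}
```

The resolved value would be sent as the `x-model-type` upload header and mirrored by the completion payload's `model_type`.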
@@ -262,11 +276,24 @@ OpenAI file-related implementation:

- Legacy history-split compatibility shell:
  [internal/httpapi/openai/history/history_split.go](../internal/httpapi/openai/history/history_split.go)

When current-input-to-file is enabled and triggered, the real filename of the upload is `history.txt` and its content is the full `messages` context; it is still serialized first with OpenAI message normalization and DeepSeek role markers, then uploaded directly as the plain-text content of `history.txt` (file boundary tags are no longer injected):

When current-input-to-file is enabled and triggered, the real filename of the upload is `DS2API_HISTORY.txt` and its content is the full `messages` context; it is still serialized first with OpenAI message normalization and DeepSeek role markers, then numbered by turn into a `DS2API_HISTORY.txt`-style transcript (file boundary tags are no longer injected):

```text
[uploaded filename]: history.txt
<|begin▁of▁sentence|><|System|>...<|User|>...<|Assistant|>...<|Tool|>...<|User|>...
[uploaded filename]: DS2API_HISTORY.txt
# DS2API_HISTORY.txt
Prior conversation history and tool progress.

=== 1. SYSTEM ===
...

=== 2. USER ===
...

=== 3. ASSISTANT ===
...

=== 4. TOOL ===
...
```

Once enabled, the request's live prompt no longer inlines the full context directly; it keeps one short user-role prompt telling the model to answer the latest request directly based on the context already provided, and the uploaded `file_id` goes into `ref_file_ids`.
@@ -318,7 +345,7 @@ OpenAI file-related implementation:

```json
{
  "prompt": "<|begin▁of▁sentence|><|System|>original system / developer\n\nYou have access to these tools: ...<|end▁of▁instructions|><|User|>The current request and prior conversation context have already been provided. Answer the latest user request directly.<|Assistant|>",
  "prompt": "<|begin▁of▁sentence|><|System|>original system / developer\n\nYou have access to these tools: ...<|end▁of▁instructions|><|User|>Continue from the latest state in the attached DS2API_HISTORY.txt context. Treat it as the current working state and answer the latest user request directly.<|Assistant|>",
  "ref_file_ids": [
    "file-current-input-ignore",
    "file-systemprompt",
@@ -333,7 +360,7 @@ OpenAI file-related implementation:

- Most structured semantics are compressed into `prompt`
- Files stay files
- When needed, the full context is split into the `history.txt` context file
- When needed, the full context is split into the `DS2API_HISTORY.txt` context file and numbered by turn into a transcript

## 12. Scenarios that require updating this document

@@ -346,7 +373,7 @@ OpenAI file-related implementation:

- tool result injection method changes
- tool prompt template or tool_choice constraint changes
- inline file upload / file reference collection rule changes
- current input file trigger conditions, upload format, or `history.txt` wrapper format changes
- current input file trigger conditions, upload format, or `DS2API_HISTORY.txt` transcript structure changes
- changes to how legacy `history_split` compatibility logic is read, ignored, or degraded
- completion payload field semantics changes
- changes to how Claude / Gemini reuse this unified semantics
@@ -26,7 +26,7 @@
</tool_calls>
```

This is not a native end-to-end DSML implementation. DSML serves only as a prompt shell and a parsing-entry alias; before entering the parser it is normalized into `<tool_calls>` / `<invoke>` / `<parameter>`, and internally the existing XML parsing semantics remain authoritative.

This is not a native end-to-end DSML implementation. DSML mainly lets the model deliberately emit a protocol marker that isolates ordinary XML semantics; before entering the parser it is normalized by fixed local tag name into `<tool_calls>` / `<invoke>` / `<parameter>`, and internally the existing XML parsing semantics remain authoritative.

Constraints:

@@ -39,7 +39,8 @@
Compatibility fixes:

- If the model omits the opening wrapper but still emits one or more invokes and ends with the closing wrapper, the Go parsing path restores the missing opening wrapper before parsing.
- If the model drops the `|` separator in a DSML tag down to a space (for example `<|DSML tool_calls>` / `<|DSML invoke>` / `<|DSML parameter>`, or the no-leading-pipe `<DSML tool_calls>` form), glues `DSML` directly onto the tool tag name (for example `<DSMLtool_calls>` / `<DSMLinvoke>` / `<DSMLparameter>`), or writes the leading pipe as a fullwidth vertical bar (for example `<|DSML|tool_calls>` / `<|DSML|invoke>` / `<|DSML|parameter>`), Go / Node normalize it within the fixed set of tool tag names; similar but non-tool tag names (such as `tool_calls_extra`) are still treated as plain text.
- The Go / Node parsing layer no longer enumerates every DSML typo. It treats `DSML`, pipe characters `|` / `|`, whitespace, and repeated leading `<` before a tool tag name as tolerable protocol noise, then matches only the fixed local tag names `tool_calls` / `invoke` / `parameter`. For example `<DSML|tool_calls>`, `<<|DSML|tool_calls>`, `<|DSML tool_calls>`, `<DSMLtool_calls>`, and `<<DSML|DSML|tool_calls>` all normalize; similar but non-fixed tag names (such as `tool_calls_extra`) are still treated as plain text.
- If the model emits one extra trailing pipe after a fixed tool tag name, for example `<|DSML|tool_calls|` / `<|DSML|invoke|` / `<|DSML|parameter|`, the compatibility layer treats that trailing `|` as an anomalous tag terminator and supplies the missing `>`; if a `>` already follows, the extra `|` is consumed before normalization.
- This is a narrow fix for common model slips and does not change the recommended output format; the prompt still asks the model to emit the full DSML shell directly.
- Bare `<invoke ...>` / `<parameter ...>` is not treated as "supported tool syntax"; only the `tool_calls` wrapper, or a repairable missing opening wrapper, enters the tool-call path.
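The noise-tolerant normalization described above can be sketched with a single regular expression over the fixed local tag names; `normalizeDSML` and its pattern are illustrative assumptions, not the repo's actual implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

// dsmlNoise matches tolerable protocol noise before a fixed local tag
// name: repeated '<', "DSML" runs, ASCII/fullwidth pipes, and whitespace.
// The tag-name alternation is closed, so lookalikes such as
// "tool_calls_extra" are left untouched by the trailing \b.
var dsmlNoise = regexp.MustCompile(`<[<|||[:space:]]*(?:DSML[|||[:space:]]*)*(/?)(tool_calls|invoke|parameter)\b`)

// normalizeDSML rewrites noisy DSML openers back to canonical XML tags.
func normalizeDSML(s string) string {
	return dsmlNoise.ReplaceAllString(s, "<$1$2")
}

func main() {
	fmt.Println(normalizeDSML(`<<|DSML|tool_calls><|DSML invoke name="x">`))
	// <tool_calls><invoke name="x">
}
```

A real implementation would also handle the trailing-pipe form (`<|DSML|tool_calls|`) and keep mentions inside code fences or CDATA out of scope; this sketch only covers the opener-noise rule.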
@@ -53,7 +54,7 @@

In the streaming path (consistent across Go / Node):

- the DSML `<|DSML|tool_calls>` wrapper, compatibility variants (`<dsml|tool_calls>`, `<|tool_calls>`, `<|tool_calls>`, `<|DSML|tool_calls>`), the narrowly tolerated space-separated forms (e.g. `<|DSML tool_calls>`), glued forms (e.g. `<DSMLtool_calls>`), and the canonical `<tool_calls>` wrapper all enter structured capture
- the DSML `<|DSML|tool_calls>` wrapper, the DSML noise-tolerant forms over the fixed local tag names, trailing-pipe forms (e.g. `<|DSML|tool_calls|`), and the canonical `<tool_calls>` wrapper all enter structured capture
- if the stream starts directly with an invoke but a closing wrapper arrives later, Go streaming sieving also attempts recovery via the missing-opening-wrapper repair path
- tool calls that are recognized successfully are not flushed back into plain text
- blocks that do not match the new format are not executed and continue to pass through as-is
@@ -61,6 +62,7 @@
- nested fences (e.g. 4 backticks nesting 3 backticks) and fences inside CDATA are protected
- if the model opens `<![CDATA[` without closing it, the streaming scan stage keeps buffering conservatively and does not mistake example XML inside the CDATA for a real tool call; at the final parse / flush recovery stage a narrow fix is applied to such loose CDATA, preserving real tool calls whose outer wrapper is already complete wherever possible
- when the text merely mentions a tag name (e.g. `<dsml|tool_calls>`, or `<|DSML|tool_calls>` in Markdown inline code) and a real tool call follows, the sieve skips the unparsable mention candidate and keeps matching subsequent real tool blocks, so mentions neither drop tool calls nor truncate the text after them
- Go-side SSE reading no longer uses `bufio.Scanner`'s fixed token limit; when a single `data:` line contains a very long write-file parameter, non-streaming collection, streaming parsing, and auto-continue passthrough all preserve the full line before handing it to the tool parser

Additionally, a `<parameter>` value that is itself a valid JSON literal is parsed as a structured value rather than always kept as a string. For example `123`, `true`, `null`, `[1,2]`, `{"a":1}` are restored to the corresponding number / boolean / null / array / object.

Structured XML parameters are also restored to JSON structure: if a parameter body contains only one or more `<item>...</item>` child nodes, an array is emitted; item-only fields inside nested objects are likewise treated as arrays. For example `<parameter name="questions"><item><question>...</question></item></parameter>` yields `{"questions":[{"question":"..."}]}`, not `{"questions":{"item":...}}`.
@@ -94,7 +96,7 @@ node --test tests/node/stream-tool-sieve.test.js

- the DSML `<|DSML|tool_calls>` wrapper parses normally
- the legacy canonical `<tool_calls>` wrapper parses normally
- alias variants (`<dsml|tool_calls>`, `<|tool_calls>`, `<|tool_calls>`), DSML space-separated typos (e.g. `<|DSML tool_calls>`), and glued typos (e.g. `<DSMLtool_calls>`) parse normally
- DSML noise-tolerant forms over the fixed local tag names (e.g. `<DSML|tool_calls>`, `<<|DSML|tool_calls>`, `<|DSML tool_calls>`, `<DSMLtool_calls>`, `<<DSML|DSML|tool_calls>`) parse normally
- mixed tags (DSML wrapper + canonical inner tags) parse normally after normalization
- examples inside tilde fences `~~~` are not executed
- examples inside nested fences (4 backticks nesting 3 backticks) are not executed
@@ -133,33 +133,51 @@ func pumpAutoContinue(ctx context.Context, pw *io.PipeWriter, initial io.ReadClo
 // sentinels are consumed (not forwarded) so that the downstream only sees
 // one final [DONE] at the very end.
 func streamBodyWithContinueState(ctx context.Context, pw *io.PipeWriter, body io.Reader, state *continueState) (bool, error) {
-	scanner := bufio.NewScanner(body)
-	scanner.Buffer(make([]byte, 0, 64*1024), 2*1024*1024)
+	reader := bufio.NewReaderSize(body, 64*1024)
 	hadDone := false
-	for scanner.Scan() {
+	for {
 		select {
 		case <-ctx.Done():
 			return hadDone, ctx.Err()
 		default:
 		}
-		line := append([]byte{}, scanner.Bytes()...)
-		trimmed := strings.TrimSpace(string(line))
-		if trimmed == "" {
-			continue
-		}
-		if strings.HasPrefix(trimmed, "data:") {
-			data := strings.TrimSpace(strings.TrimPrefix(trimmed, "data:"))
-			if data == "[DONE]" {
-				hadDone = true
-				continue
+		line, err := reader.ReadBytes('\n')
+		if len(line) == 0 && err != nil {
+			if err == io.EOF {
+				return hadDone, nil
 			}
-			state.observe(data)
+			return hadDone, err
 		}
-		if _, err := io.Copy(pw, bytes.NewReader(append(line, '\n'))); err != nil {
+		trimmed := strings.TrimSpace(string(line))
+		if trimmed != "" {
+			if strings.HasPrefix(trimmed, "data:") {
+				data := strings.TrimSpace(strings.TrimPrefix(trimmed, "data:"))
+				if data == "[DONE]" {
+					hadDone = true
+					if err != nil && err != io.EOF {
+						return hadDone, err
+					}
+					if err == io.EOF {
+						return hadDone, nil
+					}
+					continue
+				}
+				state.observe(data)
+			}
+			if !strings.HasSuffix(string(line), "\n") {
+				line = append(line, '\n')
+			}
+			if _, copyErr := io.Copy(pw, bytes.NewReader(line)); copyErr != nil {
+				return hadDone, copyErr
+			}
+		}
+		if err != nil {
+			if err == io.EOF {
+				return hadDone, nil
+			}
 			return hadDone, err
 		}
 	}
-	return hadDone, scanner.Err()
 }

 // observe extracts continue-relevant signals from an SSE JSON chunk.
@@ -175,34 +193,48 @@ func (s *continueState) observe(data string) {
 	if id := intFrom(chunk["response_message_id"]); id > 0 {
 		s.responseMessageID = id
 	}
-	// Path-based status: {"p": "response/status", "v": "FINISHED"}
-	if p, _ := chunk["p"].(string); p == "response/status" {
-		s.setStatus(asString(chunk["v"]))
-	}
+	s.observeDirectPatch(asString(chunk["p"]), chunk["v"])
+	if p, _ := chunk["p"].(string); p == "response" {
+		s.observeBatchPatches("response", chunk["v"])
+	} else {
+		s.observeBatchPatches("", chunk["v"])
+	}
-	// Nested v.response
-	v, _ := chunk["v"].(map[string]any)
-	if response, _ := v["response"].(map[string]any); response != nil {
-		if id := intFrom(response["message_id"]); id > 0 {
-			s.responseMessageID = id
-		}
-		s.setStatus(asString(response["status"]))
-		if autoContinue, ok := response["auto_continue"].(bool); ok && autoContinue {
+	if v, _ := chunk["v"].(map[string]any); v != nil {
+		s.observeResponseObject(v["response"])
+	}
+	if message, _ := chunk["message"].(map[string]any); message != nil {
+		s.observeResponseObject(message["response"])
+	}
 }

+func (s *continueState) observeDirectPatch(path string, value any) {
+	if s == nil {
+		return
+	}
+	switch strings.Trim(strings.TrimSpace(path), "/") {
+	case "response/status", "status", "response/quasi_status", "quasi_status":
+		s.setStatus(asString(value))
+	case "response/auto_continue", "auto_continue":
+		if v, ok := value.(bool); ok && v {
+			s.lastStatus = "AUTO_CONTINUE"
+		}
+	}
-	// Nested message.response
-	if message, _ := chunk["message"].(map[string]any); message != nil {
-		if response, _ := message["response"].(map[string]any); response != nil {
-			if id := intFrom(response["message_id"]); id > 0 {
-				s.responseMessageID = id
-			}
-			s.setStatus(asString(response["status"]))
-		}
-	}
+}
+
+func (s *continueState) observeResponseObject(raw any) {
+	if s == nil {
+		return
+	}
+	response, _ := raw.(map[string]any)
+	if response == nil {
+		return
+	}
+	if id := intFrom(response["message_id"]); id > 0 {
+		s.responseMessageID = id
+	}
+	s.setStatus(asString(response["status"]))
+	if autoContinue, ok := response["auto_continue"].(bool); ok && autoContinue {
+		s.lastStatus = "AUTO_CONTINUE"
+	}
+}

@@ -230,6 +262,10 @@ func (s *continueState) observeBatchPatches(parentPath string, raw any) {
 		switch strings.Trim(strings.TrimSpace(fullPath), "/") {
 		case "response/status", "status", "response/quasi_status", "quasi_status":
 			s.setStatus(asString(m["v"]))
+		case "response/auto_continue", "auto_continue":
+			if v, ok := m["v"].(bool); ok && v {
+				s.lastStatus = "AUTO_CONTINUE"
+			}
 		}
 	}
 }

@@ -150,6 +150,62 @@ func TestAutoContinueDoesNotTriggerOnPlainWIPWithoutExplicitContinuationSignal(t
|
||||
}
|
||||
}
|
||||
|
||||
func TestAutoContinuePassesThroughLongSingleSSELine(t *testing.T) {
|
||||
payload := strings.Repeat("x", 2*1024*1024+4096)
|
||||
initialBody := `data: {"p":"response/content","v":"` + payload + `"}` + "\n" +
|
||||
`data: [DONE]` + "\n"
|
||||
|
||||
body := newAutoContinueBody(context.Background(), io.NopCloser(strings.NewReader(initialBody)), "session-123", 8, func(context.Context, string, int) (*http.Response, error) {
|
||||
return nil, errors.New("continue should not have been called")
|
||||
})
|
||||
defer func() { _ = body.Close() }()
|
||||
|
||||
out, err := io.ReadAll(body)
|
||||
if err != nil {
|
||||
t.Fatalf("read body failed: %v", err)
|
||||
}
|
||||
if !bytes.Contains(out, []byte(payload)) {
|
||||
t.Fatalf("expected long SSE payload to pass through, got len=%d want payload len=%d", len(out), len(payload))
|
||||
}
|
||||
if !bytes.Contains(out, []byte(`data: [DONE]`)) {
|
||||
t.Fatalf("expected final DONE sentinel in body, got len=%d", len(out))
|
||||
}
|
||||
}
|
||||
|
||||
func TestAutoContinueTriggersOnDirectQuasiStatusIncomplete(t *testing.T) {
|
||||
initialBody := strings.Join([]string{
|
||||
`data: {"response_message_id":321,"p":"response/content","v":"<tool_calls><invoke name=\"write_file\"><parameter name=\"content\"><![CDATA[part-one"}`,
|
||||
`data: {"p":"response/quasi_status","v":"INCOMPLETE"}`,
|
||||
`data: [DONE]`,
|
||||
}, "\n") + "\n"
|
||||
|
||||
var continueCalls atomic.Int32
|
||||
body := newAutoContinueBody(context.Background(), io.NopCloser(strings.NewReader(initialBody)), "session-123", 8, func(context.Context, string, int) (*http.Response, error) {
|
||||
continueCalls.Add(1)
|
||||
return &http.Response{
|
||||
StatusCode: http.StatusOK,
|
||||
Header: make(http.Header),
|
||||
Body: io.NopCloser(strings.NewReader(
|
||||
`data: {"response_message_id":322,"p":"response/content","v":"-part-two]]></parameter></invoke></tool_calls>"}` + "\n" +
|
||||
`data: {"p":"response/status","v":"FINISHED"}` + "\n" +
|
||||
`data: [DONE]` + "\n",
|
||||
)),
|
||||
}, nil
|
||||
})
|
||||
defer func() { _ = body.Close() }()
|
||||
|
||||
out, err := io.ReadAll(body)
|
||||
if err != nil {
|
||||
t.Fatalf("read body failed: %v", err)
|
||||
}
|
||||
if continueCalls.Load() != 1 {
|
||||
t.Fatalf("expected exactly one continue call, got %d", continueCalls.Load())
|
||||
}
|
||||
if !bytes.Contains(out, []byte("part-one")) || !bytes.Contains(out, []byte("-part-two")) {
|
||||
t.Fatalf("expected continued tool content in body, got=%s", string(out))
|
||||
}
|
||||
}
|
||||
|
||||
func TestAutoContinueTriggersOnResponseBatchQuasiStatusIncomplete(t *testing.T) {
|
||||
initialBody := strings.Join([]string{
|
||||
`data: {"response_message_id":321,"v":{"response":{"message_id":321,"status":"WIP","auto_continue":false}}}`,
|
||||
|
||||
@@ -23,6 +23,7 @@ type UploadFileRequest struct {
|
||||
Filename string
|
||||
ContentType string
|
||||
Purpose string
|
||||
ModelType string
|
||||
Data []byte
|
||||
}
|
||||
|
||||
@@ -54,6 +55,7 @@ func (c *Client) UploadFile(ctx context.Context, a *auth.RequestAuth, req Upload
|
||||
contentType = "application/octet-stream"
|
||||
}
|
||||
purpose := strings.TrimSpace(req.Purpose)
|
||||
modelType := strings.ToLower(strings.TrimSpace(req.ModelType))
|
||||
body, contentTypeHeader, err := buildUploadMultipartBody(filename, contentType, req.Data)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
@@ -64,6 +66,9 @@ func (c *Client) UploadFile(ctx context.Context, a *auth.RequestAuth, req Upload
|
||||
"purpose": purpose,
|
||||
"bytes": len(req.Data),
|
||||
}
|
||||
if modelType != "" {
|
||||
capturePayload["model_type"] = modelType
|
||||
}
|
||||
captureSession := c.capture.Start("deepseek_upload_file", dsprotocol.DeepSeekUploadFileURL, a.AccountID, capturePayload)
|
||||
attempts := 0
|
||||
refreshed := false
|
||||
@@ -81,6 +86,9 @@ func (c *Client) UploadFile(ctx context.Context, a *auth.RequestAuth, req Upload
|
||||
}
|
||||
headers := c.authHeaders(a.DeepSeekToken)
|
||||
headers["Content-Type"] = contentTypeHeader
|
||||
if modelType != "" {
|
||||
headers["x-model-type"] = modelType
|
||||
}
|
||||
headers["x-ds-pow-response"] = powHeader
|
||||
headers["x-file-size"] = strconv.Itoa(len(req.Data))
|
||||
headers["x-thinking-enabled"] = "1"
|
||||
|
||||
@@ -82,6 +82,7 @@ func TestUploadFileUsesUploadTargetPowAndMultipartHeaders(t *testing.T) {
|
||||
var seenTargetPath string
|
||||
var seenContentType string
|
||||
var seenFileSize string
|
||||
var seenModelType string
|
||||
var seenBody string
|
||||
call := 0
|
||||
client := &Client{
|
||||
@@ -96,6 +97,7 @@ func TestUploadFileUsesUploadTargetPowAndMultipartHeaders(t *testing.T) {
|
||||
seenPow = req.Header.Get("x-ds-pow-response")
|
||||
seenContentType = req.Header.Get("Content-Type")
|
||||
seenFileSize = req.Header.Get("x-file-size")
|
||||
seenModelType = req.Header.Get("x-model-type")
|
||||
seenBody = string(bodyBytes)
|
||||
return &http.Response{StatusCode: http.StatusOK, Header: make(http.Header), Body: io.NopCloser(strings.NewReader(uploadResponse)), Request: req}, nil
|
||||
default:
|
||||
@@ -112,6 +114,7 @@ func TestUploadFileUsesUploadTargetPowAndMultipartHeaders(t *testing.T) {
|
||||
Filename: "demo.txt",
|
||||
ContentType: "text/plain",
|
||||
Purpose: "assistants",
|
||||
ModelType: "vision",
|
||||
Data: []byte("hello"),
|
||||
}, 1)
|
||||
if err != nil {
|
||||
@@ -140,6 +143,9 @@ func TestUploadFileUsesUploadTargetPowAndMultipartHeaders(t *testing.T) {
|
||||
if seenFileSize != "5" {
|
||||
t.Fatalf("expected x-file-size=5, got %q", seenFileSize)
|
||||
}
|
||||
if seenModelType != "vision" {
|
||||
t.Fatalf("expected x-model-type=vision, got %q", seenModelType)
|
||||
}
|
||||
if !strings.HasPrefix(seenContentType, "multipart/form-data; boundary=") {
|
||||
t.Fatalf("expected multipart content type, got %q", seenContentType)
|
||||
}
|
||||
|
||||
@@ -159,6 +159,6 @@ func toStringSet(in []string) map[string]struct{} {

 const (
 	KeepAliveTimeout = 5
-	StreamIdleTimeout = 90
-	MaxKeepaliveCount = 10
+	StreamIdleTimeout = 300
+	MaxKeepaliveCount = 40
 )

@@ -2,7 +2,7 @@
   "client": {
     "name": "DeepSeek",
     "platform": "android",
-    "version": "2.0.3",
+    "version": "2.0.4",
     "android_api_level": "35",
     "locale": "zh_CN"
   },
@@ -24,4 +24,4 @@
   "skip_exact_paths": [
     "response/search_status"
   ]
 }
}
@@ -2,20 +2,24 @@ package protocol

 import (
 	"bufio"
+	"io"
 	"net/http"
 )

 func ScanSSELines(resp *http.Response, onLine func([]byte) bool) error {
-	scanner := bufio.NewScanner(resp.Body)
-	buf := make([]byte, 0, 64*1024)
-	scanner.Buffer(buf, 2*1024*1024)
-	for scanner.Scan() {
-		if !onLine(scanner.Bytes()) {
-			break
+	reader := bufio.NewReaderSize(resp.Body, 64*1024)
+	for {
+		line, err := reader.ReadBytes('\n')
+		if len(line) > 0 {
+			if !onLine(line) {
+				return nil
+			}
+		}
+		if err != nil {
+			if err == io.EOF {
+				return nil
+			}
+			return err
 		}
 	}
-	if err := scanner.Err(); err != nil {
-		return err
-	}
-	return nil
 }

26
internal/deepseek/protocol/sse_test.go
Normal file
@@ -0,0 +1,26 @@
+package protocol
+
+import (
+	"io"
+	"net/http"
+	"strings"
+	"testing"
+)
+
+func TestScanSSELinesHandlesLongSingleLine(t *testing.T) {
+	payload := strings.Repeat("x", 2*1024*1024+4096)
+	body := "data: {\"p\":\"response/content\",\"v\":\"" + payload + "\"}\n"
+	resp := &http.Response{Body: io.NopCloser(strings.NewReader(body))}
+
+	var got string
+	err := ScanSSELines(resp, func(line []byte) bool {
+		got = string(line)
+		return true
+	})
+	if err != nil {
+		t.Fatalf("ScanSSELines returned error: %v", err)
+	}
+	if !strings.Contains(got, payload) {
+		t.Fatalf("long SSE line was not preserved: got len=%d want payload len=%d", len(got), len(payload))
+	}
+}

@@ -3,12 +3,14 @@ package claude
|
||||
import (
|
||||
"bytes"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"strings"
|
||||
|
||||
"ds2api/internal/config"
|
||||
"ds2api/internal/httpapi/requestbody"
|
||||
streamengine "ds2api/internal/stream"
|
||||
"ds2api/internal/translatorcliproxy"
|
||||
"ds2api/internal/util"
|
||||
@@ -33,7 +35,11 @@ func (h *Handler) Messages(w http.ResponseWriter, r *http.Request) {
|
||||
func (h *Handler) proxyViaOpenAI(w http.ResponseWriter, r *http.Request, store ConfigReader) bool {
|
||||
raw, err := io.ReadAll(r.Body)
|
||||
if err != nil {
|
||||
writeClaudeError(w, http.StatusBadRequest, "invalid body")
|
||||
if errors.Is(err, requestbody.ErrInvalidUTF8Body) {
|
||||
writeClaudeError(w, http.StatusBadRequest, "invalid json")
|
||||
} else {
|
||||
writeClaudeError(w, http.StatusBadRequest, "invalid body")
|
||||
}
|
||||
return true
|
||||
}
|
||||
var req map[string]any
|
||||
|
||||
@@ -2,8 +2,8 @@ package gemini
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"ds2api/internal/toolcall"
|
||||
"encoding/json"
|
||||
"errors"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
@@ -11,7 +11,9 @@ import (
|
||||
|
||||
"github.com/go-chi/chi/v5"
|
||||
|
||||
"ds2api/internal/httpapi/requestbody"
|
||||
"ds2api/internal/sse"
|
||||
"ds2api/internal/toolcall"
|
||||
"ds2api/internal/translatorcliproxy"
|
||||
"ds2api/internal/util"
|
||||
|
||||
@@ -32,7 +34,11 @@ func (h *Handler) handleGenerateContent(w http.ResponseWriter, r *http.Request,
|
||||
func (h *Handler) proxyViaOpenAI(w http.ResponseWriter, r *http.Request, stream bool) bool {
|
||||
raw, err := io.ReadAll(r.Body)
|
||||
if err != nil {
|
||||
writeGeminiError(w, http.StatusBadRequest, "invalid body")
|
||||
if errors.Is(err, requestbody.ErrInvalidUTF8Body) {
|
||||
writeGeminiError(w, http.StatusBadRequest, "invalid json")
|
||||
} else {
|
||||
writeGeminiError(w, http.StatusBadRequest, "invalid body")
|
||||
}
|
||||
return true
|
||||
}
|
||||
routeModel := strings.TrimSpace(chi.URLParam(r, "model"))
|
||||
|
||||
@@ -311,16 +311,16 @@ func TestChatCompletionsCurrentInputFilePersistsNeutralPrompt(t *testing.T) {
|
||||
if len(ds.uploadCalls) != 1 {
|
||||
t.Fatalf("expected current input upload to happen, got %d", len(ds.uploadCalls))
|
||||
}
|
||||
if ds.uploadCalls[0].Filename != "history.txt" {
|
||||
t.Fatalf("expected history.txt upload, got %q", ds.uploadCalls[0].Filename)
|
||||
if ds.uploadCalls[0].Filename != "DS2API_HISTORY.txt" {
|
||||
t.Fatalf("expected DS2API_HISTORY.txt upload, got %q", ds.uploadCalls[0].Filename)
|
||||
}
|
||||
if full.HistoryText != string(ds.uploadCalls[0].Data) {
|
||||
t.Fatalf("expected uploaded current input file to be persisted in history text")
|
||||
}
|
||||
if len(full.Messages) != 1 {
|
||||
t.Fatalf("expected neutral prompt to be the only persisted message, got %#v", full.Messages)
|
||||
t.Fatalf("expected continuation prompt to be the only persisted message, got %#v", full.Messages)
|
||||
}
|
||||
if !strings.Contains(full.Messages[0].Content, "Answer the latest user request directly.") {
|
||||
t.Fatalf("expected neutral prompt to be persisted, got %#v", full.Messages[0])
|
||||
if !strings.Contains(full.Messages[0].Content, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
|
||||
t.Fatalf("expected continuation prompt to be persisted, got %#v", full.Messages[0])
|
||||
}
|
||||
}
|
||||
|
||||
@@ -173,6 +173,15 @@ func (s *chatStreamRuntime) sendFailedChunk(status int, message, code string) {
|
||||
s.sendDone()
|
||||
}
|
||||
|
||||
func (s *chatStreamRuntime) markContextCancelled() {
|
||||
s.finalErrorStatus = 499
|
||||
s.finalErrorMessage = "request context cancelled"
|
||||
s.finalErrorCode = string(streamengine.StopReasonContextCancelled)
|
||||
s.finalThinking = s.thinking.String()
|
||||
s.finalText = cleanVisibleOutput(s.text.String(), s.stripReferenceMarkers)
|
||||
s.finalFinishReason = string(streamengine.StopReasonContextCancelled)
|
||||
}
|
||||
|
||||
func (s *chatStreamRuntime) resetStreamToolCallState() {
|
||||
s.streamToolCallIDs = map[int]string{}
|
||||
s.streamToolNames = map[int]string{}
|
||||
|
||||
@@ -247,11 +247,15 @@ func (h *Handler) consumeChatStreamAttempt(r *http.Request, resp *http.Response,
|
||||
}
|
||||
},
|
||||
OnContextDone: func() {
|
||||
streamRuntime.markContextCancelled()
|
||||
if historySession != nil {
|
||||
historySession.stopped(streamRuntime.thinking.String(), streamRuntime.text.String(), string(streamengine.StopReasonContextCancelled))
|
||||
}
|
||||
},
|
||||
})
|
||||
if streamRuntime.finalErrorCode == string(streamengine.StopReasonContextCancelled) {
|
||||
return true, false
|
||||
}
|
||||
terminalWritten := streamRuntime.finalize(finalReason, allowDeferEmpty && finalReason != "content_filter")
|
||||
if terminalWritten {
|
||||
recordChatStreamHistory(streamRuntime, historySession)
|
||||
@@ -283,6 +287,10 @@ func logChatStreamTerminal(streamRuntime *chatStreamRuntime, attempts int) {
|
||||
if attempts > 0 {
|
||||
source = "synthetic_retry"
|
||||
}
|
||||
if streamRuntime.finalErrorCode == string(streamengine.StopReasonContextCancelled) {
|
||||
config.Logger.Info("[openai_empty_retry] terminal cancelled", "surface", "chat.completions", "stream", true, "retry_attempts", attempts, "error_code", streamRuntime.finalErrorCode)
|
||||
return
|
||||
}
|
||||
if streamRuntime.finalErrorMessage != "" {
|
||||
config.Logger.Info("[openai_empty_retry] terminal empty output", "surface", "chat.completions", "stream", true, "retry_attempts", attempts, "success_source", "none", "error_code", streamRuntime.finalErrorCode)
|
||||
return
|
||||
|
||||
85
internal/httpapi/openai/chat/empty_retry_runtime_test.go
Normal file
@@ -0,0 +1,85 @@
|
||||
package chat
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"ds2api/internal/chathistory"
|
||||
"ds2api/internal/stream"
|
||||
)
|
||||
|
||||
func TestConsumeChatStreamAttemptMarksContextCancelledState(t *testing.T) {
|
||||
historyStore := newTestChatHistoryStore(t)
|
||||
entry, err := historyStore.Start(chathistory.StartParams{
|
||||
CallerID: "caller:test",
|
||||
Model: "deepseek-v4-flash",
|
||||
Stream: true,
|
||||
UserInput: "hello",
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("start history failed: %v", err)
|
||||
}
|
||||
session := &chatHistorySession{
|
||||
store: historyStore,
|
||||
entryID: entry.ID,
|
||||
startedAt: time.Now(),
|
||||
lastPersist: time.Now(),
|
||||
finalPrompt: "prompt",
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
cancel()
|
||||
|
||||
req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil).WithContext(ctx)
|
||||
rec := httptest.NewRecorder()
|
||||
streamRuntime := newChatStreamRuntime(
|
||||
rec,
|
||||
http.NewResponseController(rec),
|
||||
true,
|
||||
"cid-cancelled",
|
||||
time.Now().Unix(),
|
||||
"deepseek-v4-flash",
|
||||
"prompt",
|
||||
false,
|
||||
false,
|
||||
true,
|
||||
nil,
|
||||
nil,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
resp := makeOpenAISSEHTTPResponse(
|
||||
`data: {"p":"response/content","v":"hello"}`,
|
||||
`data: [DONE]`,
|
||||
)
|
||||
|
||||
h := &Handler{}
|
||||
terminalWritten, retryable := h.consumeChatStreamAttempt(req, resp, streamRuntime, "text", false, session, true)
|
||||
if !terminalWritten || retryable {
|
||||
t.Fatalf("expected cancelled attempt to terminate without retry, got terminalWritten=%v retryable=%v", terminalWritten, retryable)
|
||||
}
|
||||
if got, want := streamRuntime.finalErrorCode, string(stream.StopReasonContextCancelled); got != want {
|
||||
t.Fatalf("expected cancelled final error code %q, got %q", want, got)
|
||||
}
|
||||
if streamRuntime.finalErrorMessage == "" {
|
||||
t.Fatalf("expected cancelled final error message to be preserved")
|
||||
}
|
||||
|
||||
snapshot, err := historyStore.Snapshot()
|
||||
if err != nil {
|
||||
t.Fatalf("snapshot failed: %v", err)
|
||||
}
|
||||
if len(snapshot.Items) != 1 {
|
||||
t.Fatalf("expected one history item, got %d", len(snapshot.Items))
|
||||
}
|
||||
full, err := historyStore.Get(snapshot.Items[0].ID)
|
||||
if err != nil {
|
||||
t.Fatalf("get detail failed: %v", err)
|
||||
}
|
||||
if full.Status != "stopped" {
|
||||
t.Fatalf("expected stopped status, got %#v", full)
|
||||
}
|
||||
}
|
||||
@@ -130,8 +130,8 @@ func TestHandleVercelStreamPrepareAppliesCurrentInputFile(t *testing.T) {
|
||||
t.Fatalf("expected payload object, got %#v", body["payload"])
|
||||
}
|
||||
promptText, _ := payload["prompt"].(string)
|
||||
if !strings.Contains(promptText, "Answer the latest user request directly.") {
|
||||
t.Fatalf("expected neutral prompt, got %s", promptText)
|
||||
if !strings.Contains(promptText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
|
||||
t.Fatalf("expected continuation prompt, got %s", promptText)
|
||||
}
|
||||
if strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
|
||||
t.Fatalf("expected original turns hidden from prompt, got %s", promptText)
|
||||
|
@@ -94,6 +94,9 @@ func TestPreprocessInlineFileInputsReplacesDataURLAndCollectsRefFileIDs(t *testi
if len(ds.uploadCalls) != 1 {
t.Fatalf("expected 1 upload, got %d", len(ds.uploadCalls))
}
if ds.uploadCalls[0].ModelType != "default" {
t.Fatalf("expected default model type when request omits model, got %q", ds.uploadCalls[0].ModelType)
}
if ds.lastCtx != ctx {
t.Fatalf("expected upload to use request context")
}

@@ -149,7 +152,7 @@ func TestPreprocessInlineFileInputsDeduplicatesIdenticalPayloads(t *testing.T) {
func TestChatCompletionsUploadsInlineFilesBeforeCompletion(t *testing.T) {
ds := &inlineUploadDSStub{}
h := &openAITestSurface{Store: mockOpenAIConfig{wideInput: true}, Auth: streamStatusAuthStub{}, DS: ds}
reqBody := `{"model":"deepseek-v4-flash","messages":[{"role":"user","content":[{"type":"input_text","text":"hi"},{"type":"image_url","image_url":{"url":"data:image/png;base64,QUJDRA=="}}]}],"stream":false}`
reqBody := `{"model":"deepseek-v4-vision","messages":[{"role":"user","content":[{"type":"input_text","text":"hi"},{"type":"image_url","image_url":{"url":"data:image/png;base64,QUJDRA=="}}]}],"stream":false}`
req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", strings.NewReader(reqBody))
req.Header.Set("Authorization", "Bearer direct-token")
req.Header.Set("Content-Type", "application/json")

@@ -163,6 +166,9 @@ func TestChatCompletionsUploadsInlineFilesBeforeCompletion(t *testing.T) {
if len(ds.uploadCalls) != 1 {
t.Fatalf("expected 1 upload call, got %d", len(ds.uploadCalls))
}
if ds.uploadCalls[0].ModelType != "vision" {
t.Fatalf("expected vision model type for vision request, got %q", ds.uploadCalls[0].ModelType)
}
if ds.completionReq == nil {
t.Fatal("expected completion payload to be captured")
}

@@ -177,7 +183,7 @@ func TestResponsesUploadsInlineFilesBeforeCompletion(t *testing.T) {
h := &openAITestSurface{Store: mockOpenAIConfig{wideInput: true}, Auth: streamStatusAuthStub{}, DS: ds}
r := chi.NewRouter()
registerOpenAITestRoutes(r, h)
reqBody := `{"model":"deepseek-v4-flash","input":[{"role":"user","content":[{"type":"input_text","text":"hi"},{"type":"input_image","image_url":{"url":"data:image/png;base64,QUJDRA=="}}]}],"stream":false}`
reqBody := `{"model":"deepseek-v4-pro","input":[{"role":"user","content":[{"type":"input_text","text":"hi"},{"type":"input_image","image_url":{"url":"data:image/png;base64,QUJDRA=="}}]}],"stream":false}`
req := httptest.NewRequest(http.MethodPost, "/v1/responses", strings.NewReader(reqBody))
req.Header.Set("Authorization", "Bearer direct-token")
req.Header.Set("Content-Type", "application/json")

@@ -191,6 +197,9 @@ func TestResponsesUploadsInlineFilesBeforeCompletion(t *testing.T) {
if len(ds.uploadCalls) != 1 {
t.Fatalf("expected 1 upload call, got %d", len(ds.uploadCalls))
}
if ds.uploadCalls[0].ModelType != "expert" {
t.Fatalf("expected expert model type for pro request, got %q", ds.uploadCalls[0].ModelType)
}
refIDs, _ := ds.completionReq["ref_file_ids"].([]any)
if len(refIDs) != 1 || refIDs[0] != "file-inline-1" {
t.Fatalf("unexpected completion ref_file_ids: %#v", ds.completionReq["ref_file_ids"])
@@ -12,6 +12,7 @@ import (
"strings"

"ds2api/internal/auth"
"ds2api/internal/config"
dsclient "ds2api/internal/deepseek/client"
"ds2api/internal/httpapi/openai/shared"
"ds2api/internal/promptcompat"

@@ -42,6 +43,7 @@ type inlineUploadState struct {
ctx context.Context
handler *Handler
auth *auth.RequestAuth
modelType string
uploadedByID map[string]string
uploadCount int
inlineFileBytes int

@@ -58,10 +60,19 @@ func (h *Handler) PreprocessInlineFileInputs(ctx context.Context, a *auth.Reques
if h == nil || h.DS == nil || len(req) == 0 {
return nil
}
modelType := "default"
if requestedModel, ok := req["model"].(string); ok {
if resolvedModel, ok := config.ResolveModel(h.Store, requestedModel); ok {
if resolvedType, ok := config.GetModelType(resolvedModel); ok {
modelType = resolvedType
}
}
}
state := &inlineUploadState{
ctx: ctx,
handler: h,
auth: a,
modelType: modelType,
uploadedByID: map[string]string{},
}
for _, key := range []string{"messages", "input", "attachments"} {

@@ -174,6 +185,7 @@ func (s *inlineUploadState) uploadInlineFile(file inlineDecodedFile) (string, er
result, err := s.handler.DS.UploadFile(s.ctx, s.auth, dsclient.UploadFileRequest{
Filename: file.Filename,
ContentType: contentType,
ModelType: s.modelType,
Data: file.Data,
}, 3)
if err != nil {
@@ -8,6 +8,7 @@ import (

"ds2api/internal/auth"
"ds2api/internal/chathistory"
"ds2api/internal/config"
dsclient "ds2api/internal/deepseek/client"
"ds2api/internal/httpapi/openai/shared"
)

@@ -66,10 +67,12 @@ func (h *Handler) UploadFile(w http.ResponseWriter, r *http.Request) {
if contentType == "" && len(data) > 0 {
contentType = http.DetectContentType(data)
}
modelType := resolveUploadModelType(h.Store, r)
result, err := h.DS.UploadFile(r.Context(), a, dsclient.UploadFileRequest{
Filename: header.Filename,
ContentType: contentType,
Purpose: strings.TrimSpace(r.FormValue("purpose")),
ModelType: modelType,
Data: data,
}, 3)
if err != nil {

@@ -82,6 +85,32 @@ func (h *Handler) UploadFile(w http.ResponseWriter, r *http.Request) {
shared.WriteJSON(w, http.StatusOK, buildOpenAIFileObject(result))
}

func resolveUploadModelType(store shared.ConfigReader, r *http.Request) string {
for _, candidate := range []string{r.FormValue("model_type"), r.Header.Get("X-Model-Type")} {
if modelType := normalizeUploadModelType(candidate); modelType != "" {
return modelType
}
}
requestedModel := strings.TrimSpace(r.FormValue("model"))
if requestedModel != "" {
if resolvedModel, ok := config.ResolveModel(store, requestedModel); ok {
if modelType, ok := config.GetModelType(resolvedModel); ok {
return modelType
}
}
}
return "default"
}

func normalizeUploadModelType(raw string) string {
switch strings.ToLower(strings.TrimSpace(raw)) {
case "default", "expert", "vision":
return strings.ToLower(strings.TrimSpace(raw))
default:
return ""
}
}

func buildOpenAIFileObject(result *dsclient.UploadFileResult) map[string]any {
if result == nil {
obj := map[string]any{
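The upload handler above resolves a DeepSeek upload model type with a clear precedence: an explicit `model_type` form field or `X-Model-Type` header wins, then the requested model name is mapped through the config store, and everything else falls back to `"default"`. A minimal self-contained sketch of that precedence, with a plain map standing in for the repo's `config.ResolveModel`/`config.GetModelType` lookup (the map and its entries are assumptions for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeUploadModelType mirrors the whitelist in the diff: only the three
// known upload model types pass through; anything else is rejected as "".
func normalizeUploadModelType(raw string) string {
	switch strings.ToLower(strings.TrimSpace(raw)) {
	case "default", "expert", "vision":
		return strings.ToLower(strings.TrimSpace(raw))
	default:
		return ""
	}
}

// resolveUploadModelType sketches the precedence order: explicit model_type,
// then header, then the model-name lookup, then "default". modelTable is a
// hypothetical stand-in for the config store resolution in the real handler.
func resolveUploadModelType(formModelType, headerModelType, model string, modelTable map[string]string) string {
	for _, candidate := range []string{formModelType, headerModelType} {
		if mt := normalizeUploadModelType(candidate); mt != "" {
			return mt
		}
	}
	if mt, ok := modelTable[strings.TrimSpace(model)]; ok {
		return mt
	}
	return "default"
}

func main() {
	table := map[string]string{"deepseek-v4-vision": "vision", "deepseek-v4-pro": "expert"}
	fmt.Println(resolveUploadModelType("", "", "deepseek-v4-vision", table)) // model name mapped
	fmt.Println(resolveUploadModelType("Expert", "", "deepseek-v4-vision", table)) // explicit field wins
	fmt.Println(resolveUploadModelType("", "", "unknown-model", table)) // fallback
}
```

The explicit-field-first ordering matters: it lets callers override the model-derived type without changing the request's `model`, while unknown values are silently ignored rather than passed upstream.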
@@ -77,7 +77,7 @@ func (m *filesRouteDSStub) DeleteAllSessionsForToken(_ context.Context, _ string
|
||||
return nil
|
||||
}
|
||||
|
||||
func newMultipartUploadRequest(t *testing.T, purpose string, filename string, data []byte) *http.Request {
|
||||
func newMultipartUploadRequest(t *testing.T, purpose string, filename string, data []byte, model string) *http.Request {
|
||||
t.Helper()
|
||||
var body bytes.Buffer
|
||||
writer := multipart.NewWriter(&body)
|
||||
@@ -86,6 +86,11 @@ func newMultipartUploadRequest(t *testing.T, purpose string, filename string, da
|
||||
t.Fatalf("write purpose failed: %v", err)
|
||||
}
|
||||
}
|
||||
if model != "" {
|
||||
if err := writer.WriteField("model", model); err != nil {
|
||||
t.Fatalf("write model failed: %v", err)
|
||||
}
|
||||
}
|
||||
part, err := writer.CreateFormFile("file", filename)
|
||||
if err != nil {
|
||||
t.Fatalf("create form file failed: %v", err)
|
||||
@@ -108,7 +113,7 @@ func TestFilesRouteUploadSuccess(t *testing.T) {
|
||||
r := chi.NewRouter()
|
||||
registerOpenAITestRoutes(r, h)
|
||||
|
||||
req := newMultipartUploadRequest(t, "assistants", "notes.txt", []byte("hello world"))
|
||||
req := newMultipartUploadRequest(t, "assistants", "notes.txt", []byte("hello world"), "deepseek-v4-vision")
|
||||
rec := httptest.NewRecorder()
|
||||
r.ServeHTTP(rec, req)
|
||||
|
||||
@@ -121,6 +126,9 @@ func TestFilesRouteUploadSuccess(t *testing.T) {
|
||||
if ds.lastReq.Purpose != "assistants" {
|
||||
t.Fatalf("expected purpose assistants, got %q", ds.lastReq.Purpose)
|
||||
}
|
||||
if ds.lastReq.ModelType != "vision" {
|
||||
t.Fatalf("expected vision model type, got %q", ds.lastReq.ModelType)
|
||||
}
|
||||
if string(ds.lastReq.Data) != "hello world" {
|
||||
t.Fatalf("unexpected uploaded data: %q", string(ds.lastReq.Data))
|
||||
}
|
||||
@@ -145,7 +153,7 @@ func TestFilesRouteUploadIncludesAccountIDForManagedAccount(t *testing.T) {
|
||||
r := chi.NewRouter()
|
||||
registerOpenAITestRoutes(r, h)
|
||||
|
||||
req := newMultipartUploadRequest(t, "assistants", "notes.txt", []byte("hello world"))
|
||||
req := newMultipartUploadRequest(t, "assistants", "notes.txt", []byte("hello world"), "deepseek-v4-vision")
|
||||
rec := httptest.NewRecorder()
|
||||
r.ServeHTTP(rec, req)
|
||||
|
||||
|
||||
@@ -7,6 +7,7 @@ import (
"strings"

"ds2api/internal/auth"
"ds2api/internal/config"
dsclient "ds2api/internal/deepseek/client"
"ds2api/internal/httpapi/openai/shared"
"ds2api/internal/promptcompat"

@@ -35,10 +36,15 @@ func (s Service) ApplyCurrentInputFile(ctx context.Context, a *auth.RequestAuth,
if strings.TrimSpace(fileText) == "" {
return stdReq, errors.New("current user input file produced empty transcript")
}
modelType := "default"
if resolvedType, ok := config.GetModelType(stdReq.ResolvedModel); ok {
modelType = resolvedType
}
result, err := s.DS.UploadFile(ctx, a, dsclient.UploadFileRequest{
Filename: currentInputFilename,
ContentType: currentInputContentType,
Purpose: currentInputPurpose,
ModelType: modelType,
Data: []byte(fileText),
}, 3)
if err != nil {

@@ -62,7 +68,7 @@ func (s Service) ApplyCurrentInputFile(ctx context.Context, a *auth.RequestAuth,
stdReq.RefFileIDs = prependUniqueRefFileID(stdReq.RefFileIDs, fileID)
stdReq.FinalPrompt, stdReq.ToolNames = promptcompat.BuildOpenAIPrompt(messages, stdReq.ToolsRaw, "", stdReq.ToolChoice, stdReq.Thinking)
// Token accounting must reflect the actual downstream context:
// the uploaded history.txt file content + the neutral live prompt.
// the uploaded DS2API_HISTORY.txt file content + the continuation live prompt.
stdReq.PromptTokenText = fileText + "\n" + stdReq.FinalPrompt
return stdReq, nil
}

@@ -87,5 +93,5 @@ func latestUserInputForFile(messages []any) (int, string) {
}

func currentInputFilePrompt() string {
return "The current request and prior conversation context have already been provided. Answer the latest user request directly."
return "Continue from the latest state in the attached DS2API_HISTORY.txt context. Treat it as the current working state and answer the latest user request directly."
}
@@ -61,26 +61,33 @@ func (streamStatusManagedAuthStub) DetermineCaller(_ *http.Request) (*auth.Reque

func (streamStatusManagedAuthStub) Release(_ *auth.RequestAuth) {}

func TestBuildOpenAICurrentInputContextTranscriptUsesInjectedFileWrapper(t *testing.T) {
func TestBuildOpenAICurrentInputContextTranscriptUsesNumberedHistorySections(t *testing.T) {
_, historyMessages := splitOpenAIHistoryMessages(historySplitTestMessages(), 1)
transcript := buildOpenAICurrentInputContextTranscript(historyMessages)

if strings.Contains(transcript, "[file content end]") || strings.Contains(transcript, "[file content begin]") || strings.Contains(transcript, "[file name]:") {
t.Fatalf("expected plain transcript without file wrapper tags, got %q", transcript)
t.Fatalf("expected transcript without file wrapper tags, got %q", transcript)
}
if !strings.Contains(transcript, "<|begin▁of▁sentence|>") {
t.Fatalf("expected serialized conversation markers, got %q", transcript)
if !strings.Contains(transcript, "# DS2API_HISTORY.txt") {
t.Fatalf("expected history transcript header, got %q", transcript)
}
if !strings.Contains(transcript, "first user turn") || !strings.Contains(transcript, "tool result") {
t.Fatalf("expected historical turns preserved, got %q", transcript)
if !strings.Contains(transcript, "Prior conversation history and tool progress.") {
t.Fatalf("expected history transcript description, got %q", transcript)
}
if !strings.Contains(transcript, "[reasoning_content]") || !strings.Contains(transcript, "hidden reasoning") {
t.Fatalf("expected reasoning block preserved, got %q", transcript)
for _, want := range []string{
"=== 1. USER ===",
"=== 2. ASSISTANT ===",
"=== 3. TOOL ===",
"first user turn",
"tool result",
"[reasoning_content]",
"hidden reasoning",
"<|DSML|tool_calls>",
} {
if !strings.Contains(transcript, want) {
t.Fatalf("expected transcript to contain %q, got %q", want, transcript)
}
}
if !strings.Contains(transcript, "<|DSML|tool_calls>") {
t.Fatalf("expected tool calls preserved, got %q", transcript)
}

}

func TestSplitOpenAIHistoryMessagesUsesLatestUserTurn(t *testing.T) {

@@ -220,7 +227,7 @@ func TestApplyCurrentInputFileDisabledPassThrough(t *testing.T) {
DS: ds,
}
req := map[string]any{
"model": "deepseek-v4-flash",
"model": "deepseek-v4-vision",
"messages": historySplitTestMessages(),
}
stdReq, err := promptcompat.NormalizeOpenAIChatRequest(h.Store, req, "")

@@ -243,7 +250,7 @@ func TestApplyCurrentInputFileDisabledPassThrough(t *testing.T) {
}
}

func TestApplyCurrentInputFileUploadsFirstTurnWithInjectedWrapper(t *testing.T) {
func TestApplyCurrentInputFileUploadsFirstTurnWithNumberedHistoryTranscript(t *testing.T) {
ds := &inlineUploadDSStub{}
h := &openAITestSurface{
Store: mockOpenAIConfig{

@@ -273,15 +280,21 @@ func TestApplyCurrentInputFileUploadsFirstTurnWithInjectedWrapper(t *testing.T)
t.Fatalf("expected 1 current input upload, got %d", len(ds.uploadCalls))
}
upload := ds.uploadCalls[0]
if upload.Filename != "history.txt" {
if upload.Filename != "DS2API_HISTORY.txt" {
t.Fatalf("unexpected upload filename: %q", upload.Filename)
}
uploadedText := string(upload.Data)
if strings.Contains(uploadedText, "[file content end]") || strings.Contains(uploadedText, "[file content begin]") || strings.Contains(uploadedText, "[file name]:") {
t.Fatalf("expected uploaded transcript without file wrapper tags, got %q", uploadedText)
}
if !strings.Contains(uploadedText, "<|begin▁of▁sentence|><|User|>first turn content that is long enough") {
t.Fatalf("expected serialized current user turn markers, got %q", uploadedText)
for _, want := range []string{
"# DS2API_HISTORY.txt",
"=== 1. USER ===",
"first turn content that is long enough",
} {
if !strings.Contains(uploadedText, want) {
t.Fatalf("expected uploaded transcript to contain %q, got %q", want, uploadedText)
}
}
if !strings.Contains(uploadedText, promptcompat.ThinkingInjectionMarker) {
t.Fatalf("expected thinking injection in current input file, got %q", uploadedText)

@@ -290,11 +303,11 @@ func TestApplyCurrentInputFileUploadsFirstTurnWithInjectedWrapper(t *testing.T)
if strings.Contains(out.FinalPrompt, "first turn content that is long enough") {
t.Fatalf("expected current input text to be replaced in live prompt, got %s", out.FinalPrompt)
}
if strings.Contains(out.FinalPrompt, "CURRENT_USER_INPUT.txt") || strings.Contains(out.FinalPrompt, "history.txt") || strings.Contains(out.FinalPrompt, "Read that file") {
if strings.Contains(out.FinalPrompt, "CURRENT_USER_INPUT.txt") || strings.Contains(out.FinalPrompt, "Read that file") {
t.Fatalf("expected live prompt not to instruct file reads, got %s", out.FinalPrompt)
}
if !strings.Contains(out.FinalPrompt, "Answer the latest user request directly.") {
t.Fatalf("expected neutral continuation instruction in live prompt, got %s", out.FinalPrompt)
if !strings.Contains(out.FinalPrompt, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
t.Fatalf("expected continuation-oriented prompt in live prompt, got %s", out.FinalPrompt)
}
if len(out.RefFileIDs) != 1 || out.RefFileIDs[0] != "file-inline-1" {
t.Fatalf("expected current input file id in ref_file_ids, got %#v", out.RefFileIDs)

@@ -302,6 +315,9 @@ func TestApplyCurrentInputFileUploadsFirstTurnWithInjectedWrapper(t *testing.T)
if !strings.Contains(out.PromptTokenText, "first turn content that is long enough") {
t.Fatalf("expected prompt token text to preserve original full context, got %q", out.PromptTokenText)
}
if !strings.Contains(out.PromptTokenText, "# DS2API_HISTORY.txt") || !strings.Contains(out.PromptTokenText, "=== 1. USER ===") {
t.Fatalf("expected prompt token text to include numbered history transcript, got %q", out.PromptTokenText)
}
}

func TestApplyCurrentInputFilePreservesFullContextPromptForTokenCounting(t *testing.T) {

@@ -316,7 +332,7 @@ func TestApplyCurrentInputFilePreservesFullContextPromptForTokenCounting(t *test
DS: ds,
}
req := map[string]any{
"model": "deepseek-v4-flash",
"model": "deepseek-v4-vision",
"messages": historySplitTestMessages(),
}
stdReq, err := promptcompat.NormalizeOpenAIChatRequest(h.Store, req, "")

@@ -337,10 +353,13 @@ func TestApplyCurrentInputFilePreservesFullContextPromptForTokenCounting(t *test
t.Fatalf("expected prompt token text to contain file context with full conversation, got %q", out.PromptTokenText)
}
if strings.Contains(out.PromptTokenText, "[file content end]") || strings.Contains(out.PromptTokenText, "[file name]:") {
t.Fatalf("expected prompt token text to use raw transcript without wrapper tags, got %q", out.PromptTokenText)
t.Fatalf("expected prompt token text to omit file wrapper tags, got %q", out.PromptTokenText)
}
if !strings.Contains(out.PromptTokenText, "Answer the latest user request directly.") {
t.Fatalf("expected prompt token text to also include neutral live prompt, got %q", out.PromptTokenText)
if !strings.Contains(out.PromptTokenText, "# DS2API_HISTORY.txt") || !strings.Contains(out.PromptTokenText, "=== 1. SYSTEM ===") {
t.Fatalf("expected prompt token text to include numbered history transcript, got %q", out.PromptTokenText)
}
if !strings.Contains(out.PromptTokenText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
t.Fatalf("expected prompt token text to also include continuation prompt, got %q", out.PromptTokenText)
}
if strings.Contains(out.FinalPrompt, "first user turn") || strings.Contains(out.FinalPrompt, "latest user turn") {
t.Fatalf("expected live prompt to hide original turns, got %q", out.FinalPrompt)

@@ -359,7 +378,7 @@ func TestApplyCurrentInputFileUploadsFullContextFile(t *testing.T) {
DS: ds,
}
req := map[string]any{
"model": "deepseek-v4-flash",
"model": "deepseek-v4-vision",
"messages": historySplitTestMessages(),
}
stdReq, err := promptcompat.NormalizeOpenAIChatRequest(h.Store, req, "")

@@ -378,20 +397,23 @@ func TestApplyCurrentInputFileUploadsFullContextFile(t *testing.T) {
t.Fatalf("expected one current input upload, got %d", len(ds.uploadCalls))
}
upload := ds.uploadCalls[0]
if upload.Filename != "history.txt" {
t.Fatalf("expected history.txt upload, got %q", upload.Filename)
if upload.Filename != "DS2API_HISTORY.txt" {
t.Fatalf("expected DS2API_HISTORY.txt upload, got %q", upload.Filename)
}
if upload.ModelType != "vision" {
t.Fatalf("expected vision model type for vision request, got %q", upload.ModelType)
}
uploadedText := string(upload.Data)
for _, want := range []string{"system instructions", "first user turn", "hidden reasoning", "tool result", "latest user turn", promptcompat.ThinkingInjectionMarker} {
for _, want := range []string{"# DS2API_HISTORY.txt", "=== 1. SYSTEM ===", "=== 2. USER ===", "=== 3. ASSISTANT ===", "=== 4. TOOL ===", "=== 5. USER ===", "system instructions", "first user turn", "hidden reasoning", "tool result", "latest user turn", promptcompat.ThinkingInjectionMarker} {
if !strings.Contains(uploadedText, want) {
t.Fatalf("expected full context file to contain %q, got %q", want, uploadedText)
}
}
if strings.Contains(out.FinalPrompt, "first user turn") || strings.Contains(out.FinalPrompt, "latest user turn") || strings.Contains(out.FinalPrompt, "CURRENT_USER_INPUT.txt") || strings.Contains(out.FinalPrompt, "history.txt") || strings.Contains(out.FinalPrompt, "Read that file") {
t.Fatalf("expected live prompt to use only a neutral continuation instruction, got %s", out.FinalPrompt)
if strings.Contains(out.FinalPrompt, "first user turn") || strings.Contains(out.FinalPrompt, "latest user turn") || strings.Contains(out.FinalPrompt, "CURRENT_USER_INPUT.txt") || strings.Contains(out.FinalPrompt, "Read that file") {
t.Fatalf("expected live prompt to use only a continuation instruction, got %s", out.FinalPrompt)
}
if !strings.Contains(out.FinalPrompt, "Answer the latest user request directly.") {
t.Fatalf("expected neutral continuation instruction in live prompt, got %s", out.FinalPrompt)
if !strings.Contains(out.FinalPrompt, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
t.Fatalf("expected continuation-oriented prompt in live prompt, got %s", out.FinalPrompt)
}
}

@@ -423,6 +445,9 @@ func TestApplyCurrentInputFileCarriesHistoryText(t *testing.T) {
if out.HistoryText != string(ds.uploadCalls[0].Data) {
t.Fatalf("expected current input file flow to preserve uploaded text in history, got %q", out.HistoryText)
}
if !strings.Contains(out.HistoryText, "# DS2API_HISTORY.txt") || !strings.Contains(out.HistoryText, "=== 1. SYSTEM ===") {
t.Fatalf("expected history text to use numbered transcript format, got %q", out.HistoryText)
}
}

func TestChatCompletionsCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *testing.T) {

@@ -454,7 +479,7 @@ func TestChatCompletionsCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *t
t.Fatalf("expected 1 upload call, got %d", len(ds.uploadCalls))
}
upload := ds.uploadCalls[0]
if upload.Filename != "history.txt" {
if upload.Filename != "DS2API_HISTORY.txt" {
t.Fatalf("unexpected upload filename: %q", upload.Filename)
}
if upload.Purpose != "assistants" {

@@ -462,7 +487,10 @@ func TestChatCompletionsCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *t
}
historyText := string(upload.Data)
if strings.Contains(historyText, "[file content end]") || strings.Contains(historyText, "[file content begin]") || strings.Contains(historyText, "[file name]:") {
t.Fatalf("expected plain history transcript without wrapper tags, got %s", historyText)
t.Fatalf("expected history transcript without file wrapper tags, got %s", historyText)
}
if !strings.Contains(historyText, "# DS2API_HISTORY.txt") || !strings.Contains(historyText, "=== 1. SYSTEM ===") {
t.Fatalf("expected history transcript to use numbered sections, got %s", historyText)
}
if !strings.Contains(historyText, "latest user turn") {
t.Fatalf("expected full context to include latest turn, got %s", historyText)

@@ -471,8 +499,8 @@ func TestChatCompletionsCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *t
t.Fatal("expected completion payload to be captured")
}
promptText, _ := ds.completionReq["prompt"].(string)
if !strings.Contains(promptText, "Answer the latest user request directly.") {
t.Fatalf("expected neutral completion prompt, got %s", promptText)
if !strings.Contains(promptText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
t.Fatalf("expected continuation-oriented prompt, got %s", promptText)
}
if strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
t.Fatalf("expected prompt to hide original turns, got %s", promptText)

@@ -523,12 +551,16 @@ func TestResponsesCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *testing
if len(ds.uploadCalls) != 1 {
t.Fatalf("expected 1 upload call, got %d", len(ds.uploadCalls))
}
historyText := string(ds.uploadCalls[0].Data)
if !strings.Contains(historyText, "# DS2API_HISTORY.txt") || !strings.Contains(historyText, "=== 1. SYSTEM ===") {
t.Fatalf("expected uploaded history text to use numbered transcript format, got %s", historyText)
}
if ds.completionReq == nil {
t.Fatal("expected completion payload to be captured")
}
promptText, _ := ds.completionReq["prompt"].(string)
if !strings.Contains(promptText, "Answer the latest user request directly.") {
t.Fatalf("expected neutral completion prompt, got %s", promptText)
if !strings.Contains(promptText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
t.Fatalf("expected continuation-oriented prompt, got %s", promptText)
}
if strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
t.Fatalf("expected prompt to hide original turns, got %s", promptText)

@@ -669,11 +701,15 @@ func TestCurrentInputFileWorksAcrossAutoDeleteModes(t *testing.T) {
if len(ds.uploadCalls) != 1 {
t.Fatalf("expected current input upload for mode=%s, got %d", mode, len(ds.uploadCalls))
}
historyText := string(ds.uploadCalls[0].Data)
if !strings.Contains(historyText, "# DS2API_HISTORY.txt") || !strings.Contains(historyText, "=== 1. SYSTEM ===") {
t.Fatalf("expected uploaded history text to use numbered transcript format, got %s", historyText)
}
if ds.completionReq == nil {
t.Fatalf("expected completion payload for mode=%s", mode)
}
promptText, _ := ds.completionReq["prompt"].(string)
if !strings.Contains(promptText, "Answer the latest user request directly.") || strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
if !strings.Contains(promptText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") || strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
t.Fatalf("unexpected prompt for mode=%s: %s", mode, promptText)
}
})
@@ -222,7 +222,13 @@ func (h *Handler) consumeResponsesStreamAttempt(r *http.Request, resp *http.Resp
|
||||
finalReason = "content_filter"
|
||||
}
|
||||
},
|
||||
OnContextDone: func() {
|
||||
streamRuntime.markContextCancelled()
|
||||
},
|
||||
})
|
||||
if streamRuntime.finalErrorCode == string(streamengine.StopReasonContextCancelled) {
|
||||
return true, false
|
||||
}
|
||||
terminalWritten := streamRuntime.finalize(finalReason, allowDeferEmpty && finalReason != "content_filter")
|
||||
if terminalWritten {
|
||||
return true, false
|
||||
@@ -235,6 +241,10 @@ func logResponsesStreamTerminal(streamRuntime *responsesStreamRuntime, attempts
|
||||
if attempts > 0 {
|
||||
source = "synthetic_retry"
|
||||
}
|
||||
if streamRuntime.finalErrorCode == string(streamengine.StopReasonContextCancelled) {
|
||||
config.Logger.Info("[openai_empty_retry] terminal cancelled", "surface", "responses", "stream", true, "retry_attempts", attempts, "error_code", streamRuntime.finalErrorCode)
|
||||
return
|
||||
}
|
||||
if streamRuntime.failed {
|
||||
config.Logger.Info("[openai_empty_retry] terminal empty output", "surface", "responses", "stream", true, "retry_attempts", attempts, "success_source", "none", "error_code", streamRuntime.finalErrorCode)
|
||||
return
|
||||
|
||||
@@ -0,0 +1,70 @@
|
||||
package responses
|
||||
|
||||
import (
|
||||
"context"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"strings"
|
||||
"testing"
|
||||
|
||||
"ds2api/internal/promptcompat"
|
||||
"ds2api/internal/stream"
|
||||
)
|
||||
|
||||
func makeResponsesOpenAISSEHTTPResponse(lines ...string) *http.Response {
|
||||
body := strings.Join(lines, "\n")
|
||||
if !strings.HasSuffix(body, "\n") {
|
||||
body += "\n"
|
||||
}
|
||||
return &http.Response{
|
||||
StatusCode: http.StatusOK,
|
||||
Header: make(http.Header),
|
||||
Body: io.NopCloser(strings.NewReader(body)),
|
||||
}
|
||||
}
|
||||
|
||||
func TestConsumeResponsesStreamAttemptMarksContextCancelledState(t *testing.T) {
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
cancel()
|
||||
|
||||
req := httptest.NewRequest(http.MethodPost, "/v1/responses", nil).WithContext(ctx)
|
||||
rec := httptest.NewRecorder()
|
||||
streamRuntime := newResponsesStreamRuntime(
|
||||
rec,
|
||||
http.NewResponseController(rec),
|
||||
true,
|
||||
"resp-cancelled",
|
||||
"deepseek-v4-flash",
|
||||
"prompt",
|
||||
false,
|
||||
false,
|
||||
true,
|
||||
nil,
|
||||
nil,
|
||||
false,
|
||||
false,
|
||||
promptcompat.DefaultToolChoicePolicy(),
|
||||
"",
|
||||
nil,
|
||||
)
|
||||
resp := makeResponsesOpenAISSEHTTPResponse(
|
||||
`data: {"p":"response/content","v":"hello"}`,
|
||||
`data: [DONE]`,
|
||||
)
|
||||
|
||||
h := &Handler{}
|
||||
terminalWritten, retryable := h.consumeResponsesStreamAttempt(req, resp, streamRuntime, "text", false, true)
|
||||
if !terminalWritten || retryable {
|
||||
t.Fatalf("expected cancelled attempt to terminate without retry, got terminalWritten=%v retryable=%v", terminalWritten, retryable)
|
||||
}
|
||||
if !streamRuntime.failed {
|
||||
t.Fatalf("expected cancelled response stream to be marked failed")
|
||||
}
|
||||
if got, want := streamRuntime.finalErrorCode, string(stream.StopReasonContextCancelled); got != want {
|
||||
t.Fatalf("expected cancelled final error code %q, got %q", want, got)
|
||||
}
|
||||
if streamRuntime.finalErrorMessage == "" {
|
||||
t.Fatalf("expected cancelled final error message to be preserved")
|
||||
}
|
||||
}
|
||||
@@ -139,6 +139,13 @@ func (s *responsesStreamRuntime) failResponse(status int, message, code string)
|
||||
s.sendDone()
|
||||
}
|
||||
|
||||
func (s *responsesStreamRuntime) markContextCancelled() {
|
||||
s.failed = true
|
||||
s.finalErrorStatus = 499
|
||||
s.finalErrorMessage = "request context cancelled"
|
||||
s.finalErrorCode = string(streamengine.StopReasonContextCancelled)
|
||||
}
|
||||
|
||||
func (s *responsesStreamRuntime) finalize(finishReason string, deferEmptyOutput bool) bool {
|
||||
s.failed = false
|
||||
s.finalErrorStatus = 0
|
||||
|
||||
134 internal/httpapi/requestbody/json_utf8.go Normal file
@@ -0,0 +1,134 @@
package requestbody

import (
	"bytes"
	"errors"
	"io"
	"mime"
	"net/http"
	"strings"
	"unicode/utf8"
)

var (
	ErrInvalidUTF8Body     = errors.New("invalid utf-8 request body")
	errRequestBodyTooLarge = errors.New("request body too large")
)

const maxJSONUTF8ValidationSize = 100 << 20

// ValidateJSONUTF8 validates complete JSON request bodies before downstream
// decoders can silently replace malformed UTF-8 or stop before trailing bytes.
func ValidateJSONUTF8(next http.Handler) http.Handler {
	if next == nil {
		return http.HandlerFunc(func(http.ResponseWriter, *http.Request) {})
	}
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if shouldValidateJSONBody(r) {
			r.Body = validateAndReplayBody(r.Body)
		}
		next.ServeHTTP(w, r)
	})
}

func shouldValidateJSONBody(r *http.Request) bool {
	if r == nil || r.Body == nil {
		return false
	}
	path := ""
	if r.URL != nil {
		path = r.URL.Path
	}
	return isJSONContentType(r.Header.Get("Content-Type")) || isKnownJSONRequestPath(r.Method, path)
}

func isJSONContentType(raw string) bool {
	raw = strings.TrimSpace(raw)
	if raw == "" {
		return false
	}
	mediaType, _, err := mime.ParseMediaType(raw)
	if err != nil {
		mediaType = raw
	}
	mediaType = strings.ToLower(strings.TrimSpace(mediaType))
	return strings.Contains(mediaType, "json")
}

func isKnownJSONRequestPath(method, path string) bool {
	switch strings.ToUpper(strings.TrimSpace(method)) {
	case http.MethodPost, http.MethodPut, http.MethodPatch, http.MethodDelete:
	default:
		return false
	}
	path = strings.TrimSpace(path)
	if path == "" {
		return false
	}
	switch {
	case path == "/v1/chat/completions" || path == "/chat/completions":
		return true
	case path == "/v1/responses" || path == "/responses":
		return true
	case path == "/v1/embeddings" || path == "/embeddings":
		return true
	case path == "/anthropic/v1/messages" || path == "/v1/messages" || path == "/messages":
		return true
	case path == "/anthropic/v1/messages/count_tokens" || path == "/v1/messages/count_tokens" || path == "/messages/count_tokens":
		return true
	case strings.HasPrefix(path, "/v1beta/models/") || strings.HasPrefix(path, "/v1/models/"):
		return strings.Contains(path, ":generateContent") || strings.Contains(path, ":streamGenerateContent")
	case strings.HasPrefix(path, "/admin/"):
		return true
	default:
		return false
	}
}

func validateAndReplayBody(body io.ReadCloser) io.ReadCloser {
	if body == nil {
		return body
	}
	raw, err := io.ReadAll(io.LimitReader(body, maxJSONUTF8ValidationSize+1))
	if err != nil {
		return &errorReadCloser{err: err, closer: body}
	}
	if len(raw) > maxJSONUTF8ValidationSize {
		return &errorReadCloser{err: errRequestBodyTooLarge, closer: body}
	}
	if !utf8.Valid(raw) {
		return &errorReadCloser{err: ErrInvalidUTF8Body, closer: body}
	}
	return &replayReadCloser{Reader: bytes.NewReader(raw), closer: body}
}

type replayReadCloser struct {
	*bytes.Reader
	closer io.Closer
}

func (r *replayReadCloser) Close() error {
	if r == nil || r.closer == nil {
		return nil
	}
	return r.closer.Close()
}

type errorReadCloser struct {
	err    error
	closer io.Closer
}

func (r *errorReadCloser) Read([]byte) (int, error) {
	if r == nil || r.err == nil {
		return 0, io.EOF
	}
	return 0, r.err
}

func (r *errorReadCloser) Close() error {
	if r == nil || r.closer == nil {
		return nil
	}
	return r.closer.Close()
}
internal/httpapi/requestbody/json_utf8_test.go (new file, 158 lines)
@@ -0,0 +1,158 @@
package requestbody

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"
)

type singleByteReadCloser struct {
	data []byte
	pos  int
}

func (r *singleByteReadCloser) Read(p []byte) (int, error) {
	if r.pos >= len(r.data) {
		return 0, io.EOF
	}
	p[0] = r.data[r.pos]
	r.pos++
	return 1, nil
}

func (r *singleByteReadCloser) Close() error {
	return nil
}

func TestValidateJSONUTF8AllowsSplitMultibyteRunes(t *testing.T) {
	body := []byte(`{"text":"你好"}`)
	handler := ValidateJSONUTF8(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var req map[string]any
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			t.Fatalf("unexpected decode error: %v", err)
		}
		w.WriteHeader(http.StatusNoContent)
	}))

	req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", &singleByteReadCloser{data: body})
	req.Header.Set("Content-Type", "application/json")
	rec := httptest.NewRecorder()

	handler.ServeHTTP(rec, req)

	if rec.Code != http.StatusNoContent {
		t.Fatalf("expected 204 for valid utf-8 json, got %d body=%q", rec.Code, rec.Body.String())
	}
}

func TestValidateJSONUTF8RejectsInvalidBytesBeforeJSONDecode(t *testing.T) {
	body := append([]byte(`{"text":"`), 0xff)
	body = append(body, []byte(`"}`)...)
	handler := ValidateJSONUTF8(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var req map[string]any
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			w.WriteHeader(http.StatusBadRequest)
			_, _ = w.Write([]byte(err.Error()))
			return
		}
		w.WriteHeader(http.StatusOK)
	}))

	req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json; charset=utf-8")
	rec := httptest.NewRecorder()

	handler.ServeHTTP(rec, req)

	if rec.Code != http.StatusBadRequest {
		t.Fatalf("expected 400 for invalid utf-8 json, got %d body=%q", rec.Code, rec.Body.String())
	}
	if !strings.Contains(strings.ToLower(rec.Body.String()), "invalid utf-8") {
		t.Fatalf("expected utf-8 validation error, got %q", rec.Body.String())
	}
}

func TestValidateJSONUTF8RejectsInvalidBytesWithoutJSONContentTypeOnKnownPath(t *testing.T) {
	body := append([]byte(`{"text":"`), 0xff)
	body = append(body, []byte(`"}`)...)
	handler := ValidateJSONUTF8(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var req map[string]any
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			w.WriteHeader(http.StatusBadRequest)
			_, _ = w.Write([]byte(err.Error()))
			return
		}
		w.WriteHeader(http.StatusOK)
	}))

	req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", bytes.NewReader(body))
	req.Header.Set("Content-Type", "text/plain")
	rec := httptest.NewRecorder()

	handler.ServeHTTP(rec, req)

	if rec.Code != http.StatusBadRequest {
		t.Fatalf("expected 400 for invalid utf-8 json, got %d body=%q", rec.Code, rec.Body.String())
	}
	if !strings.Contains(strings.ToLower(rec.Body.String()), "invalid utf-8") {
		t.Fatalf("expected utf-8 validation error, got %q", rec.Body.String())
	}
}

func TestValidateJSONUTF8RejectsTrailingInvalidBytesAfterJSONValue(t *testing.T) {
	body := append([]byte(`{"text":"ok"}`), 0xff)
	handler := ValidateJSONUTF8(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var req map[string]any
		if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
			w.WriteHeader(http.StatusBadRequest)
			_, _ = w.Write([]byte(err.Error()))
			return
		}
		w.WriteHeader(http.StatusOK)
	}))

	req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	rec := httptest.NewRecorder()

	handler.ServeHTTP(rec, req)

	if rec.Code != http.StatusBadRequest {
		t.Fatalf("expected 400 for trailing invalid utf-8, got %d body=%q", rec.Code, rec.Body.String())
	}
	if !strings.Contains(strings.ToLower(rec.Body.String()), "invalid utf-8") {
		t.Fatalf("expected utf-8 validation error, got %q", rec.Body.String())
	}
}

func TestIsJSONContentType(t *testing.T) {
	for _, raw := range []string{
		"application/json",
		"application/json; charset=utf-8",
		"application/problem+json",
		"application/vnd.api+json",
	} {
		if !isJSONContentType(raw) {
			t.Fatalf("expected %q to be recognized as json", raw)
		}
	}
	for _, raw := range []string{
		"multipart/form-data; boundary=abc",
		"text/plain",
		"application/octet-stream",
	} {
		if isJSONContentType(raw) {
			t.Fatalf("expected %q not to be recognized as json", raw)
		}
	}
}

func TestIsKnownJSONRequestPathIncludesGeminiStream(t *testing.T) {
	if !isKnownJSONRequestPath(http.MethodPost, "/v1beta/models/gemini-pro:streamGenerateContent") {
		t.Fatal("expected Gemini stream generate path to be recognized as json")
	}
}
@@ -516,32 +516,51 @@ function observeContinueState(state, chunk) {
  if (topID > 0) {
    state.responseMessageID = topID;
  }
  if (chunk.p === 'response/status') {
    setContinueStatus(state, asString(chunk.v));
  }
  observeContinueDirectPatch(state, chunk.p, chunk.v);
  if (chunk.p === 'response') {
    observeContinueBatchPatches(state, 'response', chunk.v);
  } else {
    observeContinueBatchPatches(state, '', chunk.v);
  }
  const response = chunk.v && typeof chunk.v === 'object' ? chunk.v.response : null;
  if (response && typeof response === 'object') {
    const id = numberValue(response.message_id);
    if (id > 0) {
      state.responseMessageID = id;
    }
    setContinueStatus(state, asString(response.status));
    if (response.auto_continue === true) {
      state.lastStatus = 'AUTO_CONTINUE';
    }
  }
  observeContinueResponseObject(state, response);
  const messageResponse = chunk.message && typeof chunk.message === 'object' && chunk.message.response;
  if (messageResponse && typeof messageResponse === 'object') {
    const id = numberValue(messageResponse.message_id);
    if (id > 0) {
      state.responseMessageID = id;
    }
    setContinueStatus(state, asString(messageResponse.status));
    observeContinueResponseObject(state, messageResponse);
  }

function observeContinueDirectPatch(state, path, value) {
  if (!state) {
    return;
  }
  switch (asString(path).trim().replace(/^\/+|\/+$/g, '')) {
    case 'response/status':
    case 'status':
    case 'response/quasi_status':
    case 'quasi_status':
      setContinueStatus(state, asString(value));
      break;
    case 'response/auto_continue':
    case 'auto_continue':
      if (value === true) {
        state.lastStatus = 'AUTO_CONTINUE';
      }
      break;
    default:
      break;
  }
}

function observeContinueResponseObject(state, response) {
  if (!state || !response || typeof response !== 'object') {
    return;
  }
  const id = numberValue(response.message_id);
  if (id > 0) {
    state.responseMessageID = id;
  }
  setContinueStatus(state, asString(response.status));
  if (response.auto_continue === true) {
    state.lastStatus = 'AUTO_CONTINUE';
  }
}

@@ -569,6 +588,12 @@ function observeContinueBatchPatches(state, parentPath, raw) {
    case 'quasi_status':
      setContinueStatus(state, asString(patch.v));
      break;
    case 'response/auto_continue':
    case 'auto_continue':
      if (patch.v === true) {
        state.lastStatus = 'AUTO_CONTINUE';
      }
      break;
    default:
      break;
  }
@@ -248,6 +248,9 @@ function replaceDSMLToolMarkupOutsideIgnored(text) {
    if (tag) {
      if (tag.dsmlLike) {
        out += `<${tag.closing ? '/' : ''}${tag.name}${raw.slice(tag.nameEnd, tag.end + 1)}`;
        if (raw[tag.end] !== '>') {
          out += '>';
        }
      } else {
        out += raw.slice(tag.start, tag.end + 1);
      }
@@ -424,31 +427,42 @@ function scanToolMarkupTagAt(text, start) {
  }
  const lower = raw.toLowerCase();
  let i = start + 1;
  while (i < raw.length && raw[i] === '<') {
    i += 1;
  }
  const closing = raw[i] === '/';
  if (closing) {
    i += 1;
  }
  let dsmlLike = false;
  if (i < raw.length && isToolMarkupPipe(raw[i])) {
    dsmlLike = true;
    i += 1;
  }
  if (lower.startsWith('dsml', i)) {
    dsmlLike = true;
    i += 'dsml'.length;
    while (i < raw.length && isToolMarkupSeparator(raw[i])) {
      i += 1;
    }
  }
  const prefix = consumeToolMarkupNamePrefix(raw, lower, i);
  i = prefix.next;
  const dsmlLike = prefix.dsmlLike;
  const { name, len } = matchToolMarkupName(lower, i);
  if (!name) {
    return null;
  }
  const nameEnd = i + len;
  const originalNameEnd = i + len;
  let nameEnd = originalNameEnd;
  while (nameEnd < raw.length && isToolMarkupPipe(raw[nameEnd])) {
    nameEnd += 1;
  }
  const hasTrailingPipe = nameEnd > originalNameEnd;
  if (!hasXmlTagBoundary(raw, nameEnd)) {
    return null;
  }
  const end = findXmlTagEnd(raw, nameEnd);
  let end = findXmlTagEnd(raw, nameEnd);
  if (end < 0) {
    if (!hasTrailingPipe) {
      return null;
    }
    end = nameEnd - 1;
  }
  if (hasTrailingPipe) {
    const nextLT = raw.indexOf('<', nameEnd);
    if (nextLT >= 0 && end >= nextLT) {
      end = nameEnd - 1;
    }
  }
  if (end < 0) {
    return null;
  }
@@ -520,37 +534,94 @@ function findPartialToolMarkupStart(text) {
  if (lastLT < 0) {
    return -1;
  }
  const tail = raw.slice(lastLT);
  const start = includeDuplicateLeadingLessThan(raw, lastLT);
  const tail = raw.slice(start);
  if (tail.includes('>')) {
    return -1;
  }
  const lowerTail = tail.toLowerCase();
  const candidates = [
    '<tool_calls', '<invoke', '<parameter',
    '<|tool_calls', '<|invoke', '<|parameter',
    '<|tool_calls', '<|invoke', '<|parameter',
    '<|dsml|tool_calls', '<|dsml|invoke', '<|dsml|parameter',
    '<|dsml|tool_calls', '<|dsml|invoke', '<|dsml|parameter',
    '<dsmltool_calls', '<dsmlinvoke', '<dsmlparameter',
    '<dsml tool_calls', '<dsml invoke', '<dsml parameter',
    '<dsml|tool_calls', '<dsml|invoke', '<dsml|parameter',
    '<|dsmltool_calls', '<|dsmlinvoke', '<|dsmlparameter',
    '<|dsml tool_calls', '<|dsml invoke', '<|dsml parameter',
  ];
  for (const candidate of candidates) {
    if (candidate.startsWith(lowerTail)) {
      return lastLT;
    }
  return isPartialToolMarkupTagPrefix(tail) ? start : -1;
}

function includeDuplicateLeadingLessThan(text, idx) {
  let out = idx;
  while (out > 0 && text[out - 1] === '<') {
    out -= 1;
  }
  return -1;
  return out;
}

function isToolMarkupPipe(ch) {
  return ch === '|' || ch === '|';
}

function isToolMarkupSeparator(ch) {
  return ch === ' ' || ch === '\t' || ch === '\r' || ch === '\n' || isToolMarkupPipe(ch);
function isPartialToolMarkupTagPrefix(text) {
  const raw = toStringSafe(text);
  if (!raw || raw[0] !== '<' || raw.includes('>')) {
    return false;
  }
  const lower = raw.toLowerCase();
  let i = 1;
  while (i < raw.length && raw[i] === '<') {
    i += 1;
  }
  if (i >= raw.length) {
    return true;
  }
  if (raw[i] === '/') {
    i += 1;
  }
  while (i <= raw.length) {
    if (i === raw.length) {
      return true;
    }
    if (hasToolMarkupNamePrefix(lower.slice(i))) {
      return true;
    }
    if ('dsml'.startsWith(lower.slice(i))) {
      return true;
    }
    const next = consumeToolMarkupNamePrefixOnce(raw, lower, i);
    if (!next.ok) {
      return false;
    }
    i = next.next;
  }
  return false;
}

function consumeToolMarkupNamePrefix(raw, lower, idx) {
  let next = idx;
  let dsmlLike = false;
  while (true) {
    const consumed = consumeToolMarkupNamePrefixOnce(raw, lower, next);
    if (!consumed.ok) {
      return { next, dsmlLike };
    }
    next = consumed.next;
    dsmlLike = true;
  }
}

function consumeToolMarkupNamePrefixOnce(raw, lower, idx) {
  if (idx < raw.length && isToolMarkupPipe(raw[idx])) {
    return { next: idx + 1, ok: true };
  }
  if (idx < raw.length && [' ', '\t', '\r', '\n'].includes(raw[idx])) {
    return { next: idx + 1, ok: true };
  }
  if (lower.startsWith('dsml', idx)) {
    return { next: idx + 'dsml'.length, ok: true };
  }
  return { next: idx, ok: false };
}

function hasToolMarkupNamePrefix(lowerTail) {
  for (const name of TOOL_MARKUP_NAMES) {
    if (lowerTail.startsWith(name) || name.startsWith(lowerTail)) {
      return true;
    }
  }
  return false;
}

function matchToolMarkupName(lower, start) {

@@ -775,10 +846,18 @@ function parseMarkupValue(raw, paramName = '') {
  if (cdata.ok) {
    const literal = parseJSONLiteralValue(cdata.value);
    if (literal.ok) {
      const literalArray = coerceArrayValue(literal.value, paramName);
      if (literalArray.ok) {
        return literalArray.value;
      }
      return literal.value;
    }
    const structured = parseStructuredCDATAParameterValue(paramName, cdata.value);
    return structured.ok ? structured.value : cdata.value;
    if (structured.ok) {
      return structured.value;
    }
    const looseArray = parseLooseJSONArrayValue(cdata.value, paramName);
    return looseArray.ok ? looseArray.value : cdata.value;
  }
  const s = toStringSafe(extractRawTagValue(raw)).trim();
  if (!s) {

@@ -791,8 +870,14 @@ function parseMarkupValue(raw, paramName = '') {
    return nested;
  }
  if (nested && typeof nested === 'object') {
    const nestedArray = coerceArrayValue(nested, paramName);
    if (nestedArray.ok) {
      return nestedArray.value;
    }
    if (isOnlyRawValue(nested)) {
      return toStringSafe(nested._raw);
      const rawValue = toStringSafe(nested._raw);
      const looseArray = parseLooseJSONArrayValue(rawValue, paramName);
      return looseArray.ok ? looseArray.value : rawValue;
    }
    return nested;
  }

@@ -800,8 +885,16 @@ function parseMarkupValue(raw, paramName = '') {

  const literal = parseJSONLiteralValue(s);
  if (literal.ok) {
    const literalArray = coerceArrayValue(literal.value, paramName);
    if (literalArray.ok) {
      return literalArray.value;
    }
    return literal.value;
  }
  const looseArray = parseLooseJSONArrayValue(s, paramName);
  if (looseArray.ok) {
    return looseArray.value;
  }
  return s;
}
@@ -937,6 +1030,226 @@ function parseJSONLiteralValue(raw) {
  }
}

function parseLooseJSONArrayValue(raw, paramName = '') {
  if (preservesCDATAStringParameter(paramName)) {
    return { ok: false, value: null };
  }
  const s = toStringSafe(raw).trim();
  if (!s) {
    return { ok: false, value: null };
  }
  const candidate = parseLooseJSONArrayCandidate(s, paramName);
  if (candidate.ok) {
    return candidate;
  }

  const segments = splitTopLevelJSONValues(s);
  if (segments.length < 2) {
    return { ok: false, value: null };
  }

  const out = [];
  for (const segment of segments) {
    const parsed = parseLooseArrayElementValue(segment);
    if (!parsed.ok) {
      return { ok: false, value: null };
    }
    out.push(parsed.value);
  }
  return { ok: true, value: out };
}

function parseLooseJSONArrayCandidate(raw, paramName = '') {
  const parsed = parseLooseArrayElementValue(raw);
  if (!parsed.ok) {
    return { ok: false, value: null };
  }
  return coerceArrayValue(parsed.value, paramName);
}

function parseLooseArrayElementValue(raw) {
  const s = toStringSafe(raw).trim();
  if (!s) {
    return { ok: false, value: null };
  }

  const literal = parseJSONLiteralValue(s);
  if (literal.ok) {
    return literal;
  }

  const repairedBackslashes = repairInvalidJSONBackslashes(s);
  if (repairedBackslashes !== s) {
    try {
      const parsed = JSON.parse(repairedBackslashes);
      return { ok: true, value: parsed };
    } catch (_err) {
      // Fall through.
    }
  }

  const repairedLoose = repairLooseJSON(s);
  if (repairedLoose !== s) {
    try {
      const parsed = JSON.parse(repairedLoose);
      return { ok: true, value: parsed };
    } catch (_err) {
      // Fall through.
    }
  }

  if (s.includes('<') && s.includes('>')) {
    const parsed = parseMarkupInput(s);
    if (Array.isArray(parsed)) {
      return { ok: true, value: parsed };
    }
    if (parsed && typeof parsed === 'object') {
      return { ok: true, value: parsed };
    }
  }

  return { ok: false, value: null };
}

function coerceArrayValue(value, paramName = '') {
  if (Array.isArray(value)) {
    return { ok: true, value };
  }
  if (!value || typeof value !== 'object') {
    return { ok: false, value: null };
  }

  const keys = Object.keys(value);
  if (keys.length !== 1) {
    return { ok: false, value: null };
  }

  if (Object.prototype.hasOwnProperty.call(value, 'item')) {
    const items = value.item;
    const nested = coerceArrayValue(items, '');
    return nested.ok ? nested : { ok: true, value: [items] };
  }

  if (paramName && Object.prototype.hasOwnProperty.call(value, paramName)) {
    const nested = coerceArrayValue(value[paramName], '');
    if (nested.ok) {
      return nested;
    }
  }

  return { ok: false, value: null };
}

function splitTopLevelJSONValues(raw) {
  const s = toStringSafe(raw).trim();
  if (!s) {
    return [];
  }

  const values = [];
  let start = 0;
  let depth = 0;
  let inString = false;
  let escaped = false;

  for (let i = 0; i < s.length; i += 1) {
    const ch = s[i];
    if (inString) {
      if (escaped) {
        escaped = false;
        continue;
      }
      if (ch === '\\') {
        escaped = true;
        continue;
      }
      if (ch === '"') {
        inString = false;
      }
      continue;
    }
    if (ch === '"') {
      inString = true;
      continue;
    }
    if (ch === '{' || ch === '[') {
      depth += 1;
      continue;
    }
    if (ch === '}' || ch === ']') {
      if (depth > 0) {
        depth -= 1;
      }
      continue;
    }
    if (ch === ',' && depth === 0) {
      const segment = s.slice(start, i).trim();
      if (!segment) {
        return [];
      }
      values.push(segment);
      start = i + 1;
    }
  }

  const last = s.slice(start).trim();
  if (!last) {
    return [];
  }
  values.push(last);
  return values.length > 1 ? values : [];
}
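The scanner in `splitTopLevelJSONValues` — track string and escape state plus brace depth, and split only on commas at depth zero — ports directly to Go. This is a hedged sketch of the same technique, not code from the repository:

```go
package main

import (
	"fmt"
	"strings"
)

// splitTopLevel splits a comma-separated run of JSON values, ignoring commas
// inside strings, objects, and arrays, mirroring the JS scanner's state machine.
func splitTopLevel(s string) []string {
	s = strings.TrimSpace(s)
	var values []string
	start, depth := 0, 0
	inString, escaped := false, false
	for i := 0; i < len(s); i++ {
		ch := s[i]
		if inString {
			switch {
			case escaped:
				escaped = false // the escaped character is consumed
			case ch == '\\':
				escaped = true
			case ch == '"':
				inString = false
			}
			continue
		}
		switch ch {
		case '"':
			inString = true
		case '{', '[':
			depth++
		case '}', ']':
			if depth > 0 {
				depth--
			}
		case ',':
			if depth == 0 {
				values = append(values, strings.TrimSpace(s[start:i]))
				start = i + 1
			}
		}
	}
	values = append(values, strings.TrimSpace(s[start:]))
	return values
}

func main() {
	// The comma inside "x,y" and the one inside [1,2] are not split points.
	fmt.Println(splitTopLevel(`{"a":1},{"b":"x,y"},[1,2]`))
}
```

The depth counter is shared between `{}` and `[]`, which is enough here because only balanced JSON values are expected; unbalanced closers are clamped at zero rather than rejected.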
function repairInvalidJSONBackslashes(s) {
  if (!s || !s.includes('\\')) {
    return s;
  }

  let out = '';
  for (let i = 0; i < s.length; i += 1) {
    const ch = s[i];
    if (ch !== '\\') {
      out += ch;
      continue;
    }
    if (i + 1 < s.length) {
      const next = s[i + 1];
      if ('"\\/bfnrt'.includes(next)) {
        out += `\\${next}`;
        i += 1;
        continue;
      }
      if (next === 'u' && i + 5 < s.length) {
        let isHex = true;
        for (let j = 1; j <= 4; j += 1) {
          const r = s[i + 1 + j];
          if (!/[0-9a-fA-F]/.test(r)) {
            isHex = false;
            break;
          }
        }
        if (isHex) {
          out += `\\u${s.slice(i + 2, i + 6)}`;
          i += 5;
          continue;
        }
      }
    }
    out += '\\\\';
  }
  return out;
}

function repairLooseJSON(s) {
  const raw = toStringSafe(s).trim();
  if (!raw) {
    return raw;
  }
  let out = raw.replace(/([{,]\s*)([a-zA-Z_][a-zA-Z0-9_]*)\s*:/g, '$1"$2":');
  out = out.replace(/(:\s*)(\{(?:[^{}]|\{[^{}]*\})*\}(?:\s*,\s*\{(?:[^{}]|\{[^{}]*\})*\})+)/g, '$1[$2]');
  return out;
}
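The escape-repair pass above keeps every legal JSON escape and doubles any stray backslash so a strict parser stops rejecting the string. The same idea can be sketched in Go (an illustrative port, not the repository's code):

```go
package main

import (
	"fmt"
	"strings"
)

// repairBackslashes keeps legal JSON escapes (\" \\ \/ \b \f \n \r \t and
// \uXXXX with four hex digits) and rewrites any other lone backslash as an
// escaped backslash.
func repairBackslashes(s string) string {
	isHex := func(b byte) bool {
		return (b >= '0' && b <= '9') || (b >= 'a' && b <= 'f') || (b >= 'A' && b <= 'F')
	}
	var out strings.Builder
	for i := 0; i < len(s); i++ {
		if s[i] != '\\' {
			out.WriteByte(s[i])
			continue
		}
		if i+1 < len(s) {
			next := s[i+1]
			if strings.IndexByte(`"\/bfnrt`, next) >= 0 {
				// A recognized two-character escape: copy it through unchanged.
				out.WriteByte('\\')
				out.WriteByte(next)
				i++
				continue
			}
			if next == 'u' && i+5 < len(s) &&
				isHex(s[i+2]) && isHex(s[i+3]) && isHex(s[i+4]) && isHex(s[i+5]) {
				// A full \uXXXX escape: copy all six bytes.
				out.WriteString(s[i : i+6])
				i += 5
				continue
			}
		}
		out.WriteString(`\\`) // stray backslash: escape it
	}
	return out.String()
}

func main() {
	// \d is not a JSON escape, so it gets doubled; \n is kept as-is.
	fmt.Println(repairBackslashes(`{"re":"\d+"}`))
}
```

This transform never shrinks the input, so it is safe to attempt before `JSON.parse`-style decoding and fall back to the original string on failure, as the JS caller does.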
function sanitizeLooseCDATA(text) {
  const raw = toStringSafe(text);
  if (!raw) {
@@ -1,55 +0,0 @@
'use strict';

const XML_TOOL_SEGMENT_TAGS = [
  '<|dsml|tool_calls>', '<|dsml|tool_calls\n', '<|dsml|tool_calls ',
  '<|dsml|tool_calls>', '<|dsml|tool_calls\n', '<|dsml|tool_calls ',
  '<|dsml|invoke ', '<|dsml|invoke\n', '<|dsml|invoke\t', '<|dsml|invoke\r',
  '<|dsmltool_calls>', '<|dsmltool_calls\n', '<|dsmltool_calls ',
  '<|dsmlinvoke ', '<|dsmlinvoke\n', '<|dsmlinvoke\t', '<|dsmlinvoke\r',
  '<|dsml tool_calls>', '<|dsml tool_calls\n', '<|dsml tool_calls ',
  '<|dsml invoke ', '<|dsml invoke\n', '<|dsml invoke\t', '<|dsml invoke\r',
  '<dsml|tool_calls>', '<dsml|tool_calls\n', '<dsml|tool_calls ',
  '<dsml|invoke ', '<dsml|invoke\n', '<dsml|invoke\t', '<dsml|invoke\r',
  '<dsmltool_calls>', '<dsmltool_calls\n', '<dsmltool_calls ',
  '<dsmlinvoke ', '<dsmlinvoke\n', '<dsmlinvoke\t', '<dsmlinvoke\r',
  '<dsml tool_calls>', '<dsml tool_calls\n', '<dsml tool_calls ',
  '<dsml invoke ', '<dsml invoke\n', '<dsml invoke\t', '<dsml invoke\r',
  '<|tool_calls>', '<|tool_calls\n', '<|tool_calls ',
  '<|invoke ', '<|invoke\n', '<|invoke\t', '<|invoke\r',
  '<|tool_calls>', '<|tool_calls\n', '<|tool_calls ',
  '<|invoke ', '<|invoke\n', '<|invoke\t', '<|invoke\r',
  '<tool_calls>', '<tool_calls\n', '<tool_calls ',
  '<invoke ', '<invoke\n', '<invoke\t', '<invoke\r',
];

const XML_TOOL_OPENING_TAGS = [
  '<|dsml|tool_calls',
  '<|dsml|tool_calls',
  '<|dsmltool_calls',
  '<|dsml tool_calls',
  '<dsml|tool_calls',
  '<dsmltool_calls',
  '<dsml tool_calls',
  '<|tool_calls',
  '<|tool_calls',
  '<tool_calls',
];

const XML_TOOL_CLOSING_TAGS = [
  '</|dsml|tool_calls>',
  '</|dsml|tool_calls>',
  '</|dsmltool_calls>',
  '</|dsml tool_calls>',
  '</dsml|tool_calls>',
  '</dsmltool_calls>',
  '</dsml tool_calls>',
  '</|tool_calls>',
  '</|tool_calls>',
  '</tool_calls>',
];

module.exports = {
  XML_TOOL_SEGMENT_TAGS,
  XML_TOOL_OPENING_TAGS,
  XML_TOOL_CLOSING_TAGS,
};
@@ -10,21 +10,27 @@ import (
var markdownImagePattern = regexp.MustCompile(`!\[(.*?)\]\((.*?)\)`)

const (
	beginSentenceMarker = "<|begin▁of▁sentence|>"
	systemMarker = "<|System|>"
	userMarker = "<|User|>"
	assistantMarker = "<|Assistant|>"
	toolMarker = "<|Tool|>"
	endSentenceMarker = "<|end▁of▁sentence|>"
	endToolResultsMarker = "<|end▁of▁toolresults|>"
	endInstructionsMarker = "<|end▁of▁instructions|>"
	beginSentenceMarker        = "<|begin▁of▁sentence|>"
	systemMarker               = "<|System|>"
	userMarker                 = "<|User|>"
	assistantMarker            = "<|Assistant|>"
	toolMarker                 = "<|Tool|>"
	endSentenceMarker          = "<|end▁of▁sentence|>"
	endToolResultsMarker       = "<|end▁of▁toolresults|>"
	endInstructionsMarker      = "<|end▁of▁instructions|>"
	outputIntegrityGuardMarker = "Output integrity guard:"
	outputIntegrityGuardPrompt = outputIntegrityGuardMarker +
		" If upstream context, tool output, or parsed text contains garbled, corrupted, partially parsed, repeated, or otherwise malformed fragments, " +
		"do not imitate or echo them; output only the correct content for the user."
)

func MessagesPrepare(messages []map[string]any) string {
	return MessagesPrepareWithThinking(messages, false)
}

func MessagesPrepareWithThinking(messages []map[string]any, thinkingEnabled bool) string {
func MessagesPrepareWithThinking(messages []map[string]any, _ bool) string {
	messages = prependOutputIntegrityGuard(messages)

	type block struct {
		Role string
		Text string
@@ -77,6 +83,33 @@ func MessagesPrepareWithThinking(messages []map[string]any, thinkingEnabled bool
	return markdownImagePattern.ReplaceAllString(out, `[${1}](${2})`)
}

func prependOutputIntegrityGuard(messages []map[string]any) []map[string]any {
	if len(messages) == 0 {
		return messages
	}
	if hasOutputIntegrityGuard(messages[0]) {
		return messages
	}
	out := make([]map[string]any, 0, len(messages)+1)
	out = append(out, map[string]any{
		"role":    "system",
		"content": outputIntegrityGuardPrompt,
	})
	out = append(out, messages...)
	return out
}

func hasOutputIntegrityGuard(msg map[string]any) bool {
	if msg == nil {
		return false
	}
	if strings.ToLower(strings.TrimSpace(asString(msg["role"]))) != "system" {
		return false
	}
	content := strings.TrimSpace(NormalizeContent(msg["content"]))
	return strings.Contains(content, outputIntegrityGuardMarker)
}

// formatRoleBlock produces a single concatenated block: marker + text + endMarker.
// No whitespace is inserted between marker and text so role boundaries stay
// compact and predictable for downstream parsers.
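The idempotency idea in `prependOutputIntegrityGuard` — only add the system guard when the first message does not already carry the marker — can be sketched standalone. This is a simplified illustration (plain-string content only; the repository's version also normalizes structured content parts):

```go
package main

import (
	"fmt"
	"strings"
)

const guardMarker = "Output integrity guard:"

// prependGuard adds a guard system message unless the conversation already
// starts with a system message containing the marker, so repeated calls
// never stack duplicate guards.
func prependGuard(messages []map[string]any) []map[string]any {
	if len(messages) > 0 {
		role, _ := messages[0]["role"].(string)
		if strings.EqualFold(strings.TrimSpace(role), "system") {
			if content, _ := messages[0]["content"].(string); strings.Contains(content, guardMarker) {
				return messages
			}
		}
	}
	out := make([]map[string]any, 0, len(messages)+1)
	out = append(out, map[string]any{
		"role":    "system",
		"content": guardMarker + " do not echo malformed fragments.",
	})
	return append(out, messages...)
}

func main() {
	msgs := []map[string]any{{"role": "user", "content": "Question"}}
	once := prependGuard(msgs)
	twice := prependGuard(once)
	fmt.Println(len(once), len(twice)) // guard added once, then left alone
}
```

Checking only `messages[0]` keeps the scan O(1) and preserves ordering guarantees: the guard is always the very first system message when present.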
@@ -35,8 +35,8 @@ func TestMessagesPrepareUsesTurnSuffixes(t *testing.T) {
	if !strings.HasPrefix(got, "<|begin▁of▁sentence|>") {
		t.Fatalf("expected begin-of-sentence marker, got %q", got)
	}
	if !strings.Contains(got, "<|System|>System rule<|end▁of▁instructions|>") {
		t.Fatalf("expected system instructions suffix, got %q", got)
	if !strings.Contains(got, "<|System|>") || !strings.Contains(got, "<|end▁of▁instructions|>") || !strings.Contains(got, "System rule") {
		t.Fatalf("expected system instructions to remain present, got %q", got)
	}
	if !strings.Contains(got, "<|User|>Question") {
		t.Fatalf("expected user question, got %q", got)

@@ -49,6 +49,23 @@
	}
}

func TestMessagesPreparePrependsOutputIntegrityGuard(t *testing.T) {
	messages := []map[string]any{
		{"role": "system", "content": "System rule"},
		{"role": "user", "content": "Question"},
	}
	got := MessagesPrepare(messages)
	if !strings.HasPrefix(got, beginSentenceMarker+systemMarker+outputIntegrityGuardPrompt) {
		t.Fatalf("expected output integrity guard to be prepended, got %q", got)
	}
	if !strings.Contains(got, outputIntegrityGuardPrompt+"\n\nSystem rule") {
		t.Fatalf("expected output integrity guard to precede system prompt content, got %q", got)
	}
	if !strings.Contains(got, "<|User|>Question") {
		t.Fatalf("expected user question after guard, got %q", got)
	}
}

func TestNormalizeContentArrayFallsBackToContentWhenTextEmpty(t *testing.T) {
	got := NormalizeContent([]any{
		map[string]any{"type": "text", "text": "", "content": "from-content"},
@@ -1,35 +1,108 @@
 package promptcompat
 
 import (
+	"fmt"
 	"strings"
 
 	"ds2api/internal/prompt"
 )
 
-const CurrentInputContextFilename = "history.txt"
+const CurrentInputContextFilename = "DS2API_HISTORY.txt"
+
+const historyTranscriptTitle = "# DS2API_HISTORY.txt"
+const historyTranscriptSummary = "Prior conversation history and tool progress."
 
 func BuildOpenAIHistoryTranscript(messages []any) string {
-	return buildOpenAIInjectedFileTranscript(messages)
+	return buildOpenAIHistoryTranscript(messages)
 }
 
 func BuildOpenAICurrentUserInputTranscript(text string) string {
 	if strings.TrimSpace(text) == "" {
 		return ""
 	}
-	return BuildOpenAICurrentInputContextTranscript([]any{
+	return buildOpenAIHistoryTranscript([]any{
 		map[string]any{"role": "user", "content": text},
 	})
 }
 
 func BuildOpenAICurrentInputContextTranscript(messages []any) string {
-	return buildOpenAIInjectedFileTranscript(messages)
+	return buildOpenAIHistoryTranscript(messages)
 }
 
-func buildOpenAIInjectedFileTranscript(messages []any) string {
-	normalized := NormalizeOpenAIMessagesForPrompt(messages, "")
-	transcript := strings.TrimSpace(prompt.MessagesPrepare(normalized))
+func buildOpenAIHistoryTranscript(messages []any) string {
+	if len(messages) == 0 {
+		return ""
+	}
+	var b strings.Builder
+	b.WriteString(historyTranscriptTitle)
+	b.WriteString("\n")
+	b.WriteString(historyTranscriptSummary)
+	b.WriteString("\n\n")
+
+	entry := 0
+	for _, raw := range messages {
+		msg, ok := raw.(map[string]any)
+		if !ok {
+			continue
+		}
+		role := normalizeOpenAIRoleForPrompt(strings.ToLower(strings.TrimSpace(asString(msg["role"]))))
+		content := strings.TrimSpace(buildOpenAIHistoryEntry(role, msg))
+		if content == "" {
+			continue
+		}
+		entry++
+		fmt.Fprintf(&b, "=== %d. %s ===\n%s\n\n", entry, strings.ToUpper(roleLabelForHistory(role)), content)
+	}
+
+	transcript := strings.TrimSpace(b.String())
 	if transcript == "" {
 		return ""
 	}
-	return transcript
+	return transcript + "\n"
 }
+
+func buildOpenAIHistoryEntry(role string, msg map[string]any) string {
+	switch role {
+	case "assistant":
+		return strings.TrimSpace(buildAssistantContentForPrompt(msg))
+	case "tool", "function":
+		return strings.TrimSpace(buildToolHistoryContent(msg))
+	case "system", "user":
+		return strings.TrimSpace(NormalizeOpenAIContentForPrompt(msg["content"]))
+	default:
+		return strings.TrimSpace(NormalizeOpenAIContentForPrompt(msg["content"]))
+	}
+}
+
+func buildToolHistoryContent(msg map[string]any) string {
+	content := strings.TrimSpace(NormalizeOpenAIContentForPrompt(msg["content"]))
+	parts := make([]string, 0, 2)
+	if name := strings.TrimSpace(asString(msg["name"])); name != "" {
+		parts = append(parts, "name="+name)
+	}
+	if callID := strings.TrimSpace(asString(msg["tool_call_id"])); callID != "" {
+		parts = append(parts, "tool_call_id="+callID)
+	}
+	header := ""
+	if len(parts) > 0 {
+		header = "[" + strings.Join(parts, " ") + "]"
+	}
+	switch {
+	case header != "" && content != "":
+		return header + "\n" + content
+	case header != "":
+		return header
+	default:
+		return content
+	}
+}
+
+func roleLabelForHistory(role string) string {
+	role = strings.ToLower(strings.TrimSpace(role))
+	switch role {
+	case "function":
+		return "tool"
+	case "":
+		return "unknown"
+	default:
+		return role
+	}
+}
@@ -88,6 +88,38 @@ func TestBuildOpenAIFinalPrompt_VercelPreparePathKeepsFinalAnswerInstruction(t *
 	}
 }
 
+func TestBuildOpenAIFinalPromptPrependsOutputIntegrityGuard(t *testing.T) {
+	messages := []any{
+		map[string]any{"role": "system", "content": "You are helpful"},
+		map[string]any{"role": "user", "content": "请调用工具"},
+	}
+	tools := []any{
+		map[string]any{
+			"type": "function",
+			"function": map[string]any{
+				"name":        "search",
+				"description": "search docs",
+				"parameters": map[string]any{
+					"type": "object",
+				},
+			},
+		},
+	}
+
+	finalPrompt, _ := buildOpenAIFinalPrompt(messages, tools, "", false)
+	guardIdx := strings.Index(finalPrompt, "Output integrity guard")
+	toolIdx := strings.Index(finalPrompt, "TOOL CALL FORMAT")
+	if guardIdx < 0 {
+		t.Fatalf("expected output integrity guard in final prompt, got: %q", finalPrompt)
+	}
+	if toolIdx < 0 {
+		t.Fatalf("expected tool instructions in final prompt, got: %q", finalPrompt)
+	}
+	if guardIdx > toolIdx {
+		t.Fatalf("expected output integrity guard to precede tool instructions, got: %q", finalPrompt)
+	}
+}
+
 func TestBuildOpenAIFinalPromptReadLikeToolIncludesCacheGuard(t *testing.T) {
 	messages := []any{
 		map[string]any{"role": "user", "content": "请读取文件"},
@@ -27,6 +27,7 @@ import (
 	"ds2api/internal/httpapi/openai/files"
 	"ds2api/internal/httpapi/openai/responses"
 	"ds2api/internal/httpapi/openai/shared"
+	"ds2api/internal/httpapi/requestbody"
 	"ds2api/internal/webui"
 )
 
@@ -75,6 +76,7 @@ func NewApp() (*App, error) {
 	r.Use(filteredLogger())
 	r.Use(middleware.Recoverer)
 	r.Use(cors)
+	r.Use(requestbody.ValidateJSONUTF8)
 	r.Use(timeout(0))
 
 	healthzHandler := func(w http.ResponseWriter, _ *http.Request) {
internal/server/router_utf8_test.go (new file, 89 lines)
@@ -0,0 +1,89 @@
+package server
+
+import (
+	"bytes"
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"testing"
+)
+
+func TestJSONRequestsRejectInvalidUTF8BeforeDecode(t *testing.T) {
+	t.Setenv("DS2API_CONFIG_JSON", `{"keys":["managed-key"],"accounts":[{"email":"u@example.com","password":"p"}]}`)
+	t.Setenv("DS2API_ENV_WRITEBACK", "0")
+
+	app, err := NewApp()
+	if err != nil {
+		t.Fatalf("NewApp() error: %v", err)
+	}
+
+	body := append([]byte(`{"model":"deepseek-v4-flash","messages":[{"role":"user","content":"`), 0xff)
+	body = append(body, []byte(`"}]}`)...)
+
+	req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", bytes.NewReader(body))
+	req.Header.Set("Content-Type", "application/json; charset=utf-8")
+	req.Header.Set("x-api-key", "direct-token")
+
+	rec := httptest.NewRecorder()
+	app.Router.ServeHTTP(rec, req)
+
+	if rec.Code != http.StatusBadRequest {
+		t.Fatalf("expected 400 for invalid utf-8 request body, got %d body=%q", rec.Code, rec.Body.String())
+	}
+	if !strings.Contains(strings.ToLower(rec.Body.String()), "invalid json") {
+		t.Fatalf("expected invalid json error, got %q", rec.Body.String())
+	}
+}
+
+func TestKnownJSONRequestsRejectInvalidUTF8WithoutJSONContentType(t *testing.T) {
+	t.Setenv("DS2API_CONFIG_JSON", `{"keys":["managed-key"],"accounts":[{"email":"u@example.com","password":"p"}]}`)
+	t.Setenv("DS2API_ENV_WRITEBACK", "0")
+
+	app, err := NewApp()
+	if err != nil {
+		t.Fatalf("NewApp() error: %v", err)
+	}
+
+	body := append([]byte(`{"model":"deepseek-v4-flash","messages":[{"role":"user","content":"`), 0xff)
+	body = append(body, []byte(`"}]}`)...)
+
+	req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", bytes.NewReader(body))
+	req.Header.Set("Content-Type", "text/plain")
+	req.Header.Set("x-api-key", "direct-token")
+
+	rec := httptest.NewRecorder()
+	app.Router.ServeHTTP(rec, req)
+
+	if rec.Code != http.StatusBadRequest {
+		t.Fatalf("expected 400 for invalid utf-8 request body, got %d body=%q", rec.Code, rec.Body.String())
+	}
+	if !strings.Contains(strings.ToLower(rec.Body.String()), "invalid json") {
+		t.Fatalf("expected invalid json error, got %q", rec.Body.String())
+	}
+}
+
+func TestJSONRequestsRejectTrailingInvalidUTF8AfterCompleteJSON(t *testing.T) {
+	t.Setenv("DS2API_CONFIG_JSON", `{"keys":["managed-key"],"accounts":[{"email":"u@example.com","password":"p"}]}`)
+	t.Setenv("DS2API_ENV_WRITEBACK", "0")
+
+	app, err := NewApp()
+	if err != nil {
+		t.Fatalf("NewApp() error: %v", err)
+	}
+
+	body := append([]byte(`{"model":"deepseek-v4-flash","messages":[{"role":"user","content":"ok"}]}`), 0xff)
+
+	req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", bytes.NewReader(body))
+	req.Header.Set("Content-Type", "application/json")
+	req.Header.Set("x-api-key", "direct-token")
+
+	rec := httptest.NewRecorder()
+	app.Router.ServeHTTP(rec, req)
+
+	if rec.Code != http.StatusBadRequest {
+		t.Fatalf("expected 400 for trailing invalid utf-8, got %d body=%q", rec.Code, rec.Body.String())
+	}
+	if !strings.Contains(strings.ToLower(rec.Body.String()), "invalid json") {
+		t.Fatalf("expected invalid json error, got %q", rec.Body.String())
+	}
+}
@@ -41,6 +41,15 @@ func TestCollectStreamTextOnly(t *testing.T) {
 	}
 }
 
+func TestCollectStreamHandlesLongSingleSSELine(t *testing.T) {
+	payload := strings.Repeat("x", 2*1024*1024+4096)
+	resp := makeHTTPResponse(makeLargeContentSSEBody(t, payload))
+	result := CollectStream(resp, false, true)
+	if result.Text != payload {
+		t.Fatalf("long SSE line payload mismatch: got len=%d want len=%d", len(result.Text), len(payload))
+	}
+}
+
 func TestCollectStreamThinkingAndText(t *testing.T) {
 	resp := makeHTTPResponse(
 		"data: {\"p\":\"response/thinking_content\",\"v\":\"Thinking...\"}\n" +
@@ -9,8 +9,7 @@ import (
 
 const (
 	parsedLineBufferSize = 128
-	scannerBufferSize    = 64 * 1024
-	maxScannerLineSize   = 2 * 1024 * 1024
+	lineReaderBufferSize = 64 * 1024
 	minFlushChars        = 160
 	maxFlushWait         = 80 * time.Millisecond
 )
@@ -29,8 +28,8 @@ func StartParsedLinePump(ctx context.Context, body io.Reader, thinkingEnabled bo
 		eof  bool
 	}
 	lineCh := make(chan scanItem, 1)
-	stopScanner := make(chan struct{})
-	defer close(stopScanner)
+	stopReader := make(chan struct{})
+	defer close(stopReader)
 	go func() {
 		sendScanItem := func(item scanItem) bool {
 			select {
@@ -38,20 +37,28 @@ func StartParsedLinePump(ctx context.Context, body io.Reader, thinkingEnabled bo
 				return true
 			case <-ctx.Done():
 				return false
-			case <-stopScanner:
+			case <-stopReader:
 				return false
 			}
 		}
 		defer close(lineCh)
-		scanner := bufio.NewScanner(body)
-		scanner.Buffer(make([]byte, 0, scannerBufferSize), maxScannerLineSize)
-		for scanner.Scan() {
-			line := append([]byte{}, scanner.Bytes()...)
-			if !sendScanItem(scanItem{line: line}) {
-				return
+		reader := bufio.NewReaderSize(body, lineReaderBufferSize)
+		for {
+			line, err := reader.ReadBytes('\n')
+			if len(line) > 0 {
+				line = append([]byte{}, line...)
+				if !sendScanItem(scanItem{line: line}) {
+					return
+				}
+			}
+			if err != nil {
+				if err == io.EOF {
+					err = nil
+				}
+				_ = sendScanItem(scanItem{err: err, eof: true})
+				return
 			}
 		}
-		_ = sendScanItem(scanItem{err: scanner.Err(), eof: true})
 	}()
 
 	ticker := time.NewTicker(maxFlushWait)
@@ -2,10 +2,23 @@ package sse
 
 import (
 	"context"
+	"encoding/json"
 	"strings"
 	"testing"
 )
 
+func makeLargeContentSSEBody(t *testing.T, payload string) string {
+	t.Helper()
+	line, err := json.Marshal(map[string]any{
+		"p": "response/content",
+		"v": payload,
+	})
+	if err != nil {
+		t.Fatalf("marshal SSE line failed: %v", err)
+	}
+	return "data: " + string(line) + "\n" + "data: [DONE]\n"
+}
+
 func TestStartParsedLinePumpParsesAndStops(t *testing.T) {
 	body := strings.NewReader("data: {\"p\":\"response/content\",\"v\":\"hi\"}\n\ndata: [DONE]\n")
 	results, done := StartParsedLinePump(context.Background(), body, false, "text")
@@ -28,3 +41,28 @@ func TestStartParsedLinePumpParsesAndStops(t *testing.T) {
 		t.Fatalf("expected last line to stop stream, got parsed=%v stop=%v", last.Parsed, last.Stop)
 	}
 }
+
+func TestStartParsedLinePumpHandlesLongSingleSSELine(t *testing.T) {
+	payload := strings.Repeat("x", 2*1024*1024+4096)
+	results, done := StartParsedLinePump(context.Background(), strings.NewReader(makeLargeContentSSEBody(t, payload)), false, "text")
+
+	var got strings.Builder
+	var sawDone bool
+	for r := range results {
+		for _, p := range r.Parts {
+			got.WriteString(p.Text)
+		}
+		if r.Stop {
+			sawDone = true
+		}
+	}
+	if err := <-done; err != nil {
+		t.Fatalf("unexpected long-line read error: %v", err)
+	}
+	if got.String() != payload {
+		t.Fatalf("long SSE line payload mismatch: got len=%d want len=%d", got.Len(), len(payload))
+	}
+	if !sawDone {
+		t.Fatal("expected DONE after long SSE line")
+	}
+}
internal/toolcall/toolcalls_array_parse.go (new file, 164 lines)
@@ -0,0 +1,164 @@
+package toolcall
+
+import (
+	"encoding/json"
+	"html"
+	"strings"
+)
+
+func parseLooseJSONArrayValue(raw, paramName string) ([]any, bool) {
+	if preservesCDATAStringParameter(paramName) {
+		return nil, false
+	}
+	trimmed := strings.TrimSpace(html.UnescapeString(raw))
+	if trimmed == "" {
+		return nil, false
+	}
+
+	if parsed, ok := parseLooseJSONArrayCandidate(trimmed, paramName); ok {
+		return parsed, true
+	}
+
+	segments, ok := splitTopLevelJSONValues(trimmed)
+	if !ok {
+		return nil, false
+	}
+
+	out := make([]any, 0, len(segments))
+	for _, segment := range segments {
+		parsed, ok := parseLooseArrayElementValue(segment)
+		if !ok {
+			return nil, false
+		}
+		out = append(out, parsed)
+	}
+	return out, true
+}
+
+func parseLooseJSONArrayCandidate(raw, paramName string) ([]any, bool) {
+	parsed, ok := parseLooseArrayElementValue(raw)
+	if !ok {
+		return nil, false
+	}
+	return coerceArrayValue(parsed, paramName)
+}
+
+func parseLooseArrayElementValue(raw string) (any, bool) {
+	trimmed := strings.TrimSpace(html.UnescapeString(raw))
+	if trimmed == "" {
+		return nil, false
+	}
+
+	var parsed any
+	if err := json.Unmarshal([]byte(trimmed), &parsed); err == nil {
+		return parsed, true
+	}
+
+	repairedBackslashes := repairInvalidJSONBackslashes(trimmed)
+	if repairedBackslashes != trimmed {
+		if err := json.Unmarshal([]byte(repairedBackslashes), &parsed); err == nil {
+			return parsed, true
+		}
+	}
+
+	repairedLoose := RepairLooseJSON(trimmed)
+	if repairedLoose != trimmed {
+		if err := json.Unmarshal([]byte(repairedLoose), &parsed); err == nil {
+			return parsed, true
+		}
+	}
+
+	if strings.Contains(trimmed, "<") && strings.Contains(trimmed, ">") {
+		if parsedXML, ok := parseXMLFragmentValue(trimmed); ok {
+			return parsedXML, true
+		}
+	}
+
+	return nil, false
+}
+
+func coerceArrayValue(value any, paramName string) ([]any, bool) {
+	switch x := value.(type) {
+	case []any:
+		return x, true
+	case map[string]any:
+		if len(x) != 1 {
+			return nil, false
+		}
+
+		if items, ok := x["item"]; ok {
+			if arr, ok := coerceArrayValue(items, ""); ok {
+				return arr, true
+			}
+			return []any{items}, true
+		}
+
+		if paramName != "" {
+			if wrapped, ok := x[paramName]; ok {
+				if arr, ok := coerceArrayValue(wrapped, ""); ok {
+					return arr, true
+				}
+			}
+		}
+	}
+	return nil, false
+}
+
+func splitTopLevelJSONValues(raw string) ([]string, bool) {
+	trimmed := strings.TrimSpace(raw)
+	if trimmed == "" {
+		return nil, false
+	}
+
+	values := make([]string, 0, 2)
+	start := 0
+	depth := 0
+	inString := false
+	escaped := false
+
+	for i, r := range trimmed {
+		if inString {
+			if escaped {
+				escaped = false
+				continue
+			}
+			switch r {
+			case '\\':
+				escaped = true
+			case '"':
+				inString = false
+			}
+			continue
+		}
+
+		switch r {
+		case '"':
+			inString = true
+		case '{', '[':
+			depth++
+		case '}', ']':
+			if depth > 0 {
+				depth--
+			}
+		case ',':
+			if depth == 0 {
+				segment := strings.TrimSpace(trimmed[start:i])
+				if segment == "" {
+					return nil, false
+				}
+				values = append(values, segment)
+				start = i + 1
+			}
+		}
+	}
+
+	last := strings.TrimSpace(trimmed[start:])
+	if last == "" {
+		return nil, false
+	}
+	values = append(values, last)
+	if len(values) < 2 {
+		return nil, false
+	}
+	return values, true
+}
@@ -44,6 +44,9 @@ func rewriteDSMLToolMarkupOutsideIgnored(text string) string {
 			}
 			b.WriteString(tag.Name)
 			b.WriteString(text[tag.NameEnd : tag.End+1])
+			if text[tag.End] != '>' {
+				b.WriteByte('>')
+			}
 			i = tag.End + 1
 			continue
 		}
@@ -298,11 +298,17 @@ func parseInvokeParameterValue(paramName, raw string) any {
 	}
 	if value, ok := extractStandaloneCDATA(trimmed); ok {
 		if parsed, ok := parseJSONLiteralValue(value); ok {
+			if parsedArray, ok := coerceArrayValue(parsed, paramName); ok {
+				return parsedArray
+			}
 			return parsed
 		}
 		if parsed, ok := parseStructuredCDATAParameterValue(paramName, value); ok {
 			return parsed
 		}
+		if parsed, ok := parseLooseJSONArrayValue(value, paramName); ok {
+			return parsed
+		}
 		return value
 	}
 	decoded := html.UnescapeString(extractRawTagValue(trimmed))
@@ -311,6 +317,9 @@ func parseInvokeParameterValue(paramName, raw string) any {
 	switch v := parsedValue.(type) {
 	case map[string]any:
 		if len(v) > 0 {
+			if parsedArray, ok := coerceArrayValue(v, paramName); ok {
+				return parsedArray
+			}
 			return v
 		}
 	case []any:
@@ -321,6 +330,12 @@ func parseInvokeParameterValue(paramName, raw string) any {
 			return ""
 		}
 		if parsedText, ok := parseJSONLiteralValue(text); ok {
+			if parsedArray, ok := coerceArrayValue(parsedText, paramName); ok {
+				return parsedArray
+			}
 			return parsedText
 		}
+		if parsedText, ok := parseLooseJSONArrayValue(text, paramName); ok {
+			return parsedText
+		}
 		return v
@@ -331,13 +346,25 @@ func parseInvokeParameterValue(paramName, raw string) any {
 	if parsed := parseStructuredToolCallInput(decoded); len(parsed) > 0 {
 		if len(parsed) == 1 {
 			if rawValue, ok := parsed["_raw"].(string); ok {
+				if parsedText, ok := parseLooseJSONArrayValue(rawValue, paramName); ok {
+					return parsedText
+				}
 				return rawValue
 			}
 		}
+		if parsedArray, ok := coerceArrayValue(parsed, paramName); ok {
+			return parsedArray
+		}
 		return parsed
 	}
 	}
 	if parsed, ok := parseJSONLiteralValue(decoded); ok {
+		if parsedArray, ok := coerceArrayValue(parsed, paramName); ok {
+			return parsedArray
+		}
 		return parsed
 	}
+	if parsed, ok := parseLooseJSONArrayValue(decoded, paramName); ok {
+		return parsed
+	}
 	return decoded
@@ -128,34 +128,39 @@ func scanToolMarkupTagAt(text string, start int) (ToolMarkupTag, bool) {
 	}
 	lower := strings.ToLower(text)
 	i := start + 1
 	for i < len(text) && text[i] == '<' {
 		i++
 	}
 	closing := false
 	if i < len(text) && text[i] == '/' {
 		closing = true
 		i++
 	}
-	dsmlLike := false
-	if next, ok := consumeToolMarkupPipe(text, i); ok {
-		dsmlLike = true
-		i = next
-	}
-	if strings.HasPrefix(lower[i:], "dsml") {
-		dsmlLike = true
-		i += len("dsml")
-		for next, ok := consumeToolMarkupSeparator(text, i); ok; next, ok = consumeToolMarkupSeparator(text, i) {
-			i = next
-		}
-	}
+	i, dsmlLike := consumeToolMarkupNamePrefix(lower, text, i)
 	name, nameLen := matchToolMarkupName(lower, i)
 	if nameLen == 0 {
 		return ToolMarkupTag{}, false
 	}
 	nameEnd := i + nameLen
+	nameEndBeforePipes := nameEnd
 	for next, ok := consumeToolMarkupPipe(text, nameEnd); ok; next, ok = consumeToolMarkupPipe(text, nameEnd) {
 		nameEnd = next
 	}
+	hasTrailingPipe := nameEnd > nameEndBeforePipes
 	if !hasToolMarkupBoundary(text, nameEnd) {
 		return ToolMarkupTag{}, false
 	}
 	end := findXMLTagEnd(text, nameEnd)
 	if end < 0 {
-		return ToolMarkupTag{}, false
+		if !hasTrailingPipe {
+			return ToolMarkupTag{}, false
+		}
+		end = nameEnd - 1
 	}
+	if hasTrailingPipe {
+		if nextLT := strings.IndexByte(text[nameEnd:], '<'); nextLT >= 0 && end >= nameEnd+nextLT {
+			end = nameEnd - 1
+		}
+	}
 	trimmed := strings.TrimSpace(text[start : end+1])
 	return ToolMarkupTag{
@@ -171,6 +176,74 @@ func scanToolMarkupTagAt(text string, start int) (ToolMarkupTag, bool) {
 	}, true
 }
 
+func IsPartialToolMarkupTagPrefix(text string) bool {
+	if text == "" || text[0] != '<' || strings.Contains(text, ">") {
+		return false
+	}
+	lower := strings.ToLower(text)
+	i := 1
+	for i < len(text) && text[i] == '<' {
+		i++
+	}
+	if i >= len(text) {
+		return true
+	}
+	if text[i] == '/' {
+		i++
+	}
+	for i <= len(text) {
+		if i == len(text) {
+			return true
+		}
+		if hasToolMarkupNamePrefix(lower[i:]) {
+			return true
+		}
+		if strings.HasPrefix("dsml", lower[i:]) {
+			return true
+		}
+		next, ok := consumeToolMarkupNamePrefixOnce(lower, text, i)
+		if !ok {
+			return false
+		}
+		i = next
+	}
+	return false
+}
+
+func consumeToolMarkupNamePrefix(lower, text string, idx int) (int, bool) {
+	dsmlLike := false
+	for {
+		next, ok := consumeToolMarkupNamePrefixOnce(lower, text, idx)
+		if !ok {
+			return idx, dsmlLike
+		}
+		idx = next
+		dsmlLike = true
+	}
+}
+
+func consumeToolMarkupNamePrefixOnce(lower, text string, idx int) (int, bool) {
+	if next, ok := consumeToolMarkupPipe(text, idx); ok {
+		return next, true
+	}
+	if idx < len(text) && (text[idx] == ' ' || text[idx] == '\t' || text[idx] == '\r' || text[idx] == '\n') {
+		return idx + 1, true
+	}
+	if strings.HasPrefix(lower[idx:], "dsml") {
+		return idx + len("dsml"), true
+	}
+	return idx, false
+}
+
+func hasToolMarkupNamePrefix(lowerTail string) bool {
+	for _, name := range toolMarkupNames {
+		if strings.HasPrefix(lowerTail, name) || strings.HasPrefix(name, lowerTail) {
+			return true
+		}
+	}
+	return false
+}
+
 func matchToolMarkupName(lower string, start int) (string, int) {
 	for _, name := range toolMarkupNames {
 		if strings.HasPrefix(lower[start:], name) {
@@ -193,19 +266,6 @@ func consumeToolMarkupPipe(text string, idx int) (int, bool) {
 	return idx, false
 }
 
-func consumeToolMarkupSeparator(text string, idx int) (int, bool) {
-	if idx >= len(text) {
-		return idx, false
-	}
-	if text[idx] == ' ' || text[idx] == '\t' || text[idx] == '\r' || text[idx] == '\n' {
-		return idx + 1, true
-	}
-	if next, ok := consumeToolMarkupPipe(text, idx); ok {
-		return next, true
-	}
-	return idx, false
-}
-
 func hasToolMarkupBoundary(text string, idx int) bool {
 	if idx >= len(text) {
 		return true
@@ -41,6 +41,52 @@ func TestParseToolCallsSupportsDSMLShell(t *testing.T) {
 	}
 }
 
+func TestParseToolCallsToleratesDSMLTrailingPipeTagTerminator(t *testing.T) {
+	text := strings.Join([]string{
+		`<|DSML|tool_calls| `,
+		`  <|DSML|invoke name="terminal">`,
+		`    <|DSML|parameter name="command"><![CDATA[find "/home" -type d]]></|DSML|parameter>`,
+		`    <|DSML|parameter name="timeout"><![CDATA[10]]></|DSML|parameter>`,
+		`  </|DSML|invoke>`,
+		`</|DSML|tool_calls>`,
+	}, "\n")
+	calls := ParseToolCalls(text, []string{"terminal"})
+	if len(calls) != 1 {
+		t.Fatalf("expected one trailing-pipe DSML call, got %#v", calls)
+	}
+	if calls[0].Name != "terminal" {
+		t.Fatalf("expected terminal tool, got %#v", calls[0])
+	}
+	if calls[0].Input["command"] != `find "/home" -type d` {
+		t.Fatalf("expected command argument, got %#v", calls[0].Input)
+	}
+	if calls[0].Input["timeout"] != float64(10) {
+		t.Fatalf("expected numeric timeout, got %#v", calls[0].Input)
+	}
+}
+
+func TestParseToolCallsToleratesExtraLeadingLessThanBeforeDSML(t *testing.T) {
+	text := `<<|DSML|tool_calls><<|DSML|invoke name="Bash"><<|DSML|parameter name="command"><![CDATA[pwd]]></|DSML|parameter></|DSML|invoke></|DSML|tool_calls>`
+	calls := ParseToolCalls(text, []string{"Bash"})
+	if len(calls) != 1 {
+		t.Fatalf("expected one extra-leading-less-than DSML call, got %#v", calls)
+	}
+	if calls[0].Name != "Bash" || calls[0].Input["command"] != "pwd" {
+		t.Fatalf("unexpected extra-leading-less-than DSML parse result: %#v", calls[0])
+	}
+}
+
+func TestParseToolCallsToleratesRepeatedDSMLPrefixNoise(t *testing.T) {
+	text := `<<DSML|DSML|tool_calls><<DSML|DSML|invoke name="Bash"><<DSML|DSML|parameter name="command"><![CDATA[git status]]></DSML|DSML|parameter></DSML|DSML|invoke></DSML|DSML|tool_calls>`
+	calls := ParseToolCalls(text, []string{"Bash"})
+	if len(calls) != 1 {
+		t.Fatalf("expected one repeated-prefix DSML call, got %#v", calls)
+	}
+	if calls[0].Name != "Bash" || calls[0].Input["command"] != "git status" {
+		t.Fatalf("unexpected repeated-prefix DSML parse result: %#v", calls[0])
+	}
+}
+
 func TestParseToolCallsSupportsDSMLShellWithCanonicalExampleInCDATA(t *testing.T) {
 	content := `<tool_calls><invoke name="demo"><parameter name="value">x</parameter></invoke></tool_calls>`
 	text := `<|DSML|tool_calls><|DSML|invoke name="Write"><|DSML|parameter name="file_path">notes.md</|DSML|parameter><|DSML|parameter name="content"><![CDATA[` + content + `]]></|DSML|parameter></|DSML|invoke></|DSML|tool_calls>`
@@ -248,6 +294,59 @@ func TestParseToolCallsTreatsSingleItemCDATAAsArray(t *testing.T) {
 	}
 }
 
+func TestParseToolCallsTreatsLooseJSONListAsArray(t *testing.T) {
+	tests := []struct {
+		name string
+		body string
+	}{
+		{
+			name: "plain text",
+			body: `{"content":"Test TodoWrite tool","status":"completed"}, {"content":"Another task","status":"pending"}`,
+		},
+		{
+			name: "cdata",
+			body: `<![CDATA[{"content":"Test TodoWrite tool","status":"completed"}, {"content":"Another task","status":"pending"}]]>`,
+		},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			text := `<tool_calls><invoke name="TodoWrite"><parameter name="todos">` + tt.body + `</parameter></invoke></tool_calls>`
+			calls := ParseToolCalls(text, []string{"TodoWrite"})
+			if len(calls) != 1 {
+				t.Fatalf("expected one TodoWrite call, got %#v", calls)
+			}
+			items, ok := calls[0].Input["todos"].([]any)
+			if !ok || len(items) != 2 {
+				t.Fatalf("expected loose JSON list to parse as array, got %#v", calls[0].Input["todos"])
+			}
+			first, ok := items[0].(map[string]any)
+			if !ok {
+				t.Fatalf("expected first todo object, got %#v", items[0])
+			}
+			if first["content"] != "Test TodoWrite tool" || first["status"] != "completed" {
+				t.Fatalf("unexpected first todo: %#v", first)
+			}
+		})
+	}
+}
+
+func TestParseToolCallsKeepsPreservedTextParametersAsText(t *testing.T) {
+	text := `<tool_calls><invoke name="Write"><parameter name="content"><![CDATA[{"content":"Test TodoWrite tool","status":"completed"}, {"content":"Another task","status":"pending"}]]></parameter></invoke></tool_calls>`
+	calls := ParseToolCalls(text, []string{"Write"})
+	if len(calls) != 1 {
+		t.Fatalf("expected one Write call, got %#v", calls)
+	}
+	got, ok := calls[0].Input["content"].(string)
+	if !ok {
+		t.Fatalf("expected content to stay a string, got %#v", calls[0].Input["content"])
+	}
+	want := `{"content":"Test TodoWrite tool","status":"completed"}, {"content":"Another task","status":"pending"}`
+	if got != want {
+		t.Fatalf("expected content to stay raw, got %q", got)
+	}
+}
+
 func TestParseToolCallsTreatsCDATAObjectFragmentAsObject(t *testing.T) {
 	payload := `<question><![CDATA[Pick one]]></question><options><item><label><![CDATA[A]]></label></item><item><label><![CDATA[B]]></label></item></options>`
 	text := `<tool_calls><invoke name="AskUserQuestion"><parameter name="questions"><![CDATA[` + payload + `]]></parameter></invoke></tool_calls>`
@@ -1,10 +1,6 @@
 package toolstream
 
-import (
-	"strings"
-
-	"ds2api/internal/toolcall"
-)
+import "ds2api/internal/toolcall"
 
 func ProcessChunk(state *State, chunk string, toolNames []string) []Event {
 	if state == nil {
@@ -174,31 +170,27 @@ func findToolSegmentStart(state *State, s string) int {
 	if s == "" {
 		return -1
 	}
-	lower := strings.ToLower(s)
 	offset := 0
 	for {
-		bestKeyIdx := -1
-		matchedTag := ""
-		for _, tag := range xmlToolTagsToDetect {
-			idx := strings.Index(lower[offset:], tag)
-			if idx >= 0 {
-				idx += offset
-				if bestKeyIdx < 0 || idx < bestKeyIdx {
-					bestKeyIdx = idx
-					matchedTag = tag
-				}
-			}
-		}
-		if bestKeyIdx < 0 {
+		tag, ok := toolcall.FindToolMarkupTagOutsideIgnored(s, offset)
+		if !ok {
 			return -1
 		}
-		if !insideCodeFenceWithState(state, s[:bestKeyIdx]) {
-			return bestKeyIdx
+		start := includeDuplicateLeadingLessThan(s, tag.Start)
+		if !insideCodeFenceWithState(state, s[:start]) {
+			return start
 		}
-		offset = bestKeyIdx + len(matchedTag)
+		offset = tag.End + 1
 	}
 }
 
+func includeDuplicateLeadingLessThan(s string, idx int) int {
+	for idx > 0 && s[idx-1] == '<' {
+		idx--
+	}
+	return idx
+}
+
 func consumeToolCapture(state *State, toolNames []string) (prefix string, calls []toolcall.ParsedToolCall, suffix string, ready bool) {
 	captured := state.capture.String()
 	if captured == "" {
@@ -153,27 +153,14 @@ func findPartialXMLToolTagStart(s string) int {
 	if lastLT < 0 {
 		return -1
 	}
-	tail := s[lastLT:]
+	start := includeDuplicateLeadingLessThan(s, lastLT)
+	tail := s[start:]
 	// If there's a '>' in the tail, the tag is closed — not partial.
 	if strings.Contains(tail, ">") {
 		return -1
 	}
-	lowerTail := strings.ToLower(tail)
-	for _, tag := range []string{
-		"<tool_calls", "<invoke", "<parameter",
-		"<|tool_calls", "<|invoke", "<|parameter",
-		"<|tool_calls", "<|invoke", "<|parameter",
-		"<|dsml|tool_calls", "<|dsml|invoke", "<|dsml|parameter",
-		"<|dsml|tool_calls", "<|dsml|invoke", "<|dsml|parameter",
-		"<dsmltool_calls", "<dsmlinvoke", "<dsmlparameter",
-		"<dsml tool_calls", "<dsml invoke", "<dsml parameter",
-		"<dsml|tool_calls", "<dsml|invoke", "<dsml|parameter",
-		"<|dsmltool_calls", "<|dsmlinvoke", "<|dsmlparameter",
-		"<|dsml tool_calls", "<|dsml invoke", "<|dsml parameter",
-	} {
-		if strings.HasPrefix(tag, lowerTail) {
-			return lastLT
-		}
-	}
+	if toolcall.IsPartialToolMarkupTagPrefix(tail) {
+		return start
+	}
 	return -1
 }
@@ -1,35 +0,0 @@
-package toolstream
-
-import "regexp"
-
-// --- XML tool call support for the streaming sieve ---
-
-//nolint:unused // kept as explicit tag inventory for future XML sieve refinements.
-var xmlToolCallClosingTags = []string{"</tool_calls>", "</|dsml|tool_calls>", "</|dsmltool_calls>", "</|dsml tool_calls>", "</dsml|tool_calls>", "</dsmltool_calls>", "</dsml tool_calls>", "</|tool_calls>", "</|tool_calls>"}
-
-// xmlToolCallBlockPattern matches a complete canonical XML tool call block.
-//
-//nolint:unused // reserved for future fast-path XML block detection.
-var xmlToolCallBlockPattern = regexp.MustCompile(`(?is)((?:<tool_calls\b|<\|dsml\|tool_calls\b)[^>]*>\s*(?:.*?)\s*(?:</tool_calls>|</\|dsml\|tool_calls>))`)
-
-// xmlToolTagsToDetect is the set of XML tag prefixes used by findToolSegmentStart.
-var xmlToolTagsToDetect = []string{
-	"<|dsml|tool_calls>", "<|dsml|tool_calls\n", "<|dsml|tool_calls ",
-	"<|dsml|tool_calls>", "<|dsml|tool_calls\n", "<|dsml|tool_calls ",
-	"<|dsml|invoke ", "<|dsml|invoke\n", "<|dsml|invoke\t", "<|dsml|invoke\r",
-	"<|dsmltool_calls>", "<|dsmltool_calls\n", "<|dsmltool_calls ",
-	"<|dsmlinvoke ", "<|dsmlinvoke\n", "<|dsmlinvoke\t", "<|dsmlinvoke\r",
-	"<|dsml tool_calls>", "<|dsml tool_calls\n", "<|dsml tool_calls ",
-	"<|dsml invoke ", "<|dsml invoke\n", "<|dsml invoke\t", "<|dsml invoke\r",
-	"<dsml|tool_calls>", "<dsml|tool_calls\n", "<dsml|tool_calls ",
-	"<dsml|invoke ", "<dsml|invoke\n", "<dsml|invoke\t", "<dsml|invoke\r",
-	"<dsmltool_calls>", "<dsmltool_calls\n", "<dsmltool_calls ",
-	"<dsmlinvoke ", "<dsmlinvoke\n", "<dsmlinvoke\t", "<dsmlinvoke\r",
-	"<dsml tool_calls>", "<dsml tool_calls\n", "<dsml tool_calls ",
-	"<dsml invoke ", "<dsml invoke\n", "<dsml invoke\t", "<dsml invoke\r",
-	"<|tool_calls>", "<|tool_calls\n", "<|tool_calls ",
-	"<|invoke ", "<|invoke\n", "<|invoke\t", "<|invoke\r",
-	"<|tool_calls>", "<|tool_calls\n", "<|tool_calls ",
-	"<|invoke ", "<|invoke\n", "<|invoke\t", "<|invoke\r",
-	"<tool_calls>", "<tool_calls\n", "<tool_calls ", "<invoke ", "<invoke\n", "<invoke\t", "<invoke\r",
-}
@@ -72,6 +72,97 @@ func TestProcessToolSieveInterceptsDSMLToolCallWithoutLeak(t *testing.T) {
 	}
 }
 
+func TestProcessToolSieveInterceptsDSMLTrailingPipeToolCallWithoutLeak(t *testing.T) {
+	var state State
+	chunks := []string{
+		"<|DSML|tool_calls| \n",
+		` <|DSML|invoke name="terminal">` + "\n",
+		` <|DSML|parameter name="command"><![CDATA[find "/home" -type d]]></|DSML|parameter>` + "\n",
+		` <|DSML|parameter name="timeout"><![CDATA[10]]></|DSML|parameter>` + "\n",
+		" </|DSML|invoke>\n",
+		"</|DSML|tool_calls>",
+	}
+	var events []Event
+	for _, c := range chunks {
+		events = append(events, ProcessChunk(&state, c, []string{"terminal"})...)
+	}
+	events = append(events, Flush(&state, []string{"terminal"})...)
+
+	var textContent strings.Builder
+	var calls []any
+	for _, evt := range events {
+		textContent.WriteString(evt.Content)
+		for _, call := range evt.ToolCalls {
+			calls = append(calls, call)
+		}
+	}
+	if text := textContent.String(); strings.Contains(strings.ToLower(text), "dsml") || strings.Contains(text, "terminal") {
+		t.Fatalf("trailing-pipe DSML tool call leaked to text: %q events=%#v", text, events)
+	}
+	if len(calls) != 1 {
+		t.Fatalf("expected one trailing-pipe DSML tool call, got %d events=%#v", len(calls), events)
+	}
+}
+
+func TestProcessToolSieveInterceptsExtraLeadingLessThanDSMLToolCallWithoutLeak(t *testing.T) {
+	var state State
+	chunks := []string{
+		"<<|DSML|tool_calls>\n",
+		` <<|DSML|invoke name="Bash">` + "\n",
+		` <<|DSML|parameter name="command"><![CDATA[pwd]]></|DSML|parameter>` + "\n",
+		" </|DSML|invoke>\n",
+		"</|DSML|tool_calls>",
+	}
+	var events []Event
+	for _, c := range chunks {
+		events = append(events, ProcessChunk(&state, c, []string{"Bash"})...)
+	}
+	events = append(events, Flush(&state, []string{"Bash"})...)
+
+	var textContent strings.Builder
+	toolCalls := 0
+	for _, evt := range events {
+		textContent.WriteString(evt.Content)
+		toolCalls += len(evt.ToolCalls)
+	}
+	if text := textContent.String(); strings.Contains(text, "<") || strings.Contains(text, "Bash") {
+		t.Fatalf("extra-leading-less-than DSML tool call leaked to text: %q events=%#v", text, events)
+	}
+	if toolCalls != 1 {
+		t.Fatalf("expected one extra-leading-less-than DSML tool call, got %d events=%#v", toolCalls, events)
+	}
+}
+
+func TestProcessToolSieveInterceptsRepeatedDSMLPrefixNoiseWithoutLeak(t *testing.T) {
+	var state State
+	chunks := []string{
+		"<<DSML|DSML|tool",
+		"_calls>\n",
+		` <<DSML|DSML|invoke name="Bash">` + "\n",
+		` <<DSML|DSML|parameter name="command"><![CDATA[git status]]></DSML|DSML|parameter>` + "\n",
+		" </DSML|DSML|invoke>\n",
+		"</DSML|DSML|tool_calls>",
+	}
+	var events []Event
+	for _, c := range chunks {
+		events = append(events, ProcessChunk(&state, c, []string{"Bash"})...)
+	}
+	events = append(events, Flush(&state, []string{"Bash"})...)
+
+	var textContent strings.Builder
+	toolCalls := 0
+	for _, evt := range events {
+		textContent.WriteString(evt.Content)
+		toolCalls += len(evt.ToolCalls)
+	}
+	if text := textContent.String(); strings.Contains(strings.ToLower(text), "dsml") || strings.Contains(text, "Bash") {
+		t.Fatalf("repeated-prefix DSML tool call leaked to text: %q events=%#v", text, events)
+	}
+	if toolCalls != 1 {
+		t.Fatalf("expected one repeated-prefix DSML tool call, got %d events=%#v", toolCalls, events)
+	}
+}
+
 func TestProcessToolSieveHandlesLongXMLToolCall(t *testing.T) {
 	var state State
 	const toolName = "write_to_file"
@@ -442,6 +533,8 @@ func TestFindToolSegmentStartDetectsXMLToolCalls(t *testing.T) {
 		want int
 	}{
 		{"tool_calls_tag", "some text <tool_calls>\n", 10},
+		{"dsml_trailing_pipe_tag", "some text <|DSML|tool_calls| \n", 10},
+		{"dsml_extra_leading_less_than", "some text <<|DSML|tool_calls>\n", 10},
 		{"invoke_tag_missing_wrapper", "some text <invoke name=\"read_file\">\n", 10},
 		{"bare_tool_call_text", "prefix <tool_call>\n", -1},
 		{"xml_inside_code_fence", "```xml\n<tool_calls><invoke name=\"read_file\"></invoke></tool_calls>\n```", -1},
@@ -465,6 +558,8 @@ func TestFindPartialXMLToolTagStart(t *testing.T) {
 		want int
 	}{
 		{"partial_tool_calls", "Hello <tool_ca", 6},
+		{"partial_dsml_trailing_pipe", "Hello <|DSML|tool_calls|", 6},
+		{"partial_dsml_extra_leading_less_than", "Hello <<|DSML|tool_calls", 6},
 		{"partial_invoke", "Hello <inv", 6},
 		{"bare_tool_call_not_held", "Hello <tool_name", -1},
 		{"partial_lt_only", "Text <", 5},
@@ -1,6 +1,7 @@
 package util
 
 import (
+	"strings"
 	"testing"
 
 	"ds2api/internal/config"
@@ -12,7 +13,10 @@ func TestMessagesPrepareBasic(t *testing.T) {
 	if got == "" {
 		t.Fatal("expected non-empty prompt")
 	}
-	if got != "<|begin▁of▁sentence|><|User|>Hello<|Assistant|>" {
+	if !strings.HasPrefix(got, "<|begin▁of▁sentence|><|System|>") {
+		t.Fatalf("expected output integrity guard at the start, got %q", got)
+	}
+	if !strings.Contains(got, "Hello") || !strings.HasSuffix(got, "<|Assistant|>") {
 		t.Fatalf("unexpected prompt: %q", got)
 	}
 }
@@ -26,8 +30,11 @@ func TestMessagesPrepareRoles(t *testing.T) {
 		{"role": "user", "content": "How are you"},
 	}
 	got := MessagesPrepare(messages)
-	if !contains(got, "<|System|>You are helper<|end▁of▁instructions|><|User|>Hi") {
-		t.Fatalf("expected system/user separation in %q", got)
+	if !contains(got, "Output integrity guard") {
+		t.Fatalf("expected output integrity guard in %q", got)
 	}
+	if !contains(got, "You are helper") || !contains(got, "<|User|>Hi") {
+		t.Fatalf("expected system/user content in %q", got)
+	}
 	if !contains(got, "<|begin▁of▁sentence|>") {
 		t.Fatalf("expected begin marker in %q", got)
@@ -77,9 +84,12 @@ func TestMessagesPrepareArrayTextVariants(t *testing.T) {
 		},
 	}
 	got := MessagesPrepare(messages)
-	if got != "<|begin▁of▁sentence|><|User|>line1\nline2<|Assistant|>" {
+	if !contains(got, "line1\nline2") {
 		t.Fatalf("unexpected content from text variants: %q", got)
 	}
+	if !strings.Contains(got, "Output integrity guard") {
+		t.Fatalf("expected output integrity guard in %q", got)
+	}
 }
 
 func TestConvertClaudeToDeepSeek(t *testing.T) {
@@ -1,11 +1,5 @@
 package util
 
-import (
-	"strings"
-
-	tiktoken "github.com/hupe1980/go-tiktoken"
-)
-
 const (
 	defaultTokenizerModel = "gpt-4o"
 	claudeTokenizerModel  = "claude"
@@ -33,41 +27,6 @@ func CountOutputTokens(text, model string) int {
 	return base
 }
 
-func countWithTokenizer(text, model string) int {
-	text = strings.TrimSpace(text)
-	if text == "" {
-		return 0
-	}
-	encoding, err := tiktoken.NewEncodingForModel(tokenizerModelForCount(model))
-	if err != nil {
-		return 0
-	}
-	ids, _, err := encoding.Encode(text, nil, nil)
-	if err != nil {
-		return 0
-	}
-	return len(ids)
-}
-
-func tokenizerModelForCount(model string) string {
-	model = strings.ToLower(strings.TrimSpace(model))
-	if model == "" {
-		return defaultTokenizerModel
-	}
-	switch {
-	case strings.HasPrefix(model, "claude"):
-		return claudeTokenizerModel
-	case strings.HasPrefix(model, "gpt-4"), strings.HasPrefix(model, "gpt-5"), strings.HasPrefix(model, "o1"), strings.HasPrefix(model, "o3"), strings.HasPrefix(model, "o4"):
-		return defaultTokenizerModel
-	case strings.HasPrefix(model, "deepseek-v4"):
-		return defaultTokenizerModel
-	case strings.HasPrefix(model, "deepseek"):
-		return defaultTokenizerModel
-	default:
-		return defaultTokenizerModel
-	}
-}
-
 func conservativePromptPadding(base int) int {
 	padding := base / 50
 	if padding < 4 {
7
internal/util/token_count_heuristic.go
Normal file
@@ -0,0 +1,7 @@
+//go:build 386 || arm || mips || mipsle || wasm
+
+package util
+
+func countWithTokenizer(_, _ string) int {
+	return 0
+}
98
internal/util/token_count_tiktoken.go
Normal file
@@ -0,0 +1,98 @@
+//go:build !386 && !arm && !mips && !mipsle && !wasm
+
+package util
+
+import (
+	"strings"
+	"sync"
+
+	tiktoken "github.com/hupe1980/go-tiktoken"
+)
+
+var (
+	tokenEncodingPools       sync.Map
+	tokenEncodingUnsupported sync.Map
+)
+
+func countWithTokenizer(text, model string) int {
+	text = strings.TrimSpace(text)
+	if text == "" {
+		return 0
+	}
+	encoding, release := tokenizerEncodingForCount(tokenizerModelForCount(model))
+	if encoding == nil {
+		return 0
+	}
+	defer release()
+	ids, _, err := encoding.Encode(text, nil, nil)
+	if err != nil {
+		return 0
+	}
+	return len(ids)
+}
+
+func tokenizerEncodingForCount(model string) (*tiktoken.Encoding, func()) {
+	model = strings.TrimSpace(model)
+	if model == "" {
+		model = defaultTokenizerModel
+	}
+	if _, ok := tokenEncodingUnsupported.Load(model); ok {
+		return nil, func() {}
+	}
+	if rawPool, ok := tokenEncodingPools.Load(model); ok {
+		pool, _ := rawPool.(*sync.Pool)
+		return getEncodingFromPool(pool)
+	}
+
+	encoding, err := tiktoken.NewEncodingForModel(model)
+	if err != nil {
+		tokenEncodingUnsupported.Store(model, struct{}{})
+		return nil, func() {}
+	}
+	pool := &sync.Pool{
+		New: func() any {
+			encoding, err := tiktoken.NewEncodingForModel(model)
+			if err != nil {
+				return nil
+			}
+			return encoding
+		},
+	}
+	actualPool, _ := tokenEncodingPools.LoadOrStore(model, pool)
+	pool, _ = actualPool.(*sync.Pool)
+	return encoding, func() {
+		pool.Put(encoding)
+	}
+}
+
+func getEncodingFromPool(pool *sync.Pool) (*tiktoken.Encoding, func()) {
+	if pool == nil {
+		return nil, func() {}
+	}
+	encoding, _ := pool.Get().(*tiktoken.Encoding)
+	if encoding == nil {
+		return nil, func() {}
+	}
+	return encoding, func() {
+		pool.Put(encoding)
+	}
+}
+
+func tokenizerModelForCount(model string) string {
+	model = strings.ToLower(strings.TrimSpace(model))
+	if model == "" {
+		return defaultTokenizerModel
+	}
+	switch {
+	case strings.HasPrefix(model, "claude"):
+		return claudeTokenizerModel
+	case strings.HasPrefix(model, "gpt-4"), strings.HasPrefix(model, "gpt-5"), strings.HasPrefix(model, "o1"), strings.HasPrefix(model, "o3"), strings.HasPrefix(model, "o4"):
+		return defaultTokenizerModel
+	case strings.HasPrefix(model, "deepseek-v4"):
+		return defaultTokenizerModel
+	case strings.HasPrefix(model, "deepseek"):
+		return defaultTokenizerModel
+	default:
+		return defaultTokenizerModel
+	}
+}
35
internal/util/token_count_tiktoken_test.go
Normal file
@@ -0,0 +1,35 @@
+//go:build !386 && !arm && !mips && !mipsle && !wasm
+
+package util
+
+import "testing"
+
+func TestTokenizerEncodingForCountCachesSupportedModel(t *testing.T) {
+	encoding, release := tokenizerEncodingForCount(defaultTokenizerModel)
+	if encoding == nil {
+		t.Fatalf("expected tokenizer encoding for %q", defaultTokenizerModel)
+	}
+	release()
+
+	if _, ok := tokenEncodingPools.Load(defaultTokenizerModel); !ok {
+		t.Fatalf("expected tokenizer encoding pool for %q", defaultTokenizerModel)
+	}
+
+	encoding, release = tokenizerEncodingForCount(defaultTokenizerModel)
+	if encoding == nil {
+		t.Fatalf("expected cached tokenizer encoding for %q", defaultTokenizerModel)
+	}
+	release()
+}
+
+func TestTokenizerEncodingForCountCachesUnsupportedModel(t *testing.T) {
+	const model = "__ds2api_unsupported_tokenizer_model__"
+	encoding, release := tokenizerEncodingForCount(model)
+	release()
+	if encoding != nil {
+		t.Fatalf("expected nil encoding for unsupported model %q", model)
+	}
+	if _, ok := tokenEncodingUnsupported.Load(model); !ok {
+		t.Fatalf("expected unsupported tokenizer model to be cached")
+	}
+}
@@ -1,23 +0,0 @@
-# Node split syntax gate targets
-# Keep this list in sync with api/chat-stream and internal/js/helpers/stream-tool-sieve split modules.
-
-api/chat-stream.js
-internal/js/chat-stream/index.js
-internal/js/chat-stream/error_shape.js
-internal/js/chat-stream/http_internal.js
-internal/js/chat-stream/proxy_go.js
-internal/js/chat-stream/sse_parse.js
-internal/js/chat-stream/stream_emitter.js
-internal/js/chat-stream/token_usage.js
-internal/js/chat-stream/toolcall_policy.js
-internal/js/chat-stream/vercel_stream.js
-
-internal/js/helpers/stream-tool-sieve.js
-internal/js/helpers/stream-tool-sieve/index.js
-internal/js/helpers/stream-tool-sieve/state.js
-internal/js/helpers/stream-tool-sieve/sieve.js
-internal/js/helpers/stream-tool-sieve/sieve-xml.js
-internal/js/helpers/stream-tool-sieve/jsonscan.js
-internal/js/helpers/stream-tool-sieve/parse.js
-internal/js/helpers/stream-tool-sieve/format.js
-internal/js/helpers/stream-tool-sieve/tool-keywords.js
@@ -1,149 +0,0 @@
-# Line gate targets for large-file decoupling refactor.
-# Backend default limit: 300 lines
-# Frontend (webui/) default limit: 500 lines
-# Entry/facade limit: 120 lines (enforced in script)
-# Test files are ignored by the gate script.
-
-internal/config/config.go
-internal/config/logger.go
-internal/config/paths.go
-internal/config/codec.go
-internal/config/store.go
-internal/config/store_index.go
-internal/config/store_accessors.go
-internal/config/account.go
-
-internal/httpapi/admin/configmgmt/handler_config_read.go
-internal/httpapi/admin/configmgmt/handler_config_write.go
-internal/httpapi/admin/configmgmt/handler_config_import.go
-internal/httpapi/admin/settings/handler_settings_read.go
-internal/httpapi/admin/settings/handler_settings_write.go
-internal/httpapi/admin/settings/handler_settings_parse.go
-internal/httpapi/admin/settings/handler_settings_runtime.go
-internal/httpapi/admin/accounts/handler_accounts_crud.go
-internal/httpapi/admin/accounts/handler_accounts_testing.go
-internal/httpapi/admin/accounts/handler_accounts_queue.go
-
-internal/account/pool_core.go
-internal/account/pool_acquire.go
-internal/account/pool_waiters.go
-internal/account/pool_limits.go
-
-internal/deepseek/client/client_core.go
-internal/deepseek/client/client_auth.go
-internal/deepseek/client/client_completion.go
-internal/deepseek/client/client_http_json.go
-internal/deepseek/client/client_http_helpers.go
-
-internal/format/openai/render_chat.go
-internal/format/openai/render_responses.go
-internal/format/openai/render_stream_events.go
-internal/format/openai/render_usage.go
-
-internal/httpapi/openai/shared/models.go
-internal/httpapi/openai/chat/handler_chat.go
-internal/httpapi/openai/shared/handler_errors.go
-internal/httpapi/openai/shared/handler_toolcall_policy.go
-internal/httpapi/openai/shared/handler_toolcall_format.go
-internal/httpapi/openai/responses/responses_handler.go
-internal/promptcompat/responses_input_normalize.go
-internal/promptcompat/responses_input_items.go
-internal/httpapi/openai/responses/responses_stream_runtime_core.go
-internal/httpapi/openai/responses/responses_stream_runtime_events.go
-internal/httpapi/openai/responses/responses_stream_runtime_toolcalls.go
-internal/toolstream/tool_sieve_state.go
-internal/toolstream/tool_sieve_core.go
-internal/toolstream/tool_sieve_xml.go
-internal/toolstream/tool_sieve_jsonscan.go
-
-internal/toolcall/toolcalls_parse.go
-internal/toolcall/toolcalls_candidates.go
-internal/toolcall/toolcalls_format.go
-
-internal/httpapi/claude/handler_routes.go
-internal/httpapi/claude/handler_messages.go
-internal/httpapi/claude/handler_tokens.go
-internal/httpapi/claude/handler_errors.go
-internal/httpapi/claude/handler_utils.go
-internal/httpapi/claude/stream_runtime_core.go
-internal/httpapi/claude/stream_runtime_emit.go
-internal/httpapi/claude/stream_runtime_finalize.go
-
-internal/httpapi/gemini/handler_routes.go
-internal/httpapi/gemini/handler_generate.go
-internal/httpapi/gemini/handler_stream_runtime.go
-internal/httpapi/gemini/handler_errors.go
-internal/httpapi/gemini/convert_request.go
-internal/httpapi/gemini/convert_messages.go
-internal/httpapi/gemini/convert_tools.go
-internal/httpapi/gemini/convert_passthrough.go
-
-internal/testsuite/runner_core.go
-internal/testsuite/runner_env.go
-internal/testsuite/runner_http.go
-internal/testsuite/runner_cases_openai.go
-internal/testsuite/runner_cases_openai_advanced.go
-internal/testsuite/runner_cases_admin.go
-internal/testsuite/runner_cases_claude.go
-internal/testsuite/runner_summary.go
-internal/testsuite/runner_utils.go
-internal/testsuite/runner_defaults.go
-internal/testsuite/runner_registry.go
-internal/testsuite/edge_cases_abort.go
-internal/testsuite/edge_cases_error_contract.go
-
-api/chat-stream.js
-internal/js/chat-stream/index.js
-internal/js/chat-stream/vercel_stream.js
-internal/js/chat-stream/proxy_go.js
-internal/js/chat-stream/sse_parse.js
-internal/js/chat-stream/http_internal.js
-internal/js/chat-stream/toolcall_policy.js
-internal/js/chat-stream/error_shape.js
-internal/js/chat-stream/token_usage.js
-internal/js/chat-stream/stream_emitter.js
-
-internal/js/helpers/stream-tool-sieve.js
-internal/js/helpers/stream-tool-sieve/index.js
-internal/js/helpers/stream-tool-sieve/state.js
-internal/js/helpers/stream-tool-sieve/sieve.js
-internal/js/helpers/stream-tool-sieve/sieve-xml.js
-internal/js/helpers/stream-tool-sieve/jsonscan.js
-internal/js/helpers/stream-tool-sieve/parse.js
-internal/js/helpers/stream-tool-sieve/format.js
-
-webui/src/App.jsx
-webui/src/app/AppRoutes.jsx
-webui/src/app/useAdminAuth.js
-webui/src/app/useAdminConfig.js
-webui/src/layout/DashboardShell.jsx
-
-webui/src/features/account/AccountManagerContainer.jsx
-webui/src/features/account/useAccountsData.js
-webui/src/features/account/useAccountActions.js
-webui/src/features/account/QueueCards.jsx
-webui/src/features/account/ApiKeysPanel.jsx
-webui/src/features/account/AccountsTable.jsx
-webui/src/features/account/AddKeyModal.jsx
-webui/src/features/account/AddAccountModal.jsx
-
-webui/src/features/apiTester/ApiTesterContainer.jsx
-webui/src/features/apiTester/useApiTesterState.js
-webui/src/features/apiTester/useChatStreamClient.js
-webui/src/features/apiTester/ConfigPanel.jsx
-webui/src/features/apiTester/ChatPanel.jsx
-
-webui/src/features/settings/SettingsContainer.jsx
-webui/src/features/settings/useSettingsForm.js
-webui/src/features/settings/settingsApi.js
-webui/src/features/settings/SecuritySection.jsx
-webui/src/features/settings/RuntimeSection.jsx
-webui/src/features/settings/BehaviorSection.jsx
-webui/src/features/settings/ModelSection.jsx
-webui/src/features/settings/BackupSection.jsx
-
-webui/src/features/vercel/VercelSyncContainer.jsx
-webui/src/features/vercel/useVercelSyncState.js
-webui/src/features/vercel/VercelSyncForm.jsx
-webui/src/features/vercel/VercelSyncStatus.jsx
-webui/src/features/vercel/VercelGuide.jsx
@@ -1,28 +0,0 @@
-# Stage 6 Manual Smoke Checklist
-
-- Date: 2026-02-22
-- Tester: release-maintainer
-- Environment: local macOS + latest Chrome
-
-## Items
-
-1. Login flow (`/admin/login`) succeeds and failure message shape unchanged.
-2. Account manager:
-   - add/edit/delete account
-   - queue status cards render and refresh
-3. API tester:
-   - non-stream request succeeds
-   - stream request receives incremental output and final state
-4. Settings:
-   - read settings
-   - save settings
-   - backup/export path works
-5. Vercel sync:
-   - status poll
-   - manual refresh
-   - sync action and status feedback text
-
-## Result
-
-- Status: `PASS`
-- Notes: login/account/api-tester/settings/vercel-sync smoke passed with no behavior regressions.
@@ -258,6 +258,28 @@ test('vercel stream exhausts DeepSeek continue before synthetic retry', async ()
   assert.equal(fetchBodies.some((body) => String(body.prompt || '').includes('Previous reply had no visible output')), false);
 });
 
+test('vercel stream continues direct quasi_status incomplete before final tool call', async () => {
+  const { frames, fetchURLs } = await runMockVercelStreamSequence([
+    [
+      'data: {"response_message_id":7,"p":"response/content","v":"<tool_calls><invoke name=\\"write_file\\"><parameter name=\\"content\\"><![CDATA[part-one"}\n\n',
+      'data: {"p":"response/quasi_status","v":"INCOMPLETE"}\n\n',
+      'data: [DONE]\n\n',
+    ],
+    [
+      'data: {"response_message_id":8,"p":"response/content","v":"-part-two]]></parameter></invoke></tool_calls>"}\n\n',
+      'data: {"p":"response/status","v":"FINISHED"}\n\n',
+      'data: [DONE]\n\n',
+    ],
+  ], { tool_names: ['write_file'] });
+  const parsed = frames.filter((frame) => frame !== '[DONE]').map((frame) => JSON.parse(frame));
+  const toolDelta = parsed.find((item) => item.choices?.[0]?.delta?.tool_calls);
+  assert.equal(fetchURLs.filter((url) => url === 'https://chat.deepseek.com/api/v0/chat/continue').length, 1);
+  assert.ok(toolDelta);
+  const args = JSON.parse(toolDelta.choices[0].delta.tool_calls[0].function.arguments);
+  assert.equal(args.content, 'part-one-part-two');
+  assert.equal(parsed.at(-1).choices[0].finish_reason, 'tool_calls');
+});
+
 
 test('vercel stream usage completion_tokens does not double-count visible output', async () => {
@@ -57,6 +57,49 @@ test('parseToolCalls parses DSML shell as XML-compatible tool call', () => {
   assert.deepEqual(calls[0].input, { path: 'README.MD' });
 });
 
+test('parseToolCalls tolerates DSML trailing pipe tag terminator', () => {
+  const payload = [
+    '<|DSML|tool_calls| ',
+    ' <|DSML|invoke name="terminal">',
+    ' <|DSML|parameter name="command"><![CDATA[find "/home" -type d]]></|DSML|parameter>',
+    ' <|DSML|parameter name="timeout"><![CDATA[10]]></|DSML|parameter>',
+    ' </|DSML|invoke>',
+    '</|DSML|tool_calls>',
+  ].join('\n');
+  const calls = parseToolCalls(payload, ['terminal']);
+  assert.equal(calls.length, 1);
+  assert.equal(calls[0].name, 'terminal');
+  assert.deepEqual(calls[0].input, { command: 'find "/home" -type d', timeout: 10 });
+});
+
+test('parseToolCalls tolerates extra leading less-than before DSML tags', () => {
+  const payload = [
+    '<<|DSML|tool_calls>',
+    ' <<|DSML|invoke name="Bash">',
+    ' <<|DSML|parameter name="command"><![CDATA[pwd]]></|DSML|parameter>',
+    ' </|DSML|invoke>',
+    '</|DSML|tool_calls>',
+  ].join('\n');
+  const calls = parseToolCalls(payload, ['Bash']);
+  assert.equal(calls.length, 1);
+  assert.equal(calls[0].name, 'Bash');
+  assert.deepEqual(calls[0].input, { command: 'pwd' });
+});
+
+test('parseToolCalls tolerates repeated DSML prefix noise', () => {
+  const payload = [
+    '<<DSML|DSML|tool_calls>',
+    ' <<DSML|DSML|invoke name="Bash">',
+    ' <<DSML|DSML|parameter name="command"><![CDATA[git status]]></DSML|DSML|parameter>',
+    ' </DSML|DSML|invoke>',
+    '</DSML|DSML|tool_calls>',
+  ].join('\n');
+  const calls = parseToolCalls(payload, ['Bash']);
+  assert.equal(calls.length, 1);
+  assert.equal(calls[0].name, 'Bash');
+  assert.deepEqual(calls[0].input, { command: 'git status' });
+});
+
 test('parseToolCalls tolerates DSML space-separator typo', () => {
   const payload = '<|DSML tool_calls><|DSML invoke name="Read"><|DSML parameter name="file_path"><![CDATA[/tmp/input.txt]]></|DSML parameter></|DSML invoke></|DSML tool_calls>';
   const calls = parseToolCalls(payload, ['Read']);
@@ -188,6 +231,28 @@ test('parseToolCalls treats single-item CDATA body as array', () => {
   assert.deepEqual(calls[0].input.todos, ['one']);
 });
 
+test('parseToolCalls treats loose JSON list as array', () => {
+  for (const [label, body] of [
+    ['plain text', '{"content":"Test TodoWrite tool","status":"completed"}, {"content":"Another task","status":"pending"}'],
+    ['cdata', '<![CDATA[{"content":"Test TodoWrite tool","status":"completed"}, {"content":"Another task","status":"pending"}]]>'],
+  ]) {
+    const payload = `<tool_calls><invoke name="TodoWrite"><parameter name="todos">${body}</parameter></invoke></tool_calls>`;
+    const calls = parseToolCalls(payload, ['TodoWrite']);
+    assert.equal(calls.length, 1, label);
+    assert.deepEqual(calls[0].input.todos, [
+      { content: 'Test TodoWrite tool', status: 'completed' },
+      { content: 'Another task', status: 'pending' },
+    ]);
+  }
+});
+
+test('parseToolCalls keeps preserved text parameters as text', () => {
+  const payload = '<tool_calls><invoke name="Write"><parameter name="content"><![CDATA[{"content":"Test TodoWrite tool","status":"completed"}, {"content":"Another task","status":"pending"}]]></parameter></invoke></tool_calls>';
+  const calls = parseToolCalls(payload, ['Write']);
+  assert.equal(calls.length, 1);
+  assert.equal(calls[0].input.content, '{"content":"Test TodoWrite tool","status":"completed"}, {"content":"Another task","status":"pending"}');
+});
+
 test('formatOpenAIStreamToolCalls normalizes camelCase inputSchema string fields', () => {
   const formatted = formatOpenAIStreamToolCalls([
     { name: 'Write', input: { content: { message: 'hi' }, taskId: 1 } },
@@ -285,6 +350,39 @@ test('sieve emits tool_calls for DSML space-separator typo', () => {
   assert.equal(text.includes('<|DSML invoke'), false);
 });
 
+test('sieve emits tool_calls for DSML trailing pipe tag terminator', () => {
+  const events = runSieve([
+    '<|DSML|tool_calls| \n',
+    '<|DSML|invoke name="terminal">\n',
+    '<|DSML|parameter name="command"><![CDATA[find "/home" -type d]]></|DSML|parameter>\n',
+    '<|DSML|parameter name="timeout"><![CDATA[10]]></|DSML|parameter>\n',
+    '</|DSML|invoke>\n',
+    '</|DSML|tool_calls>',
+  ], ['terminal']);
+  const finalCalls = events.filter((evt) => evt.type === 'tool_calls').flatMap((evt) => evt.calls || []);
+  const text = collectText(events);
+  assert.equal(finalCalls.length, 1);
+  assert.equal(finalCalls[0].name, 'terminal');
+  assert.deepEqual(finalCalls[0].input, { command: 'find "/home" -type d', timeout: 10 });
+  assert.equal(text.toLowerCase().includes('dsml'), false);
+});
+
+test('sieve emits tool_calls for extra leading less-than DSML tags without leaking prefix', () => {
+  const events = runSieve([
+    '<<|DSML|tool_calls>\n',
+    '<<|DSML|invoke name="Bash">\n',
+    '<<|DSML|parameter name="command"><![CDATA[pwd]]></|DSML|parameter>\n',
+    '</|DSML|invoke>\n',
+    '</|DSML|tool_calls>',
+  ], ['Bash']);
+  const finalCalls = events.filter((evt) => evt.type === 'tool_calls').flatMap((evt) => evt.calls || []);
|
||||
const text = collectText(events);
|
||||
assert.equal(finalCalls.length, 1);
|
||||
assert.equal(finalCalls[0].name, 'Bash');
|
||||
assert.deepEqual(finalCalls[0].input, { command: 'pwd' });
|
||||
assert.equal(text.includes('<'), false);
|
||||
});
|
||||
|
||||
test('sieve keeps DSML space lookalike tag names as text', () => {
|
||||
const input = '<|DSML tool_calls_extra><|DSML invoke name="Read"><|DSML parameter name="file_path">/tmp/input.txt</|DSML parameter></|DSML invoke></|DSML tool_calls_extra>';
|
||||
const events = runSieve([input], ['Read']);
|
||||
|
||||
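The tests above tolerate several malformed DSML tag spellings (doubled leading `<`, a space instead of the `|` separator, and a trailing-pipe terminator). As a hedged sketch — not the project's actual `parseToolCalls`/sieve internals, which are not shown in this diff — a pre-parse normalizer for exactly these variants could look like this; the function name is hypothetical:

```javascript
// Hypothetical helper: canonicalize malformed DSML tag spellings before
// XML-style parsing. Only the three known DSML tag names are rewritten, so
// lookalikes such as 'tool_calls_extra' stay untouched (as the tests require).
function normalizeDsmlTags(text) {
  return text
    // '<<|DSML|…' extra leading '<' -> single '<'
    .replace(/<<(?=\|DSML)/g, '<')
    // '<|DSML|tool_calls|' trailing-pipe terminator -> '<tool_calls>'
    .replace(/<\|DSML\|(tool_calls|invoke|parameter)\|(?=\s|$)/gm, '<$1>')
    // canonical '<|DSML|tag' and the space-separator typo '<|DSML tag'
    .replace(/<\|DSML[| ](tool_calls|invoke|parameter)\b/g, '<$1')
    // closing variants '</|DSML|tag>' and '</|DSML tag>'
    .replace(/<\/\|DSML[| ](tool_calls|invoke|parameter)>/g, '</$1>');
}
```

The word-boundary `\b` after the tag name is what keeps `<|DSML tool_calls_extra>` as plain text rather than a tool-call open tag.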
@@ -1,4 +1,4 @@
{"code":0,"msg":"","data":{"biz_code":0,"biz_msg":"","biz_data":{"id":"file-b10a2aca-39e9-4a38-be9d-9f22e398cb62","status":"PENDING","file_name":"history.txt","from_share":false,"file_size":732,"model_kind":"NORMAL","token_usage":null,"error_code":null,"inserted_at":1777485015.255,"updated_at":1777485015.255,"is_image":false,"audit_result":null}}}
{"code":0,"msg":"","data":{"biz_code":0,"biz_msg":"","biz_data":{"id":"file-b10a2aca-39e9-4a38-be9d-9f22e398cb62","status":"PENDING","file_name":"DS2API_HISTORY.txt","from_share":false,"file_size":732,"model_kind":"NORMAL","token_usage":null,"error_code":null,"inserted_at":1777485015.255,"updated_at":1777485015.255,"is_image":false,"audit_result":null}}}
event: ready
data: {"request_message_id":1,"response_message_id":2,"model_type":"default"}

@@ -1,4 +1,4 @@
{"code":0,"msg":"","data":{"biz_code":0,"biz_msg":"","biz_data":{"id":"file-9c8ae986-75f7-4611-9956-5e1b502f3ec2","status":"SUCCESS","file_name":"history.txt","from_share":false,"file_size":732,"model_kind":"NORMAL","token_usage":145,"error_code":null,"inserted_at":1777485076.42,"updated_at":1777485076.42,"signed_path":"/file?file_id=9c8ae986-75f7-4611-9956-5e1b502f3ec2&state=a1REa2AdO8JmDuxMFiUTPJfpiyY4ie2weyUpYxfvEOrk5lxUCZifpRw9toZAEzn3DAjkgbR6blgZf41KLkHBKwwrcYTIjfxTRKijDqjEfguis03yddpuVrii6keG4%2BXIlcLAsyZG3qcGhfTGVZhsr%2BRl17J%2BcnT9roslhxBcEy4rthFJVMWUI%2BSHjuo2gLEUDfvMfULQ1gSLVGtr%2Fpq%2FcNPCPSxZapIQv04ZVmJLcdbzRkz%2Bb%2BxM5RWUIPujp%2B3ke1WDa3%2B6S4pP0Pv%2BAJ0MFUjQsloUwO4AsJ8YhGBFWg8Ehe1b2yt1N%2Fi%2BIjLRPt5xiNmALcJJXIY%3D","is_image":false,"audit_result":null}}}
{"code":0,"msg":"","data":{"biz_code":0,"biz_msg":"","biz_data":{"id":"file-9c8ae986-75f7-4611-9956-5e1b502f3ec2","status":"SUCCESS","file_name":"DS2API_HISTORY.txt","from_share":false,"file_size":732,"model_kind":"NORMAL","token_usage":145,"error_code":null,"inserted_at":1777485076.42,"updated_at":1777485076.42,"signed_path":"/file?file_id=9c8ae986-75f7-4611-9956-5e1b502f3ec2&state=a1REa2AdO8JmDuxMFiUTPJfpiyY4ie2weyUpYxfvEOrk5lxUCZifpRw9toZAEzn3DAjkgbR6blgZf41KLkHBKwwrcYTIjfxTRKijDqjEfguis03yddpuVrii6keG4%2BXIlcLAsyZG3qcGhfTGVZhsr%2BRl17J%2BcnT9roslhxBcEy4rthFJVMWUI%2BSHjuo2gLEUDfvMfULQ1gSLVGtr%2Fpq%2FcNPCPSxZapIQv04ZVmJLcdbzRkz%2Bb%2BxM5RWUIPujp%2B3ke1WDa3%2B6S4pP0Pv%2BAJ0MFUjQsloUwO4AsJ8YhGBFWg8Ehe1b2yt1N%2Fi%2BIjLRPt5xiNmALcJJXIY%3D","is_image":false,"audit_result":null}}}
event: ready
data: {"request_message_id":1,"response_message_id":2,"model_type":"expert"}

@@ -5,8 +5,8 @@ ROOT_DIR="$(cd "$(dirname "$0")/../.." && pwd)"
TARGETS_FILE="${1:-$ROOT_DIR/plans/node-syntax-gate-targets.txt}"

if [[ ! -f "$TARGETS_FILE" ]]; then
  echo "missing targets file: $TARGETS_FILE" >&2
  exit 1
  echo "checked=0 missing=0 invalid=0"
  exit 0
fi

checked=0

@@ -41,8 +41,8 @@ is_test_file() {
}

if [[ ! -f "$TARGETS_FILE" ]]; then
  echo "missing targets file: $TARGETS_FILE" >&2
  exit 1
  echo "checked=0 missing=0 over_limit=0"
  exit 0
fi

missing=0

16
vercel.json
@@ -81,6 +81,22 @@
      "source": "/admin/version",
      "destination": "/api/index"
    },
    {
      "source": "/admin/chat-history(.*)",
      "destination": "/api/index"
    },
    {
      "source": "/admin/proxies(.*)",
      "destination": "/api/index"
    },
    {
      "source": "/admin/dev/raw-samples/(.*)",
      "destination": "/api/index"
    },
    {
      "source": "/admin/dev/captures(.*)",
      "destination": "/api/index"
    },
    {
      "source": "/admin",
      "destination": "/admin/index.html"

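The new rewrite entries route the admin chat-history, proxies, and dev-capture paths to the same `/api/index` handler. As a rough illustration only (Vercel's real matcher compiles `source` patterns with path-to-regexp, not raw regexes), the added sources behave approximately like anchored regexes:

```javascript
// Approximate matcher for the rewrite sources added in this hunk.
// The '(.*)' suffix means the bare path and any sub-path both match.
const rewriteSources = [
  '/admin/chat-history(.*)',
  '/admin/proxies(.*)',
  '/admin/dev/raw-samples/(.*)',
  '/admin/dev/captures(.*)',
]

function matchesRewrite(pathname) {
  return rewriteSources.some((src) => new RegExp('^' + src + '$').test(pathname))
}
```

Note that `/admin/dev/raw-samples/(.*)` requires a trailing segment, while the other three also match the bare path.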
@@ -217,6 +217,7 @@ export default function ApiTesterContainer({ config, onMessage, authFetch }) {
        setSelectedAccount={setSelectedAccount}
        effectiveKey={effectiveKey}
        selectedAccount={selectedAccount}
        model={model}
        onMessage={onMessage}
        response={response}
        isStreaming={isStreaming}

@@ -13,6 +13,7 @@ export default function ChatPanel({
  setSelectedAccount,
  effectiveKey,
  selectedAccount,
  model,
  onMessage,
  response,
  isStreaming,
@@ -37,11 +38,15 @@

    setUploadingFiles(true)
    const initialSelectedAccount = String(selectedAccount || '').trim()
    const selectedModel = String(model || '').trim()
    let boundAccount = initialSelectedAccount
    for (const file of files) {
      const formData = new FormData()
      formData.append('file', file)
      formData.append('purpose', 'assistants')
      if (selectedModel) {
        formData.append('model', selectedModel)
      }

      const headers = {
        'Authorization': `Bearer ${effectiveKey}`,
@@ -181,8 +186,9 @@
      />
      <div className="absolute left-2 bottom-2 z-10">
        <button
          type="button"
          onClick={() => fileInputRef.current?.click()}
          disabled={uploadingFiles || isStreaming || !hasAvailableModel}
          disabled={uploadingFiles || isStreaming}
          className="p-2 text-muted-foreground hover:text-primary transition-colors disabled:opacity-50 disabled:cursor-not-allowed"
          title="Attach files"
        >

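The ChatPanel hunk introduces a `boundAccount` variable that starts from the selected account and persists across the per-file upload loop, so a batch of attachments can stick to one upstream account. The diff does not show how `boundAccount` gets updated or sent; the sketch below is a hypothetical standalone version of that pattern, where the `X-Ds2-Target-Account` header and the `account` response field are assumptions, not confirmed by this hunk:

```javascript
// Hedged sketch of a sticky-account upload loop. Assumptions (not shown in
// the diff): the proxy accepts 'X-Ds2-Target-Account' as an account hint,
// and the upload response reports which account served the request.
async function uploadFilesWithStickyAccount(files, { effectiveKey, selectedAccount, model, fetchImpl }) {
  let boundAccount = String(selectedAccount || '').trim()
  const results = []
  for (const file of files) {
    const formData = new FormData()
    formData.append('file', file)
    formData.append('purpose', 'assistants')
    if (model) formData.append('model', model)
    const headers = { 'Authorization': `Bearer ${effectiveKey}` }
    if (boundAccount) headers['X-Ds2-Target-Account'] = boundAccount
    const res = await fetchImpl('/v1/files', { method: 'POST', headers, body: formData })
    const data = await res.json()
    // bind to the first account that serves an upload in this batch
    if (!boundAccount && data.account) boundAccount = data.account
    results.push(data)
  }
  return { results, boundAccount }
}
```

Keeping `fetchImpl` injectable makes the loop testable without a live server.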
@@ -16,7 +16,15 @@ const TOOL_MARKER = '<|Tool|>'
const END_INSTRUCTIONS_MARKER = '<|end▁of▁instructions|>'
const END_SENTENCE_MARKER = '<|end▁of▁sentence|>'
const END_TOOL_RESULTS_MARKER = '<|end▁of▁toolresults|>'
const CURRENT_INPUT_FILE_PROMPT = 'The current request and prior conversation context have already been provided. Answer the latest user request directly.'
const CURRENT_INPUT_FILE_PROMPT = 'Continue from the latest state in the attached DS2API_HISTORY.txt context. Treat it as the current working state and answer the latest user request directly.'
const LEGACY_CURRENT_INPUT_FILE_PROMPTS = new Set([
  'The current request and prior conversation context have already been provided. Answer the latest user request directly.',
])

function isCurrentInputFilePrompt(value) {
  const text = String(value || '').trim()
  return text === CURRENT_INPUT_FILE_PROMPT || LEGACY_CURRENT_INPUT_FILE_PROMPTS.has(text)
}

function formatDateTime(value, lang) {
  if (!value) return '-'
@@ -312,7 +320,7 @@ function buildListModeMessages(item, t) {

  const placeholderOnly = liveMessages.length === 1
    && String(liveMessages[0]?.role || '').trim().toLowerCase() === 'user'
    && String(liveMessages[0]?.content || '').trim() === CURRENT_INPUT_FILE_PROMPT
    && isCurrentInputFilePrompt(liveMessages[0]?.content)

  if (placeholderOnly) {
    return { messages: historyMessages, historyMerged: true }

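The hunk above changes the placeholder prompt text, so `buildListModeMessages` can no longer compare against a single constant: chat histories recorded before this change still contain the old wording. The extracted check below (constants and function copied verbatim from the hunk, shown standalone here) demonstrates that both the new and the legacy prompt are recognized:

```javascript
// Copied from the diff hunk above, assembled into a runnable standalone unit.
const CURRENT_INPUT_FILE_PROMPT = 'Continue from the latest state in the attached DS2API_HISTORY.txt context. Treat it as the current working state and answer the latest user request directly.'
const LEGACY_CURRENT_INPUT_FILE_PROMPTS = new Set([
  'The current request and prior conversation context have already been provided. Answer the latest user request directly.',
])

// Matches the current placeholder prompt or any legacy wording, after trimming.
function isCurrentInputFilePrompt(value) {
  const text = String(value || '').trim()
  return text === CURRENT_INPUT_FILE_PROMPT || LEGACY_CURRENT_INPUT_FILE_PROMPTS.has(text)
}
```

This is why the `placeholderOnly` condition swaps its strict equality check for `isCurrentInputFilePrompt`: old histories keep merging correctly after the prompt text changes.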
@@ -394,7 +394,7 @@
    "thinkingInjectionPromptHelp": "Leave empty to use the built-in default prompt shown as the input placeholder.",
    "currentInputFileTitle": "Independent Split",
    "currentInputFileEnabled": "Independent split (by size)",
    "currentInputFileDesc": "Enabled by default. Once the character threshold is reached, upload the full context as a history.txt context file.",
    "currentInputFileDesc": "Enabled by default. Once the character threshold is reached, upload the full context as a DS2API_HISTORY.txt context file.",
    "currentInputFileMinChars": "Current input threshold (characters)",
    "currentInputFileHelp": "Default is 0, which uses independent split for any non-empty input.",
    "compatibilityTitle": "Compatibility",
@@ -485,4 +485,4 @@
      "four": "Trigger a redeploy to apply the updated environment variables."
    }
  }
}
}
@@ -394,7 +394,7 @@
    "thinkingInjectionPromptHelp": "留空时使用内置默认提示词;默认内容会显示在输入框占位文本中。",
    "currentInputFileTitle": "独立拆分",
    "currentInputFileEnabled": "独立拆分(按量)",
    "currentInputFileDesc": "默认开启。达到字符阈值后,将完整上下文上传为 history.txt 上下文文件。",
    "currentInputFileDesc": "默认开启。达到字符阈值后,将完整上下文上传为 DS2API_HISTORY.txt 上下文文件。",
    "currentInputFileMinChars": "当前输入阈值(字符数)",
    "currentInputFileHelp": "默认 0,表示只要有输入就会使用独立拆分。",
    "compatibilityTitle": "兼容性设置",
@@ -485,4 +485,4 @@
      "four": "触发重新部署以应用新的环境变量。"
    }
  }
}
}