Compare commits

...

58 Commits

Author SHA1 Message Date
CJACK.
a10e03ebe0 Merge pull request #74 from CJackHwang/codex/fix-toolcall-whitelist-issue
Recognize and emit executable tool_calls in mixed prose streams; normalize roles and loosen tool-name matching
2026-03-03 00:40:41 +08:00
CJACK.
a6aa4a1839 Document tool-call behavior and fix outdated commands in the test docs 2026-03-03 00:39:02 +08:00
CJACK.
c329bf26b6 Merge pull request #72 from CJackHwang/codex/review-changes-to-test-account-logic
Normalize mobile login numbers, skip completion flow for session-only account tests, and add tests
2026-03-02 23:56:27 +08:00
CJACK.
3ae5b57ebe fix(deepseek): normalize mobile before login token refresh 2026-03-02 23:48:54 +08:00
CJACK
d731a1fd4f Gating 2026-03-01 07:20:24 +08:00
CJACK
93e9fb531d Align JS 2026-03-01 07:15:35 +08:00
CJACK
321b8a89ee Optimize 2026-03-01 06:42:07 +08:00
CJACK
d84875e466 Optimize tool calls 2026-03-01 06:33:49 +08:00
CJACK
ea8c9a28a9 Update README and icon 2026-03-01 06:22:41 +08:00
CJACK
a302fb3c25 Fix 2026-03-01 05:55:46 +08:00
CJACK.
b89e154e43 Merge pull request #63 from CJackHwang/codex/fix-issues-in-image-analysis
Use repository root Dockerfile, make Go cross-build robust, and fix process wait logic
2026-02-28 18:51:57 +08:00
CJACK.
01924f4a69 fix(docker): auto-detect target arch for local ARM builds 2026-02-28 18:39:33 +08:00
CJACK.
3725694bdf Merge pull request #61 from ronghuaxueleng/main
feat(webui): add search filtering to the account list
2026-02-28 18:16:41 +08:00
root
21b12f583a fix(admin): account tests always send a default message to verify the full pipeline
The test endpoint no longer verifies session creation only; it now always sends "你是谁?" ("Who are you?")
through the full completion path, so banned accounts are correctly reported as failures.
2026-02-28 10:18:26 +08:00
root
d97b86e0ee feat(webui): add search filtering to the account list
- Backend: GET /admin/accounts accepts a ?q= parameter, matching identifier/email/mobile case-insensitively
- Frontend: the search box sits in the title-bar button row (before the "Test All" button)
- Searching resets to page 1; the pagination total reflects the filtered count
- A dedicated empty-state message (Chinese and English) is shown when nothing matches
2026-02-28 09:57:19 +08:00
qiangcao
0869ea56cd Merge branch 'CJackHwang:main' into main 2026-02-28 09:18:20 +08:00
CJACK.
4768440627 Merge pull request #60 from CJackHwang/main
Sync
2026-02-27 23:18:44 +08:00
CJACK.
9f91da403f Merge pull request #59 from ronghuaxueleng/feature/account-improvements
feat: persist account test status, add a page-size selector, click account name to copy
2026-02-27 23:16:05 +08:00
CJACK.
89e5ad24b9 Merge pull request #57 from jacob-sheng/feat/zeabur-oneclick
feat(zeabur): one-click deploy template
2026-02-27 23:12:13 +08:00
CJACK.
3f106ac112 Merge pull request #55 from BigUncle/fix/claude-toolcall
fix(claude): fix tool-call compatibility and parsing fallback
2026-02-27 23:11:46 +08:00
root
f6f6a651fd feat: persist account test status, add a page-size selector, click account name to copy
- Add a TestStatus field to the Account struct, written to config.json after tests
- The listAccounts endpoint returns test_status; the frontend shows a red/green/yellow status dot accordingly
- The page-size selector supports 10/20/50/100/500/1000/2000/5000
- Clicking an account name copies it to the clipboard; hover shows a copy icon, and a green check appears after copying
2026-02-27 21:30:43 +08:00
root
37b867c7ad Merge branch 'docker' 2026-02-27 20:59:16 +08:00
root
25ea28a277 feat: persist account test status, add a page-size selector, click account name to copy
- Add a TestStatus field to the Account struct, written to config.json after tests
- The listAccounts endpoint returns test_status; the frontend shows a red/green/yellow status dot accordingly
- The page-size selector supports 10/20/50/100/500/1000/2000/5000
- Clicking an account name copies it to the clipboard; hover shows a copy icon, and a green check appears after copying
2026-02-27 20:58:18 +08:00
root
0ac49ab32b merge: merge main into docker, keeping docker-compose.yml and start.mjs 2026-02-27 20:21:20 +08:00
root
70c59eb71d chore: stop tracking .claude/ and CLAUDE.local.md in git 2026-02-27 20:19:00 +08:00
AYANGarch
f60a3ea501 docs(readme): add ds2api whale icon 2026-02-26 23:18:57 +08:00
AYANGarch
3f09d60cdc feat(zeabur): add one-click deploy template 2026-02-26 22:54:50 +08:00
BigUncle
d3b5493d2e fix(claude): guard thinking tool-call fallback when final text exists
- only parse tool_calls from thinking when finalText is empty

- apply the same guard in stream runtime finalizer

- add regression tests for non-stream and stream paths
2026-02-26 00:41:39 +08:00
BigUncle
255feb2e65 fix(claude): fix tool-call compatibility and parsing fallback
- Claude tool definitions accept both input_schema and function.parameters

- tool_calls parsing adds a thinking fallback and case-insensitive tool-name matching

- add regression tests for claude/util
2026-02-25 18:03:25 +08:00
CJACK.
4b73315df0 Merge pull request #51 from CJackHwang/dev
feat: Implement multi-stage Docker build for releases, reusing pre-bu…
2026-02-23 04:06:18 +08:00
CJACK
a086e0cfa1 feat: Refactor Dockerfile to use BusyBox for core utilities and update healthcheck commands in Docker Compose and deployment documentation. 2026-02-23 04:05:22 +08:00
CJACK
f3bc022a36 feat: Implement multi-stage Docker build for releases, reusing pre-built artifacts from CI and updating documentation. 2026-02-23 03:52:55 +08:00
CJACK
b7cb7ef0c1 ci: use gh cli for release asset upload 2026-02-23 02:20:05 +08:00
CJACK
267420a46a ci: add workflow_dispatch with release tag input 2026-02-23 02:01:01 +08:00
CJACK
3c66ab958a ci: fix GHCR probe and require explicit release tag upload 2026-02-23 01:58:08 +08:00
CJACK.
cf2f79b6f4 Merge pull request #50 from CJackHwang/dev
Update
2026-02-23 01:38:40 +08:00
CJACK
ab6e817c8e Update 2026-02-23 01:36:46 +08:00
CJACK.
9ae4630a3b Merge pull request #48 from CJackHwang/dev
Merge pull request #47 from CJackHwang/codex/fix-ci-workflow-errors-during-build

ci: make the release-artifacts workflow tolerant of GHCR timeouts and upload failures
2026-02-23 00:50:59 +08:00
CJACK.
d1b8537cfb Merge pull request #47 from CJackHwang/codex/fix-ci-workflow-errors-during-build
ci: make the release-artifacts workflow tolerant of GHCR timeouts and upload failures
2026-02-23 00:49:51 +08:00
CJACK.
d32b4481da ci: improve release-flow tolerance of GHCR network instability 2026-02-23 00:49:09 +08:00
CJACK.
52a04ac575 Merge pull request #46 from CJackHwang/dev
feat: prevent raw tool call JSON leakage for unknown or rejected tool calls and consolidate container publishing to GHCR.
2026-02-23 00:30:17 +08:00
CJACK
0d3d535c08 feat: prevent raw tool call JSON leakage for unknown or rejected tool calls and consolidate container publishing to GHCR. 2026-02-23 00:27:46 +08:00
CJACK.
224462018a Merge pull request #45 from CJackHwang/dev
Merge pull request #44 from CJackHwang/codex/investigate-release-workflow-error

ci: add a failure summary for Node unit tests
2026-02-22 23:36:36 +08:00
CJACK.
35e89230fd Merge pull request #44 from CJackHwang/codex/investigate-release-workflow-error
ci: add a failure summary for Node unit tests
2026-02-22 23:31:34 +08:00
CJACK.
9a57af6092 ci: add a failure summary for Node unit tests 2026-02-22 23:28:40 +08:00
CJACK.
2e1bd8a481 Merge pull request #42 from CJackHwang/codex/fix-sieve-tool-call-filtering-issues
fix(node): remove fallback resend of filtered tool calls and align with Go behavior
2026-02-22 23:07:49 +08:00
CJACK.
1e678ecc1a fix(node): remove fallback resend of filtered tool calls and align with Go behavior 2026-02-22 23:05:40 +08:00
root
962700f525 chore: delete unused files and clean leftover Python rules from .gitignore 2026-02-18 21:06:02 +08:00
root
e143d13ff6 feat: use China-region mirrors for builds and dependency installs 2026-02-18 20:57:23 +08:00
root
2f853d7364 feat: rewrite start.mjs for the Go runtime 2026-02-18 20:53:10 +08:00
root
36099a4ada chore: remove leftover Python files (project migrated to Go) 2026-02-18 20:50:07 +08:00
root
73bdb55cee merge: merge main into docker, keeping docker-compose.yml and the pagination API 2026-02-18 20:38:53 +08:00
root
3f3198c959 feat: account management UI improvements
- Account list supports pagination (10 per page, newest first)
- API key list supports expand/collapse
2026-02-07 13:40:14 +08:00
root
6b8f7f8821 feat: startup script prints all environment variables 2026-02-07 10:55:34 +08:00
root
ac9a1ae742 merge: merge main into docker 2026-02-07 10:28:18 +08:00
root
bd4c2bacbc merge: merge main into docker 2026-02-02 20:31:42 +08:00
root
6cfc7051c4 Merge remote-tracking branch 'origin/main' into docker 2026-02-02 20:29:11 +08:00
root
22a2a97a76 feat: add Docker and GitHub Actions support
- Add docker/Dockerfile multi-stage build (frontend + backend)
- Add docker-compose.yml for Aliyun-mirror deployment
- Add .github/workflows/release.yml for automatic publishing to Aliyun
- Add .dockerignore to optimize builds
- Add a VERSION file for version management
- Add start.mjs local development launcher
2026-02-02 20:23:33 +08:00
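A "Compare commits" page like the one above corresponds to `git log base..head` locally. The sketch below builds a throwaway repo to illustrate the range syntax; in a real checkout you would use the endpoints of the range shown above (e.g. `22a2a97a76..a10e03ebe0`).

```shell
# Demo of the commit-range syntax behind a compare view; the repo and
# commits here are placeholders created on the fly.
set -eu
repo="$(mktemp -d)"
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "demo"
echo one > file.txt && git add file.txt && git commit -qm "base commit"
base="$(git rev-parse HEAD)"
echo two > file.txt && git commit -qam "head commit"
head="$(git rev-parse HEAD)"
# Commits reachable from head but not from base -- what the compare page lists:
git log --oneline "${base}..${head}"
```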
96 changed files with 3919 additions and 1793 deletions

View File

@@ -10,7 +10,9 @@ __pycache__
 .Python
 build/
 develop-eggs/
-dist/
+dist/*
+!dist/docker-input/
+!dist/docker-input/*.tar.gz
 downloads/
 eggs/
 .eggs/

View File

@@ -1,20 +1,20 @@
 #### 💻 变更类型 | Change Type
 <!-- For change type, change [ ] to [x]. -->
 - [ ] ✨ feat
 - [ ] 🐛 fix
 - [ ] ♻️ refactor
 - [ ] 💄 style
 - [ ] 👷 build
 - [ ] ⚡️ perf
 - [ ] 📝 docs
 - [ ] 🔨 chore
 #### 🔀 变更说明 | Description of Change
 <!-- Thank you for your Pull Request. Please provide a description above. -->
 #### 📝 补充信息 | Additional Information
 <!-- Add any other context about the Pull Request here. -->

View File

@@ -4,6 +4,12 @@ on:
   release:
     types:
       - published
+  workflow_dispatch:
+    inputs:
+      release_tag:
+        description: "Release tag to build/publish (e.g. v2.1.6)"
+        required: true
+        type: string
 permissions:
   contents: write
@@ -13,8 +19,7 @@ jobs:
   build-and-upload:
     runs-on: ubuntu-latest
     env:
-      DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
-      DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
+      RELEASE_TAG: ${{ github.event.release.tag_name || github.event.inputs.release_tag }}
     steps:
       - name: Checkout
        uses: actions/checkout@v4
@@ -45,7 +50,7 @@ jobs:
       - name: Build Multi-Platform Archives
         run: |
           set -euo pipefail
-          TAG="${{ github.event.release.tag_name }}"
+          TAG="${RELEASE_TAG}"
           mkdir -p dist
           targets=(
@@ -82,25 +87,44 @@ jobs:
             rm -rf "${STAGE}"
           done
+      - name: Prepare Docker release inputs
+        run: |
+          set -euo pipefail
+          TAG="${RELEASE_TAG}"
+          mkdir -p dist/docker-input
+          cp "dist/ds2api_${TAG}_linux_amd64.tar.gz" "dist/docker-input/linux_amd64.tar.gz"
+          cp "dist/ds2api_${TAG}_linux_arm64.tar.gz" "dist/docker-input/linux_arm64.tar.gz"
       - name: Set up QEMU
         uses: docker/setup-qemu-action@v3
       - name: Set up Docker Buildx
         uses: docker/setup-buildx-action@v3
-      - name: Log in to GHCR
-        uses: docker/login-action@v3
-        with:
-          registry: ghcr.io
-          username: ${{ github.actor }}
-          password: ${{ secrets.GITHUB_TOKEN }}
-      - name: Log in to Docker Hub
-        if: "${{ env.DOCKERHUB_USERNAME != '' }}"
-        uses: docker/login-action@v3
-        with:
-          username: ${{ env.DOCKERHUB_USERNAME }}
-          password: ${{ env.DOCKERHUB_TOKEN }}
+      - name: Wait for GHCR endpoint
+        run: |
+          set -euo pipefail
+          for i in {1..6}; do
+            code="$(curl -sS -o /dev/null -w '%{http_code}' --max-time 15 https://ghcr.io/v2/ || true)"
+            if [ "${code}" = "200" ] || [ "${code}" = "401" ] || [ "${code}" = "405" ]; then
+              exit 0
+            fi
+            sleep "$((i * 10))"
+          done
+          echo "GHCR endpoint is unreachable after multiple retries (last status: ${code:-unknown})." >&2
+          exit 1
+      - name: Log in to GHCR (with retry)
+        run: |
+          set -euo pipefail
+          for i in {1..6}; do
+            if echo "${{ secrets.GITHUB_TOKEN }}" | docker login ghcr.io -u "${{ github.actor }}" --password-stdin; then
+              exit 0
+            fi
+            sleep "$((i * 10))"
+          done
+          echo "Failed to login to GHCR after multiple retries." >&2
+          exit 1
       - name: Extract Docker metadata
         id: meta_release
@@ -108,16 +132,19 @@ jobs:
         with:
           images: |
             ghcr.io/${{ github.repository }}
-            ${{ env.DOCKERHUB_USERNAME || 'cjackhwang' }}/ds2api
           tags: |
-            type=raw,value=${{ github.event.release.tag_name }}
+            type=raw,value=${{ env.RELEASE_TAG }}
             type=raw,value=latest
       - name: Build and Push Docker Image
         uses: docker/build-push-action@v6
+        env:
+          DOCKER_BUILD_RECORD_UPLOAD: "false"
+          DOCKER_BUILD_SUMMARY: "false"
         with:
           context: .
           file: ./Dockerfile
+          target: runtime-from-dist
           push: true
           platforms: linux/amd64,linux/arm64
           tags: ${{ steps.meta_release.outputs.tags }}
@@ -126,15 +153,17 @@ jobs:
       - name: Export Docker image archives for release assets
         run: |
           set -euo pipefail
-          TAG="${{ github.event.release.tag_name }}"
+          TAG="${RELEASE_TAG}"
           docker buildx build \
             --platform linux/amd64 \
+            --target runtime-from-dist \
             --output type=docker,dest="dist/ds2api_${TAG}_docker_linux_amd64.tar" \
             .
           docker buildx build \
             --platform linux/arm64 \
+            --target runtime-from-dist \
             --output type=docker,dest="dist/ds2api_${TAG}_docker_linux_arm64.tar" \
             .
@@ -146,10 +175,29 @@ jobs:
           set -euo pipefail
           (cd dist && sha256sum *.tar.gz *.zip > sha256sums.txt)
+      - name: Validate release tag
+        run: |
+          set -euo pipefail
+          TAG="${RELEASE_TAG}"
+          if [ -z "${TAG}" ]; then
+            echo "release tag is empty; set release_tag when using workflow_dispatch." >&2
+            exit 1
+          fi
       - name: Upload Release Assets
-        uses: softprops/action-gh-release@v2
-        with:
-          files: |
-            dist/*.tar.gz
-            dist/*.zip
-            dist/sha256sums.txt
+        env:
+          GH_TOKEN: ${{ github.token }}
+        run: |
+          set -euo pipefail
+          TAG="${RELEASE_TAG}"
+          FILES=(
+            dist/*.tar.gz
+            dist/*.zip
+            dist/sha256sums.txt
+          )
+          if gh release view "${TAG}" >/dev/null 2>&1; then
+            gh release upload "${TAG}" "${FILES[@]}" --clobber
+          else
+            gh release create "${TAG}" "${FILES[@]}" --title "${TAG}" --notes ""
+          fi

.github/workflows/release-dockerhub.yml (vendored, new file, 127 lines)
View File

@@ -0,0 +1,127 @@
name: Release to Docker Hub
on:
  workflow_dispatch:
    inputs:
      version_type:
        description: '版本类型'
        required: true
        default: 'patch'
        type: choice
        options:
          - patch
          - minor
          - major
permissions:
  contents: write
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v5
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}
      - name: Get current version
        id: get_version
        run: |
          LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0")
          TAG_VERSION=${LATEST_TAG#v}
          if [ -f VERSION ]; then
            FILE_VERSION=$(cat VERSION | tr -d '[:space:]')
          else
            FILE_VERSION="0.0.0"
          fi
          function version_gt() { test "$(printf '%s\n' "$@" | sort -V | head -n 1)" != "$1"; }
          if version_gt "$FILE_VERSION" "$TAG_VERSION"; then
            VERSION="$FILE_VERSION"
          else
            VERSION="$TAG_VERSION"
          fi
          echo "Current version: $VERSION"
          echo "current_version=$VERSION" >> $GITHUB_OUTPUT
      - name: Calculate next version
        id: next_version
        env:
          VERSION_TYPE: ${{ github.event.inputs.version_type }}
        run: |
          VERSION="${{ steps.get_version.outputs.current_version }}"
          BASE_VERSION=$(echo "$VERSION" | sed 's/-.*$//')
          IFS='.' read -r -a version_parts <<< "$BASE_VERSION"
          MAJOR="${version_parts[0]:-0}"
          MINOR="${version_parts[1]:-0}"
          PATCH="${version_parts[2]:-0}"
          case "$VERSION_TYPE" in
            major)
              NEW_VERSION="$((MAJOR + 1)).0.0"
              ;;
            minor)
              NEW_VERSION="${MAJOR}.$((MINOR + 1)).0"
              ;;
            *)
              NEW_VERSION="${MAJOR}.${MINOR}.$((PATCH + 1))"
              ;;
          esac
          echo "New version: $NEW_VERSION"
          echo "new_version=$NEW_VERSION" >> $GITHUB_OUTPUT
          echo "new_tag=v$NEW_VERSION" >> $GITHUB_OUTPUT
      - name: Update VERSION file
        run: |
          echo "${{ steps.next_version.outputs.new_version }}" > VERSION
      - name: Commit VERSION and create tag
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add VERSION
          if ! git diff --cached --quiet; then
            git commit -m "chore: bump version to ${{ steps.next_version.outputs.new_tag }} [skip ci]"
          fi
          NEW_TAG="${{ steps.next_version.outputs.new_tag }}"
          git tag -a "$NEW_TAG" -m "Release $NEW_TAG"
          git push origin HEAD:main "$NEW_TAG"
      # Docker 构建并推送到 Docker Hub
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./Dockerfile
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            ${{ secrets.DOCKERHUB_USERNAME }}/ds2api:${{ steps.next_version.outputs.new_tag }}
            ${{ secrets.DOCKERHUB_USERNAME }}/ds2api:${{ steps.next_version.outputs.new_version }}
            ${{ secrets.DOCKERHUB_USERNAME }}/ds2api:latest
          labels: |
            org.opencontainers.image.version=${{ steps.next_version.outputs.new_version }}
            org.opencontainers.image.revision=${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
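The version logic in this workflow boils down to two helpers: a `sort -V` comparison and a semver bump. A standalone sketch (`version_gt` is copied from the workflow; `bump_version` reimplements its `case`/`IFS` block as a POSIX-sh function, so the array syntax differs from the bash original):

```shell
# version_gt: true if $1 sorts strictly after $2 under version ordering.
version_gt() { test "$(printf '%s\n' "$@" | sort -V | head -n 1)" != "$1"; }

# bump_version VERSION {major|minor|patch}: strip any pre-release suffix,
# then increment the requested component.
bump_version() {
  base="$(echo "$1" | sed 's/-.*$//')"   # e.g. 2.1.6-rc1 -> 2.1.6
  IFS='.' read -r major minor patch <<EOF
$base
EOF
  case "$2" in
    major) printf '%s\n' "$((major + 1)).0.0" ;;
    minor) printf '%s\n' "${major}.$((minor + 1)).0" ;;
    *)     printf '%s\n' "${major}.${minor}.$((patch + 1))" ;;
  esac
}

version_gt "2.1.6" "2.1.5" && echo "2.1.6 is newer"
bump_version "2.1.6" patch      # 2.1.7
bump_version "2.1.6-rc1" minor  # 2.2.0
```

Note that `sort -V` orders `2.1.10` after `2.1.9`, which plain lexicographic comparison would get wrong; that is why the workflow uses it instead of a string compare.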

.github/workflows/release.yml (vendored, new file, 128 lines)
View File

@@ -0,0 +1,128 @@
name: Release to Aliyun CR
on:
  workflow_dispatch:
    inputs:
      version_type:
        description: '版本类型'
        required: true
        default: 'patch'
        type: choice
        options:
          - patch
          - minor
          - major
permissions:
  contents: write
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v5
        with:
          fetch-depth: 0
          token: ${{ secrets.GITHUB_TOKEN }}
      - name: Get current version
        id: get_version
        run: |
          LATEST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "v0.0.0")
          TAG_VERSION=${LATEST_TAG#v}
          if [ -f VERSION ]; then
            FILE_VERSION=$(cat VERSION | tr -d '[:space:]')
          else
            FILE_VERSION="0.0.0"
          fi
          function version_gt() { test "$(printf '%s\n' "$@" | sort -V | head -n 1)" != "$1"; }
          if version_gt "$FILE_VERSION" "$TAG_VERSION"; then
            VERSION="$FILE_VERSION"
          else
            VERSION="$TAG_VERSION"
          fi
          echo "Current version: $VERSION"
          echo "current_version=$VERSION" >> $GITHUB_OUTPUT
      - name: Calculate next version
        id: next_version
        env:
          VERSION_TYPE: ${{ github.event.inputs.version_type }}
        run: |
          VERSION="${{ steps.get_version.outputs.current_version }}"
          BASE_VERSION=$(echo "$VERSION" | sed 's/-.*$//')
          IFS='.' read -r -a version_parts <<< "$BASE_VERSION"
          MAJOR="${version_parts[0]:-0}"
          MINOR="${version_parts[1]:-0}"
          PATCH="${version_parts[2]:-0}"
          case "$VERSION_TYPE" in
            major)
              NEW_VERSION="$((MAJOR + 1)).0.0"
              ;;
            minor)
              NEW_VERSION="${MAJOR}.$((MINOR + 1)).0"
              ;;
            *)
              NEW_VERSION="${MAJOR}.${MINOR}.$((PATCH + 1))"
              ;;
          esac
          echo "New version: $NEW_VERSION"
          echo "new_version=$NEW_VERSION" >> $GITHUB_OUTPUT
          echo "new_tag=v$NEW_VERSION" >> $GITHUB_OUTPUT
      - name: Update VERSION file
        run: |
          echo "${{ steps.next_version.outputs.new_version }}" > VERSION
      - name: Commit VERSION and create tag
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add VERSION
          if ! git diff --cached --quiet; then
            git commit -m "chore: bump version to ${{ steps.next_version.outputs.new_tag }} [skip ci]"
          fi
          NEW_TAG="${{ steps.next_version.outputs.new_tag }}"
          git tag -a "$NEW_TAG" -m "Release $NEW_TAG"
          git push origin HEAD:main "$NEW_TAG"
      # Docker 构建并推送到阿里云
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Aliyun Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ secrets.ALIYUN_REGISTRY }}
          username: ${{ secrets.ALIYUN_REGISTRY_USER }}
          password: ${{ secrets.ALIYUN_REGISTRY_PASSWORD }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          file: ./Dockerfile
          platforms: linux/amd64,linux/arm64
          push: true
          tags: |
            ${{ secrets.ALIYUN_REGISTRY }}/${{ secrets.ALIYUN_REGISTRY_NAMESPACE }}/ds2api:${{ steps.next_version.outputs.new_tag }}
            ${{ secrets.ALIYUN_REGISTRY }}/${{ secrets.ALIYUN_REGISTRY_NAMESPACE }}/ds2api:${{ steps.next_version.outputs.new_version }}
            ${{ secrets.ALIYUN_REGISTRY }}/${{ secrets.ALIYUN_REGISTRY_NAMESPACE }}/ds2api:latest
          labels: |
            org.opencontainers.image.version=${{ steps.next_version.outputs.new_version }}
            org.opencontainers.image.revision=${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

.gitignore (vendored, 40 lines)
View File

@@ -2,37 +2,6 @@
 config.json
 .env
-# Python
-__pycache__/
-*.py[cod]
-*$py.class
-*.so
-.Python
-build/
-develop-eggs/
-dist/
-downloads/
-eggs/
-.eggs/
-lib/
-lib64/
-parts/
-sdist/
-var/
-wheels/
-pip-wheel-metadata/
-share/python-wheels/
-*.egg-info/
-.installed.cfg
-*.egg
-MANIFEST
-# Virtual environments
-venv/
-ENV/
-env/
-.venv
 # IDE
 .vscode/
 .idea/
@@ -44,7 +13,6 @@ env/
 # Logs
 *.log
 logs/
-uvicorn.log
 artifacts/
 # Vercel
@@ -56,8 +24,6 @@ webui/node_modules/
 webui/dist/
 .npm
 .pnpm-store/
-# 保留 webui/package-lock.json 用于 CI 缓存
-# package-lock.json # 如果有根目录的可以忽略
 yarn.lock
 pnpm-lock.yaml
@@ -86,7 +52,9 @@ coverage*.out
 cover/
 # Misc
-*.pyc
-*.pyo
 .git/
 Thumbs.db
+# Claude Code
+.claude/
+CLAUDE.local.md

API.md (7 lines)
View File

@@ -284,6 +284,11 @@ data: [DONE]
 **流式**:命中高置信特征后立即输出 `delta.tool_calls`(不等待完整 JSON 闭合),并持续发送 arguments 增量;已确认的 toolcall 原始 JSON 不会回流到 `delta.content`
+补充说明:
+- **非代码块上下文**下,工具 JSON 即使与普通文本混合,也会按特征识别并产出可执行 tool call,前后普通文本仍可透传。
+- Markdown fenced code block(例如 ```json ... ```)中的 `tool_calls` 仅视为示例文本,不会被执行。
 ---
 ### `GET /v1/models/{id}`
@@ -301,7 +306,7 @@ OpenAI Responses 风格接口,兼容 `input` 或 `messages`。
 | `messages` | array | ❌ | 与 `input` 二选一 |
 | `instructions` | string | ❌ | 自动前置为 system 消息 |
 | `stream` | boolean | ❌ | 默认 `false` |
-| `tools` | array | ❌ | 与 chat 同样的工具识别与转译策略 |
+| `tools` | array | ❌ | 与 chat 同样的工具识别与转译策略(含代码块示例豁免) |
 | `tool_choice` | string/object | ❌ | 支持 `auto`/`none`/`required` 与强制函数(`{"type":"function","name":"..."}`) |
 **非流式响应**:返回标准 `response` 对象,`id` 形如 `resp_xxx`,并写入内存 TTL 存储。

View File

@@ -135,11 +135,12 @@ docker-compose up -d --build
 ### 2.3 Docker Architecture
-The `Dockerfile` uses a three-stage build:
-1. **WebUI build stage**: `node:20` image, runs `npm ci && npm run build`
-2. **Go build stage**: `golang:1.24` image, compiles the binary
-3. **Runtime stage**: `debian:bookworm-slim` minimal image
+The `Dockerfile` now provides two image paths:
+1. **Default local/dev path (`runtime-from-source`)**: a three-stage build (WebUI build + Go build + runtime).
+2. **Release path (`runtime-from-dist`)**: CI first creates `dist/ds2api_<tag>_linux_<arch>.tar.gz`, then Docker directly reuses the binary and `static/admin` assets from those release archives, without running `npm build`/`go build` again.
+The release path keeps Docker images aligned with release archives and reduces duplicate build work.
 Container entry command: `/usr/local/bin/ds2api`, default exposed port: `5001`.
@@ -160,7 +161,7 @@ Docker Compose includes a built-in health check:
 ```yaml
 healthcheck:
-  test: ["CMD", "wget", "-qO-", "http://localhost:${PORT:-5001}/healthz"]
+  test: ["CMD", "/usr/local/bin/busybox", "wget", "-qO-", "http://localhost:${PORT:-5001}/healthz"]
   interval: 30s
   timeout: 10s
   retries: 3
@@ -174,6 +175,18 @@ If container logs look normal but the admin panel is unreachable, check these first:
 1. **Port alignment**: when `PORT` is not `5001`, use the same port in your URL (for example `http://localhost:8080/admin`).
 2. **WebUI assets in dev compose**: `docker-compose.dev.yml` runs `go run` in a dev image and does not auto-install Node.js inside the container; if `static/admin` is missing in your repo, `/admin` will return 404. Build once on host: `./scripts/build-webui.sh`.
+### 2.7 Zeabur One-Click (Dockerfile)
+This repo includes a `zeabur.yaml` template for one-click deployment on Zeabur:
+[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/L4CFHP)
+Notes:
+- **Port**: DS2API listens on `5001` by default; the template sets `PORT=5001`.
+- **Persistent config**: the template mounts `/data` and sets `DS2API_CONFIG_PATH=/data/config.json`. After importing config in the Admin UI, it is written and persisted to this path.
+- **First login**: after deployment, open `/admin` and log in with the `DS2API_ADMIN_KEY` shown in the Zeabur env/template instructions (recommended: rotate to a strong secret after first login).
 ---
 ## 3. Vercel Deployment
@@ -341,6 +354,7 @@ Built-in GitHub Actions workflow: `.github/workflows/release-artifacts.yml`
 - **Trigger**: only on Release `published` (no build on normal push)
 - **Outputs**: multi-platform binary archives + `sha256sums.txt`
+- **Container publishing**: GHCR only (`ghcr.io/cjackhwang/ds2api`)
 | Platform | Architecture | Format |
 | --- | --- | --- |
@@ -378,6 +392,16 @@ cp config.example.json config.json
 2. Wait for the `Release Artifacts` workflow to complete
 3. Download the matching archive from Release Assets
+### Pull from GHCR (Optional)
+```bash
+# latest
+docker pull ghcr.io/cjackhwang/ds2api:latest
+# specific version (example)
+docker pull ghcr.io/cjackhwang/ds2api:v2.1.2
+```
 ---
 ## 5. Reverse Proxy (Nginx)

View File

@@ -135,11 +135,12 @@ docker-compose up -d --build
 ### 2.3 Docker 架构说明
-`Dockerfile` 使用三阶段构建:
-1. **WebUI 构建阶段**:`node:20` 镜像,执行 `npm ci && npm run build`
-2. **Go 构建阶段**:`golang:1.24` 镜像,编译二进制文件
-3. **运行阶段**:`debian:bookworm-slim` 精简镜像
+`Dockerfile` 提供两条构建路径:
+1. **本地/开发默认路径(`runtime-from-source`)**:三阶段构建(WebUI 构建 + Go 构建 + 运行阶段)。
+2. **Release 路径(`runtime-from-dist`)**:CI 先生成 `dist/ds2api_<tag>_linux_<arch>.tar.gz`,再由 Docker 直接复用该发布包内的二进制和 `static/admin` 产物组装运行镜像,不再重复执行 `npm build`/`go build`。
+Release 路径可确保 Docker 镜像与 release 压缩包使用同一套产物,减少重复构建带来的差异。
 容器内启动命令:`/usr/local/bin/ds2api`,默认暴露端口 `5001`。
@@ -160,7 +161,7 @@ Docker Compose 已配置内置健康检查:
 ```yaml
 healthcheck:
-  test: ["CMD", "wget", "-qO-", "http://localhost:${PORT:-5001}/healthz"]
+  test: ["CMD", "/usr/local/bin/busybox", "wget", "-qO-", "http://localhost:${PORT:-5001}/healthz"]
   interval: 30s
   timeout: 10s
   retries: 3
@@ -174,6 +175,18 @@ healthcheck:
 1. **端口是否一致**:`PORT` 改成非 `5001` 时,访问地址也要改成对应端口(如 `http://localhost:8080/admin`)。
 2. **开发 compose 的 WebUI 静态文件**:`docker-compose.dev.yml` 使用 `go run` 开发镜像,不会在容器内自动安装 Node.js;若仓库里没有 `static/admin`,`/admin` 会返回 404。可先在宿主机构建一次:`./scripts/build-webui.sh`。
+### 2.7 Zeabur 一键部署(Dockerfile)
+仓库提供 `zeabur.yaml` 模板,可在 Zeabur 上一键部署:
+[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/L4CFHP)
+部署要点:
+- **端口**:服务默认监听 `5001`,模板会固定设置 `PORT=5001`。
+- **配置持久化**:模板挂载卷 `/data`,并设置 `DS2API_CONFIG_PATH=/data/config.json`;在管理台导入配置后,会写入并持久化到该路径。
+- **首次登录**:部署完成后访问 `/admin`,使用 Zeabur 环境变量/模板指引中的 `DS2API_ADMIN_KEY` 登录(建议首次登录后自行更换为强密码)。
 ---
 ## 三、Vercel 部署
@@ -341,6 +354,7 @@ No Output Directory named "public" found after the Build completed.
 - **触发条件**:仅在 Release `published` 时触发(普通 push 不会构建)
 - **构建产物**:多平台二进制压缩包 + `sha256sums.txt`
+- **容器镜像发布**:仅发布到 GHCR(`ghcr.io/cjackhwang/ds2api`)
 | 平台 | 架构 | 文件格式 |
 | --- | --- | --- |
@@ -378,6 +392,16 @@ cp config.example.json config.json
 2. 等待 Actions 工作流 `Release Artifacts` 完成
 3. 在 Release 的 Assets 下载对应平台压缩包
+### 拉取 GHCR 镜像(可选)
+```bash
+# latest
+docker pull ghcr.io/cjackhwang/ds2api:latest
+# 指定版本(示例)
+docker pull ghcr.io/cjackhwang/ds2api:v2.1.2
+```
 ---
 ## 五、反向代理(Nginx)

View File

@@ -8,19 +8,54 @@ RUN npm run build
 FROM golang:1.24 AS go-builder
 WORKDIR /app
-ARG TARGETOS=linux
-ARG TARGETARCH=amd64
+ARG TARGETOS
+ARG TARGETARCH
 COPY go.mod go.sum* ./
 RUN go mod download
 COPY . .
-RUN CGO_ENABLED=0 GOOS=${TARGETOS} GOARCH=${TARGETARCH} go build -o /out/ds2api ./cmd/ds2api
+RUN set -eux; \
+    GOOS="${TARGETOS:-$(go env GOOS)}"; \
+    GOARCH="${TARGETARCH:-$(go env GOARCH)}"; \
+    CGO_ENABLED=0 GOOS="${GOOS}" GOARCH="${GOARCH}" go build -o /out/ds2api ./cmd/ds2api
-FROM debian:bookworm-slim
+FROM busybox:1.36.1-musl AS busybox-tools
+
+FROM debian:bookworm-slim AS runtime-base
 WORKDIR /app
-RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates wget && rm -rf /var/lib/apt/lists/*
+COPY --from=go-builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
+COPY --from=busybox-tools /bin/busybox /usr/local/bin/busybox
+EXPOSE 5001
+CMD ["/usr/local/bin/ds2api"]
+
+FROM runtime-base AS runtime-from-source
 COPY --from=go-builder /out/ds2api /usr/local/bin/ds2api
 COPY --from=go-builder /app/sha3_wasm_bg.7b9ca65ddd.wasm /app/sha3_wasm_bg.7b9ca65ddd.wasm
 COPY --from=go-builder /app/config.example.json /app/config.example.json
 COPY --from=webui-builder /app/static/admin /app/static/admin
-EXPOSE 5001
-CMD ["/usr/local/bin/ds2api"]
+
+FROM busybox-tools AS dist-extract
+ARG TARGETARCH
+COPY dist/docker-input/linux_amd64.tar.gz /tmp/ds2api_linux_amd64.tar.gz
+COPY dist/docker-input/linux_arm64.tar.gz /tmp/ds2api_linux_arm64.tar.gz
+RUN set -eux; \
+    case "${TARGETARCH}" in \
+      amd64) ARCHIVE="/tmp/ds2api_linux_amd64.tar.gz" ;; \
+      arm64) ARCHIVE="/tmp/ds2api_linux_arm64.tar.gz" ;; \
+      *) echo "unsupported TARGETARCH: ${TARGETARCH}" >&2; exit 1 ;; \
+    esac; \
+    tar -xzf "${ARCHIVE}" -C /tmp; \
+    PKG_DIR="$(find /tmp -maxdepth 1 -type d -name "ds2api_*_linux_${TARGETARCH}" | head -n1)"; \
+    test -n "${PKG_DIR}"; \
+    mkdir -p /out/static; \
+    cp "${PKG_DIR}/ds2api" /out/ds2api; \
+    cp "${PKG_DIR}/sha3_wasm_bg.7b9ca65ddd.wasm" /out/sha3_wasm_bg.7b9ca65ddd.wasm; \
+    cp "${PKG_DIR}/config.example.json" /out/config.example.json; \
+    cp -R "${PKG_DIR}/static/admin" /out/static/admin
+
+FROM runtime-base AS runtime-from-dist
+COPY --from=dist-extract /out/ds2api /usr/local/bin/ds2api
+COPY --from=dist-extract /out/sha3_wasm_bg.7b9ca65ddd.wasm /app/sha3_wasm_bg.7b9ca65ddd.wasm
+COPY --from=dist-extract /out/config.example.json /app/config.example.json
+COPY --from=dist-extract /out/static/admin /app/static/admin
+
+FROM runtime-from-source AS final
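The `dist-extract` stage's `case` on `TARGETARCH` can be exercised outside Docker. A sketch of the same arch-to-archive dispatch as a plain shell function (the function name and paths are illustrative; in the real build, Buildx injects `TARGETARCH` per platform):

```shell
# Map a Buildx TARGETARCH value to the release archive the dist-extract
# stage would unpack; unknown values fail the build, as in the Dockerfile.
archive_for_arch() {
  case "$1" in
    amd64) echo "dist/docker-input/linux_amd64.tar.gz" ;;
    arm64) echo "dist/docker-input/linux_arm64.tar.gz" ;;
    *) echo "unsupported TARGETARCH: $1" >&2; return 1 ;;
  esac
}

archive_for_arch amd64    # dist/docker-input/linux_amd64.tar.gz
archive_for_arch riscv64 2>/dev/null || echo "build would fail here"
```

Failing fast on an unexpected architecture is the safer default: a silently wrong archive would produce an image whose binary cannot run on the target platform.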

View File

@@ -1,3 +1,7 @@
<p align="center">
<img src="webui/public/ds2api-favicon.svg" width="128" height="128" alt="DS2API icon" />
</p>
# DS2API # DS2API
[![License](https://img.shields.io/github/license/CJackHwang/ds2api.svg)](LICENSE) [![License](https://img.shields.io/github/license/CJackHwang/ds2api.svg)](LICENSE)
@@ -5,6 +9,8 @@
![Forks](https://img.shields.io/github/forks/CJackHwang/ds2api.svg)
[![Release](https://img.shields.io/github/v/release/CJackHwang/ds2api?display_name=tag)](https://github.com/CJackHwang/ds2api/releases)
[![Docker](https://img.shields.io/badge/docker-ready-blue.svg)](DEPLOY.md)
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/L4CFHP)
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https://github.com/CJackHwang/ds2api)
Language: [中文](README.MD) | [English](README.en.md)
@@ -162,6 +168,12 @@ docker-compose logs -f
Update the image: `docker-compose up -d --build`
#### Zeabur One-Click Deploy (Dockerfile)
1. Click the “Deploy on Zeabur” button above for a one-click deployment.
2. After deployment, open `/admin` and log in with the `DS2API_ADMIN_KEY` from the Zeabur env/template instructions.
3. Import or edit the config in the Admin UI (it is written and persisted to `/data/config.json`).
### Option 3: Vercel Deployment
1. Fork the repository to your own GitHub
@@ -462,6 +474,7 @@ npm ci --prefix webui && npm run build --prefix webui
- **Trigger**: only on GitHub Release `published` (normal pushes do not trigger builds)
- **Outputs**: multi-platform archives (`linux/amd64`, `linux/arm64`, `darwin/amd64`, `darwin/arm64`, `windows/amd64`) + `sha256sums.txt`
- **Container publishing**: GHCR only (`ghcr.io/cjackhwang/ds2api`)
- **Each archive includes**: `ds2api` executable, `static/admin`, WASM file, config template, README, LICENSE

## Disclaimer


@@ -1,3 +1,7 @@
<p align="center">
<img src="webui/public/ds2api-favicon.svg" width="128" height="128" alt="DS2API icon" />
</p>
# DS2API
[![License](https://img.shields.io/github/license/CJackHwang/ds2api.svg)](LICENSE)
@@ -5,6 +9,8 @@
![Forks](https://img.shields.io/github/forks/CJackHwang/ds2api.svg)
[![Release](https://img.shields.io/github/v/release/CJackHwang/ds2api?display_name=tag)](https://github.com/CJackHwang/ds2api/releases)
[![Docker](https://img.shields.io/badge/docker-ready-blue.svg)](DEPLOY.en.md)
[![Deploy on Zeabur](https://zeabur.com/button.svg)](https://zeabur.com/templates/L4CFHP)
[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?repository-url=https://github.com/CJackHwang/ds2api)
Language: [中文](README.MD) | [English](README.en.md)
@@ -162,6 +168,12 @@ docker-compose logs -f
Rebuild after updates: `docker-compose up -d --build`
#### Zeabur One-Click (Dockerfile)
1. Click the “Deploy on Zeabur” button above to deploy.
2. After deployment, open `/admin` and log in with the `DS2API_ADMIN_KEY` shown in the Zeabur env/template instructions.
3. Import / edit config in Admin UI (it will be written and persisted to `/data/config.json`).
### Option 3: Vercel
1. Fork this repo to your GitHub account
@@ -339,6 +351,7 @@ Queue limit = DS2API_ACCOUNT_MAX_QUEUE (default = recommended concurrency)
When `tools` is present in the request, DS2API performs anti-leak handling:
1. Toolcall feature matching is enabled only in **non-code-block context** (fenced examples are ignored)
- In non-code-block context, tool JSON may still be recognized even when mixed with normal prose; surrounding prose can remain as text output.
2. `responses` streaming strictly uses official item lifecycle events (`response.output_item.*`, `response.content_part.*`, `response.function_call_arguments.*`)
3. Tool names not declared in the `tools` schema are strictly rejected and will not be emitted as valid tool calls
4. `responses` supports and enforces `tool_choice` (`auto`/`none`/`required`/forced function); `required` violations return `422` for non-stream and `response.failed` for stream
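Rule 3 above (whitelist rejection of undeclared tool names) can be sketched as follows. This is an illustrative Go sketch, not the repository's actual parser: `toolCallPayload` and `filterDeclared` are hypothetical names that only mirror the `{"tool_calls":[...]}` shape the proxy watches for.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toolCallPayload mirrors the {"tool_calls":[...]} shape described above.
type toolCallPayload struct {
	ToolCalls []struct {
		Name  string         `json:"name"`
		Input map[string]any `json:"input"`
	} `json:"tool_calls"`
}

// filterDeclared keeps only calls whose name appears in the declared tools
// schema; anything undeclared is dropped rather than emitted as a tool call.
func filterDeclared(raw string, declared []string) []string {
	var p toolCallPayload
	if err := json.Unmarshal([]byte(raw), &p); err != nil {
		return nil // not a tool payload; treat as plain text
	}
	allow := map[string]bool{}
	for _, n := range declared {
		allow[n] = true
	}
	var out []string
	for _, tc := range p.ToolCalls {
		if allow[tc.Name] {
			out = append(out, tc.Name)
		}
	}
	return out
}

func main() {
	raw := `{"tool_calls":[{"name":"search","input":{"q":"go"}},{"name":"not_in_schema","input":{}}]}`
	fmt.Println(filterDeclared(raw, []string{"search"})) // only "search" survives
}
```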
@@ -462,6 +475,7 @@ Workflow: `.github/workflows/release-artifacts.yml`
- **Trigger**: only on GitHub Release `published` (normal pushes do not trigger builds)
- **Outputs**: multi-platform archives (`linux/amd64`, `linux/arm64`, `darwin/amd64`, `darwin/arm64`, `windows/amd64`) + `sha256sums.txt`
- **Container publishing**: GHCR only (`ghcr.io/cjackhwang/ds2api`)
- **Each archive includes**: `ds2api` executable, `static/admin`, WASM file, config template, README, LICENSE

## Disclaimer


@@ -51,7 +51,7 @@ DS2API provides two tiers of tests:
1. **Preflight checks**
- `go test ./... -count=1` (unit tests)
- `./tests/scripts/check-node-split-syntax.sh` (Node split-module syntax gate)
- - `node --test api/helpers/stream-tool-sieve.test.js api/chat-stream.test.js api/compat/js_compat_test.js` (Node streaming interception + compat unit tests)
+ - `node --test` (run when Node unit-test files exist in the repo; the current default gate is the Go tests plus the Node syntax check)
- `npm run build --prefix webui` (WebUI build check)
2. **Isolated startup**: copy `config.json` to a temp directory and start an independent service process

VERSION (new file)

@@ -0,0 +1 @@
0.1.0


@@ -1,18 +1,14 @@
services:
ds2api:
- build: .
- image: ds2api:latest
- container_name: ds2api
- ports:
- - "${PORT:-5001}:${PORT:-5001}"
- env_file:
- - .env
- environment:
- - HOST=0.0.0.0
- restart: unless-stopped
- healthcheck:
- test: ["CMD", "wget", "-qO-", "http://localhost:${PORT:-5001}/healthz"]
- interval: 30s
- timeout: 10s
- retries: 3
- start_period: 10s
+ image: ghcr.io/cjackhwang/ds2api:latest
+ container_name: ds2api
+ restart: always
+ ports:
+ - "6011:5001"
+ volumes:
+ - ./config.json:/app/config.json # config file
+ - ./.env:/app/.env # env file
+ environment:
+ - TZ=Asia/Shanghai
+ - LOG_LEVEL=INFO
+ - DS2API_ADMIN_KEY=${DS2API_ADMIN_KEY:-ds2api}


@@ -183,6 +183,66 @@ func TestHandleClaudeStreamRealtimeToolSafety(t *testing.T) {
}
}
func TestHandleClaudeStreamRealtimeToolDetectionFromThinkingFallback(t *testing.T) {
h := &Handler{}
resp := makeClaudeSSEHTTPResponse(
`data: {"p":"response/thinking_content","v":"{\"tool_calls\":[{\"name\":\"search\""}`,
`data: {"p":"response/thinking_content","v":",\"input\":{\"q\":\"go\"}}]}"}`,
`data: [DONE]`,
)
rec := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodPost, "/anthropic/v1/messages", nil)
h.handleClaudeStreamRealtime(rec, req, resp, "claude-sonnet-4-5", []any{map[string]any{"role": "user", "content": "use tool"}}, true, false, []string{"search"})
frames := parseClaudeFrames(t, rec.Body.String())
foundToolUse := false
for _, f := range findClaudeFrames(frames, "content_block_start") {
contentBlock, _ := f.Payload["content_block"].(map[string]any)
if contentBlock["type"] == "tool_use" && contentBlock["name"] == "search" {
foundToolUse = true
break
}
}
if !foundToolUse {
t.Fatalf("expected tool_use block from thinking fallback, body=%s", rec.Body.String())
}
}
func TestHandleClaudeStreamRealtimeSkipsThinkingFallbackWhenFinalTextExists(t *testing.T) {
h := &Handler{}
resp := makeClaudeSSEHTTPResponse(
`data: {"p":"response/thinking_content","v":"{\"tool_calls\":[{\"name\":\"search\""}`,
`data: {"p":"response/thinking_content","v":",\"input\":{\"q\":\"go\"}}]}"}`,
`data: {"p":"response/content","v":"normal answer"}`,
`data: [DONE]`,
)
rec := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodPost, "/anthropic/v1/messages", nil)
h.handleClaudeStreamRealtime(rec, req, resp, "claude-sonnet-4-5", []any{map[string]any{"role": "user", "content": "use tool"}}, true, false, []string{"search"})
frames := parseClaudeFrames(t, rec.Body.String())
for _, f := range findClaudeFrames(frames, "content_block_start") {
contentBlock, _ := f.Payload["content_block"].(map[string]any)
if contentBlock["type"] == "tool_use" {
t.Fatalf("unexpected tool_use block when final text exists, body=%s", rec.Body.String())
}
}
foundEndTurn := false
for _, f := range findClaudeFrames(frames, "message_delta") {
delta, _ := f.Payload["delta"].(map[string]any)
if delta["stop_reason"] == "end_turn" {
foundEndTurn = true
break
}
}
if !foundEndTurn {
t.Fatalf("expected stop_reason=end_turn, body=%s", rec.Body.String())
}
}
func TestHandleClaudeStreamRealtimeUpstreamErrorEvent(t *testing.T) {
h := &Handler{}
resp := makeClaudeSSEHTTPResponse(


@@ -141,6 +141,34 @@ func TestBuildClaudeToolPromptMultipleTools(t *testing.T) {
}
}
func TestBuildClaudeToolPromptSupportsOpenAIStyleFunctionTool(t *testing.T) {
tools := []any{
map[string]any{
"type": "function",
"function": map[string]any{
"name": "search",
"description": "Search via function tool",
"parameters": map[string]any{
"type": "object",
"properties": map[string]any{
"q": map[string]any{"type": "string"},
},
},
},
},
}
prompt := buildClaudeToolPrompt(tools)
if !containsStr(prompt, "Tool: search") {
t.Fatalf("expected OpenAI-style function tool name in prompt, got: %q", prompt)
}
if !containsStr(prompt, "Search via function tool") {
t.Fatalf("expected OpenAI-style function tool description in prompt, got: %q", prompt)
}
if !containsStr(prompt, "\"q\"") {
t.Fatalf("expected parameters schema serialized in prompt, got: %q", prompt)
}
}
func TestBuildClaudeToolPromptSkipsNonMap(t *testing.T) {
tools := []any{"not a map"}
prompt := buildClaudeToolPrompt(tools)
@@ -237,6 +265,21 @@ func TestExtractClaudeToolNamesNil(t *testing.T) {
}
}
func TestExtractClaudeToolNamesSupportsOpenAIStyleFunctionTool(t *testing.T) {
tools := []any{
map[string]any{
"type": "function",
"function": map[string]any{
"name": "search",
},
},
}
names := extractClaudeToolNames(tools)
if len(names) != 1 || names[0] != "search" {
t.Fatalf("expected [search], got %v", names)
}
}
// ─── toMessageMaps ───────────────────────────────────────────────────
func TestToMessageMapsNormal(t *testing.T) {


@@ -46,9 +46,8 @@ func buildClaudeToolPrompt(tools []any) string {
if !ok {
continue
}
- name, _ := m["name"].(string)
- desc, _ := m["description"].(string)
- schema, _ := json.Marshal(m["input_schema"])
+ name, desc, schemaObj := extractClaudeToolMeta(m)
+ schema, _ := json.Marshal(schemaObj)
parts = append(parts, fmt.Sprintf("Tool: %s\nDescription: %s\nParameters: %s", name, desc, schema))
}
parts = append(parts,
@@ -98,13 +97,43 @@ func extractClaudeToolNames(tools []any) []string {
if !ok {
continue
}
- if name, ok := m["name"].(string); ok && name != "" {
+ name, _, _ := extractClaudeToolMeta(m)
+ if name != "" {
out = append(out, name)
}
}
return out
}
func extractClaudeToolMeta(m map[string]any) (string, string, any) {
name, _ := m["name"].(string)
desc, _ := m["description"].(string)
schemaObj := m["input_schema"]
if schemaObj == nil {
schemaObj = m["parameters"]
}
if fn, ok := m["function"].(map[string]any); ok {
if strings.TrimSpace(name) == "" {
name, _ = fn["name"].(string)
}
if strings.TrimSpace(desc) == "" {
desc, _ = fn["description"].(string)
}
if schemaObj == nil {
if v, ok := fn["input_schema"]; ok {
schemaObj = v
}
}
if schemaObj == nil {
if v, ok := fn["parameters"]; ok {
schemaObj = v
}
}
}
return strings.TrimSpace(name), strings.TrimSpace(desc), schemaObj
}
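The helper added above accepts both Claude-native tools (`name`/`input_schema` at the top level) and OpenAI-style tools (a nested `function` object). A condensed, self-contained sketch of that fallback order, using a hypothetical `extractMeta` name so it runs standalone:

```go
package main

import (
	"fmt"
	"strings"
)

// extractMeta condenses the extractClaudeToolMeta logic: read name/description/
// schema from the top level, then fall back to the nested OpenAI-style
// "function" object when the top-level fields are absent.
func extractMeta(m map[string]any) (string, string, any) {
	name, _ := m["name"].(string)
	desc, _ := m["description"].(string)
	schema := m["input_schema"]
	if schema == nil {
		schema = m["parameters"]
	}
	if fn, ok := m["function"].(map[string]any); ok {
		if strings.TrimSpace(name) == "" {
			name, _ = fn["name"].(string)
		}
		if strings.TrimSpace(desc) == "" {
			desc, _ = fn["description"].(string)
		}
		if schema == nil {
			schema = fn["parameters"]
		}
	}
	return strings.TrimSpace(name), strings.TrimSpace(desc), schema
}

func main() {
	claudeStyle := map[string]any{"name": "search", "input_schema": map[string]any{"type": "object"}}
	openaiStyle := map[string]any{"type": "function", "function": map[string]any{"name": "search"}}
	for _, m := range []map[string]any{claudeStyle, openaiStyle} {
		name, _, _ := extractMeta(m)
		fmt.Println(name) // both shapes resolve to "search"
	}
}
```

Both tool shapes resolve to the same name, which is why the tests above expect `extractClaudeToolNames` to return `[search]` for either form.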
func toMessageMaps(v any) []map[string]any {
arr, ok := v.([]any)
if !ok {


@@ -46,6 +46,9 @@ func (s *claudeStreamRuntime) finalize(stopReason string) {
if s.bufferToolContent {
detected := util.ParseToolCalls(finalText, s.toolNames)
if len(detected) == 0 && finalText == "" && finalThinking != "" {
detected = util.ParseToolCalls(finalThinking, s.toolNames)
}
if len(detected) > 0 {
stopReason = "tool_use"
for i, tc := range detected {


@@ -99,7 +99,7 @@ func TestGeminiRoutesRegistered(t *testing.T) {
func TestGenerateContentReturnsFunctionCallParts(t *testing.T) {
upstream := makeGeminiUpstreamResponse(
- `data: {"p":"response/content","v":"我来调用工具\n{\"tool_calls\":[{\"name\":\"eval_javascript\",\"input\":{\"code\":\"1+1\"}}]}"}`,
+ `data: {"p":"response/content","v":"{\"tool_calls\":[{\"name\":\"eval_javascript\",\"input\":{\"code\":\"1+1\"}}]}"}`,
`data: [DONE]`,
)
h := &Handler{
@@ -143,6 +143,42 @@ func TestGenerateContentReturnsFunctionCallParts(t *testing.T) {
}
}
func TestGenerateContentMixedToolSnippetAlsoTriggersFunctionCall(t *testing.T) {
upstream := makeGeminiUpstreamResponse(
`data: {"p":"response/content","v":"我来调用工具\n{\"tool_calls\":[{\"name\":\"eval_javascript\",\"input\":{\"code\":\"1+1\"}}]}"}`,
`data: [DONE]`,
)
h := &Handler{Store: testGeminiConfig{}, Auth: testGeminiAuth{}, DS: testGeminiDS{resp: upstream}}
r := chi.NewRouter()
RegisterRoutes(r, h)
body := `{
"contents":[{"role":"user","parts":[{"text":"call tool"}]}],
"tools":[{"functionDeclarations":[{"name":"eval_javascript","description":"eval","parameters":{"type":"object","properties":{"code":{"type":"string"}}}}]}]
}`
req := httptest.NewRequest(http.MethodPost, "/v1beta/models/gemini-2.5-pro:generateContent", strings.NewReader(body))
req.Header.Set("Authorization", "Bearer direct-token")
rec := httptest.NewRecorder()
r.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("expected 200, got %d body=%s", rec.Code, rec.Body.String())
}
var out map[string]any
if err := json.Unmarshal(rec.Body.Bytes(), &out); err != nil {
t.Fatalf("decode response failed: %v", err)
}
candidates, _ := out["candidates"].([]any)
c0, _ := candidates[0].(map[string]any)
content, _ := c0["content"].(map[string]any)
parts, _ := content["parts"].([]any)
part0, _ := parts[0].(map[string]any)
functionCall, _ := part0["functionCall"].(map[string]any)
if functionCall["name"] != "eval_javascript" {
t.Fatalf("expected functionCall name eval_javascript for mixed snippet, got %#v", functionCall)
}
}
func TestStreamGenerateContentEmitsSSE(t *testing.T) {
upstream := makeGeminiUpstreamResponse(
`data: {"p":"response/content","v":"hello "}`,


@@ -98,7 +98,7 @@ func (s *chatStreamRuntime) sendDone() {
func (s *chatStreamRuntime) finalize(finishReason string) {
finalThinking := s.thinking.String()
finalText := s.text.String()
- detected := util.ParseToolCalls(finalText, s.toolNames)
+ detected := util.ParseStandaloneToolCalls(finalText, s.toolNames)
if len(detected) > 0 && !s.toolCallsDoneEmitted {
finishReason = "tool_calls"
delta := map[string]any{


@@ -3,6 +3,7 @@ package openai
import (
"context"
"encoding/json"
"fmt"
"io"
"net/http"
"net/http/httptest"
@@ -210,7 +211,7 @@ func TestHandleNonStreamUnknownToolNotIntercepted(t *testing.T) {
}
}
- func TestHandleNonStreamEmbeddedToolCallExampleIntercepted(t *testing.T) {
+ func TestHandleNonStreamEmbeddedToolCallExampleRemainsText(t *testing.T) {
h := &Handler{}
resp := makeSSEHTTPResponse(
`data: {"p":"response/content","v":"下面是示例:"}`,
@@ -228,16 +229,16 @@ func TestHandleNonStreamEmbeddedToolCallExampleIntercepted(t *testing.T) {
out := decodeJSONBody(t, rec.Body.String())
choices, _ := out["choices"].([]any)
choice, _ := choices[0].(map[string]any)
- if choice["finish_reason"] != "tool_calls" {
- t.Fatalf("expected finish_reason=tool_calls, got %#v", choice["finish_reason"])
+ if choice["finish_reason"] != "stop" {
+ t.Fatalf("expected finish_reason=stop, got %#v", choice["finish_reason"])
}
msg, _ := choice["message"].(map[string]any)
- toolCalls, _ := msg["tool_calls"].([]any)
- if len(toolCalls) == 0 {
- t.Fatalf("expected tool_calls field for embedded example: %#v", msg["tool_calls"])
+ if _, ok := msg["tool_calls"]; ok {
+ t.Fatalf("did not expect tool_calls field for embedded example: %#v", msg["tool_calls"])
}
- if msg["content"] != nil {
- t.Fatalf("expected content nil when tool_calls detected, got %#v", msg["content"])
+ content, _ := msg["content"].(string)
+ if !strings.Contains(content, "下面是示例:") || !strings.Contains(content, "请勿执行。") || !strings.Contains(content, `"tool_calls"`) {
+ t.Fatalf("expected embedded example to remain plain text, got %#v", content)
}
}
@@ -315,6 +316,36 @@ func TestHandleStreamToolCallInterceptsWithoutRawContentLeak(t *testing.T) {
}
}
func TestHandleStreamToolCallLargeArgumentsStillIntercepted(t *testing.T) {
h := &Handler{}
large := strings.Repeat("a", 9000)
payload := fmt.Sprintf(`{"tool_calls":[{"name":"search","input":{"q":"%s"}}]}`, large)
splitAt := len(payload) / 2
resp := makeSSEHTTPResponse(
fmt.Sprintf(`data: {"p":"response/content","v":%q}`, payload[:splitAt]),
fmt.Sprintf(`data: {"p":"response/content","v":%q}`, payload[splitAt:]),
`data: [DONE]`,
)
rec := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)
h.handleStream(rec, req, resp, "cid3-large", "deepseek-chat", "prompt", false, false, []string{"search"})
frames, done := parseSSEDataFrames(t, rec.Body.String())
if !done {
t.Fatalf("expected [DONE], body=%s", rec.Body.String())
}
if !streamHasToolCallsDelta(frames) {
t.Fatalf("expected tool_calls delta, body=%s", rec.Body.String())
}
if streamHasRawToolJSONContent(frames) {
t.Fatalf("raw tool_calls JSON leaked in content delta: %s", rec.Body.String())
}
if streamFinishReason(frames) != "tool_calls" {
t.Fatalf("expected finish_reason=tool_calls, body=%s", rec.Body.String())
}
}
func TestHandleStreamReasonerToolCallInterceptsWithoutRawContentLeak(t *testing.T) {
h := &Handler{}
resp := makeSSEHTTPResponse(
@@ -375,7 +406,7 @@ func TestHandleStreamReasonerToolCallInterceptsWithoutRawContentLeak(t *testing.
}
}
- func TestHandleStreamUnknownToolNotIntercepted(t *testing.T) {
+ func TestHandleStreamUnknownToolDoesNotLeakRawPayload(t *testing.T) {
h := &Handler{}
resp := makeSSEHTTPResponse(
`data: {"p":"response/content","v":"{\"tool_calls\":[{\"name\":\"not_in_schema\",\"input\":{\"q\":\"go\"}}]}"}`,
@@ -393,8 +424,34 @@ func TestHandleStreamUnknownToolNotIntercepted(t *testing.T) {
if streamHasToolCallsDelta(frames) {
t.Fatalf("did not expect tool_calls delta for unknown schema name, body=%s", rec.Body.String())
}
- if !streamHasRawToolJSONContent(frames) {
- t.Fatalf("expected raw tool_calls json to remain in content for unknown schema name: %s", rec.Body.String())
+ if streamHasRawToolJSONContent(frames) {
+ t.Fatalf("did not expect raw tool_calls json leak for unknown schema name: %s", rec.Body.String())
}
if streamFinishReason(frames) != "stop" {
t.Fatalf("expected finish_reason=stop, body=%s", rec.Body.String())
}
}
func TestHandleStreamUnknownToolNoArgsDoesNotLeakRawPayload(t *testing.T) {
h := &Handler{}
resp := makeSSEHTTPResponse(
`data: {"p":"response/content","v":"{\"tool_calls\":[{\"name\":\"not_in_schema\"}]}"}`,
`data: [DONE]`,
)
rec := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)
h.handleStream(rec, req, resp, "cid5b", "deepseek-chat", "prompt", false, false, []string{"search"})
frames, done := parseSSEDataFrames(t, rec.Body.String())
if !done {
t.Fatalf("expected [DONE], body=%s", rec.Body.String())
}
if streamHasToolCallsDelta(frames) {
t.Fatalf("did not expect tool_calls delta for unknown schema name (no args), body=%s", rec.Body.String())
}
if streamHasRawToolJSONContent(frames) {
t.Fatalf("did not expect raw tool_calls json leak for unknown schema name (no args): %s", rec.Body.String())
} }
if streamFinishReason(frames) != "stop" {
t.Fatalf("expected finish_reason=stop, body=%s", rec.Body.String())
@@ -474,15 +531,12 @@ func TestHandleStreamToolCallMixedWithPlainTextSegments(t *testing.T) {
if !strings.Contains(got, "下面是示例:") || !strings.Contains(got, "请勿执行。") {
t.Fatalf("expected pre/post plain text to pass sieve, got=%q", got)
}
- if strings.Contains(strings.ToLower(got), `"tool_calls"`) {
- t.Fatalf("expected no raw tool_calls json leak in content, got=%q", got)
- }
if streamFinishReason(frames) != "tool_calls" {
t.Fatalf("expected finish_reason=tool_calls for mixed prose, body=%s", rec.Body.String())
}
}
- func TestHandleStreamToolCallAfterLeadingTextStillIntercepted(t *testing.T) {
+ func TestHandleStreamToolCallAfterLeadingTextRemainsText(t *testing.T) {
h := &Handler{}
resp := makeSSEHTTPResponse(
`data: {"p":"response/content","v":"我将调用工具。"}`,
@@ -516,15 +570,13 @@ func TestHandleStreamToolCallAfterLeadingTextStillIntercepted(t *testing.T) {
if !strings.Contains(got, "我将调用工具。") {
t.Fatalf("expected leading text to keep streaming, got=%q", got)
}
- if strings.Contains(strings.ToLower(got), "tool_calls") {
- t.Fatalf("unexpected raw tool json leak, got=%q", got)
- }
if streamFinishReason(frames) != "tool_calls" {
t.Fatalf("expected finish_reason=tool_calls, body=%s", rec.Body.String())
}
}
- func TestHandleStreamToolCallWithSameChunkTrailingTextStillIntercepted(t *testing.T) {
+ func TestHandleStreamToolCallWithSameChunkTrailingTextRemainsText(t *testing.T) {
h := &Handler{}
resp := makeSSEHTTPResponse(
`data: {"p":"response/content","v":"{\"tool_calls\":[{\"name\":\"search\",\"input\":{\"q\":\"go\"}}]}接下来我会继续说明。"}`,
@@ -557,15 +609,52 @@ func TestHandleStreamToolCallWithSameChunkTrailingTextStillIntercepted(t *testin
if !strings.Contains(got, "接下来我会继续说明。") {
t.Fatalf("expected trailing plain text to be preserved, got=%q", got)
}
- if strings.Contains(strings.ToLower(got), "tool_calls") {
- t.Fatalf("unexpected raw tool json leak, got=%q", got)
- }
if streamFinishReason(frames) != "tool_calls" {
t.Fatalf("expected finish_reason=tool_calls, body=%s", rec.Body.String())
}
}
- func TestHandleStreamToolCallKeyAppearsLateStillNoPrefixLeak(t *testing.T) {
+ func TestHandleStreamFencedToolCallSnippetRemainsText(t *testing.T) {
h := &Handler{}
resp := makeSSEHTTPResponse(
fmt.Sprintf(`data: {"p":"response/content","v":%q}`, "下面是调用示例:\n```json\n"),
fmt.Sprintf(`data: {"p":"response/content","v":%q}`, "{\"tool_calls\":[{\"name\":\"search\",\"input\":{\"q\":\"go\"}}]}\n```\n仅示例不要执行。"),
`data: [DONE]`,
)
rec := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil)
h.handleStream(rec, req, resp, "cid7f", "deepseek-chat", "prompt", false, false, []string{"search"})
frames, done := parseSSEDataFrames(t, rec.Body.String())
if !done {
t.Fatalf("expected [DONE], body=%s", rec.Body.String())
}
if streamHasToolCallsDelta(frames) {
t.Fatalf("did not expect tool_calls delta for fenced snippet, body=%s", rec.Body.String())
}
content := strings.Builder{}
for _, frame := range frames {
choices, _ := frame["choices"].([]any)
for _, item := range choices {
choice, _ := item.(map[string]any)
delta, _ := choice["delta"].(map[string]any)
if c, ok := delta["content"].(string); ok {
content.WriteString(c)
}
}
}
got := content.String()
if !strings.Contains(got, "```json") || !strings.Contains(strings.ToLower(got), "tool_calls") {
t.Fatalf("expected fenced tool snippet in content, got=%q", got)
}
if streamFinishReason(frames) != "stop" {
t.Fatalf("expected finish_reason=stop, body=%s", rec.Body.String())
}
}
func TestHandleStreamToolCallKeyAppearsLateRemainsText(t *testing.T) {
h := &Handler{}
spaces := strings.Repeat(" ", 200)
resp := makeSSEHTTPResponse(
@@ -586,9 +675,6 @@ func TestHandleStreamToolCallKeyAppearsLateStillNoPrefixLeak(t *testing.T) {
if !streamHasToolCallsDelta(frames) {
t.Fatalf("expected tool_calls delta, body=%s", rec.Body.String())
}
- if streamHasRawToolJSONContent(frames) {
- t.Fatalf("raw tool_calls JSON leaked in content delta: %s", rec.Body.String())
- }
content := strings.Builder{}
for _, frame := range frames {
choices, _ := frame["choices"].([]any)
@@ -601,9 +687,6 @@ func TestHandleStreamToolCallKeyAppearsLateStillNoPrefixLeak(t *testing.T) {
}
}
got := content.String()
- if strings.Contains(got, "{") {
- t.Fatalf("unexpected suspicious prefix leak in content: %q", got)
- }
if !strings.Contains(got, "后置正文C。") {
t.Fatalf("expected stream to continue after tool json convergence, got=%q", got)
}
@@ -686,7 +769,7 @@ func TestHandleStreamIncompleteCapturedToolJSONFlushesAsTextOnFinalize(t *testin
}
}
- func TestHandleStreamToolCallArgumentsEmitIncrementally(t *testing.T) {
+ func TestHandleStreamToolCallArgumentsEmitAsSingleCompletedChunk(t *testing.T) {
h := &Handler{}
resp := makeSSEHTTPResponse(
`data: {"p":"response/content","v":"{\"tool_calls\":[{\"name\":\"search\",\"input\":{\"q\":\"go"}`,
@@ -709,8 +792,8 @@ func TestHandleStreamToolCallArgumentsEmitIncrementally(t *testing.T) {
t.Fatalf("raw tool_calls JSON leaked in content delta: %s", rec.Body.String())
}
argChunks := streamToolCallArgumentChunks(frames)
- if len(argChunks) < 2 {
- t.Fatalf("expected incremental arguments chunks, got=%v body=%s", argChunks, rec.Body.String())
+ if len(argChunks) == 0 {
+ t.Fatalf("expected tool call arguments chunk, got=%v body=%s", argChunks, rec.Body.String())
}
joined := strings.Join(argChunks, "")
if !strings.Contains(joined, `"q":"golang"`) || !strings.Contains(joined, `"page":1`) {


@@ -3,7 +3,6 @@ package openai
import (
"encoding/json"
"fmt"
- "io"
"strings"
"ds2api/internal/config"
@@ -34,9 +33,9 @@ func normalizeOpenAIMessagesForPrompt(raw []any, traceID string) []map[string]an
"role": "user",
"content": formatToolResultForPrompt(msg),
})
- case "user", "system":
+ case "user", "system", "developer":
out = append(out, map[string]any{
- "role": role,
+ "role": normalizeOpenAIRoleForPrompt(role),
"content": normalizeOpenAIContentForPrompt(msg["content"]),
})
default:
@@ -48,7 +47,7 @@ func normalizeOpenAIMessagesForPrompt(raw []any, traceID string) []map[string]an
role = "user"
}
out = append(out, map[string]any{
- "role": role,
+ "role": normalizeOpenAIRoleForPrompt(role),
"content": content,
})
}
@@ -175,30 +174,11 @@ func normalizeToolArgumentString(raw string) string {
if trimmed == "" {
return ""
}
- if !looksLikeConcatenatedJSON(trimmed) {
- return trimmed
- }
- dec := json.NewDecoder(strings.NewReader(trimmed))
- values := make([]any, 0, 2)
- for {
- var v any
- if err := dec.Decode(&v); err != nil {
- if err == io.EOF {
- break
- }
- return trimmed
- }
- values = append(values, v)
- }
- if len(values) < 2 {
- return trimmed
- }
- last := values[len(values)-1]
- b, err := json.Marshal(last)
- if err != nil || len(b) == 0 {
- return trimmed
- }
- return string(b)
+ if looksLikeConcatenatedJSON(trimmed) {
+ // Keep original payload to avoid silent argument rewrites.
+ return raw
+ }
+ return trimmed
}
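The rewrite above stops collapsing concatenated argument payloads such as `{}{"query":"x"}` to their last JSON document and instead forwards them untouched. `looksLikeConcatenatedJSON` is not shown in this diff; a plausible sketch (the name `looksConcatenated` here is hypothetical) counts how many top-level JSON values a decoder can read:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// looksConcatenated is a hypothetical stand-in for looksLikeConcatenatedJSON:
// it decodes top-level JSON values one after another and reports whether the
// string contains two or more back-to-back documents.
func looksConcatenated(s string) bool {
	dec := json.NewDecoder(strings.NewReader(s))
	n := 0
	for {
		var v any
		if err := dec.Decode(&v); err != nil {
			break
		}
		n++
	}
	return n >= 2
}

func main() {
	fmt.Println(looksConcatenated(`{}{"query":"x"}`)) // two concatenated values
	fmt.Println(looksConcatenated(`{"query":"x"}`))   // a single value
}
```

Preserving the raw string for such inputs lets the test below assert that `{}{"query":"测试工具调用"}` survives verbatim in the tool history.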
func marshalToPromptString(v any) string {
@@ -209,6 +189,14 @@ func marshalToPromptString(v any) string {
return string(b)
}
func normalizeOpenAIRoleForPrompt(role string) string {
role = strings.ToLower(strings.TrimSpace(role))
if role == "developer" {
return "system"
}
return role
}
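The role normalizer added above folds OpenAI's `developer` role into `system` before the prompt is built. A self-contained sketch of that mapping (using a hypothetical `normalizeRole` name so it runs standalone):

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeRole mirrors normalizeOpenAIRoleForPrompt: roles are lower-cased
// and trimmed, and the "developer" role is treated as "system".
func normalizeRole(role string) string {
	role = strings.ToLower(strings.TrimSpace(role))
	if role == "developer" {
		return "system"
	}
	return role
}

func main() {
	for _, r := range []string{"Developer", " system ", "user"} {
		fmt.Println(normalizeRole(r))
	}
}
```

This is what the `DeveloperRoleMapsToSystem` test further down exercises end to end.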
func asString(v any) string {
if s, ok := v.(string); ok {
return s


@@ -168,7 +168,7 @@ func TestNormalizeOpenAIMessagesForPrompt_AssistantMultipleToolCallsRemainSepara
 	}
 }
-func TestNormalizeOpenAIMessagesForPrompt_RepairsConcatenatedToolArguments(t *testing.T) {
+func TestNormalizeOpenAIMessagesForPrompt_PreservesConcatenatedToolArguments(t *testing.T) {
 	raw := []any{
 		map[string]any{
 			"role": "assistant",
@@ -189,10 +189,21 @@ func TestNormalizeOpenAIMessagesForPrompt_RepairsConcatenatedToolArguments(t *testing.T) {
 		t.Fatalf("expected one normalized message, got %d", len(normalized))
 	}
 	content, _ := normalized[0]["content"].(string)
-	if !strings.Contains(content, `function.arguments: {"query":"测试工具调用"}`) {
-		t.Fatalf("expected repaired arguments in tool history, got %q", content)
-	}
-	if strings.Contains(content, `{}{"query":"测试工具调用"}`) {
-		t.Fatalf("expected concatenated JSON to be repaired, got %q", content)
+	if !strings.Contains(content, `function.arguments: {}{"query":"测试工具调用"}`) {
+		t.Fatalf("expected original concatenated arguments in tool history, got %q", content)
+	}
+}
+func TestNormalizeOpenAIMessagesForPrompt_DeveloperRoleMapsToSystem(t *testing.T) {
+	raw := []any{
+		map[string]any{"role": "developer", "content": "必须先走工具调用"},
+		map[string]any{"role": "user", "content": "你好"},
+	}
+	normalized := normalizeOpenAIMessagesForPrompt(raw, "")
+	if len(normalized) != 2 {
+		t.Fatalf("expected 2 normalized messages, got %d", len(normalized))
+	}
+	if normalized[0]["role"] != "system" {
+		t.Fatalf("expected developer role converted to system, got %#v", normalized[0]["role"])
 	}
 }

View File

@@ -135,7 +135,7 @@ func TestNormalizeResponsesInputAsMessagesFunctionCallItem(t *testing.T) {
 	}
 }
-func TestNormalizeResponsesInputAsMessagesFunctionCallItemRepairsConcatenatedArguments(t *testing.T) {
+func TestNormalizeResponsesInputAsMessagesFunctionCallItemPreservesConcatenatedArguments(t *testing.T) {
 	msgs := normalizeResponsesInputAsMessages([]any{
 		map[string]any{
 			"type": "function_call",
@@ -151,8 +151,8 @@ func TestNormalizeResponsesInputAsMessagesFunctionCallItemRepairsConcatenatedArguments(t *testing.T) {
 	toolCalls, _ := m["tool_calls"].([]any)
 	call, _ := toolCalls[0].(map[string]any)
 	fn, _ := call["function"].(map[string]any)
-	if fn["arguments"] != `{"q":"golang"}` {
-		t.Fatalf("expected concatenated call arguments repaired, got %#v", fn["arguments"])
+	if fn["arguments"] != `{}{"q":"golang"}` {
+		t.Fatalf("expected original concatenated call arguments preserved, got %#v", fn["arguments"])
 	}
 }

View File

@@ -113,15 +113,10 @@ func (h *Handler) handleResponsesNonStream(w http.ResponseWriter, resp *http.Res
 		return
 	}
 	result := sse.CollectStream(resp, thinkingEnabled, true)
-	textParsed := util.ParseToolCallsDetailed(result.Text, toolNames)
-	thinkingParsed := util.ParseToolCallsDetailed(result.Thinking, toolNames)
+	textParsed := util.ParseStandaloneToolCallsDetailed(result.Text, toolNames)
 	logResponsesToolPolicyRejection(traceID, toolChoice, textParsed, "text")
-	logResponsesToolPolicyRejection(traceID, toolChoice, thinkingParsed, "thinking")
 	callCount := len(textParsed.Calls)
-	if callCount == 0 {
-		callCount = len(thinkingParsed.Calls)
-	}
 	if toolChoice.IsRequired() && callCount == 0 {
 		writeOpenAIErrorWithCode(w, http.StatusUnprocessableEntity, "tool_choice requires at least one valid tool call.", "tool_choice_violation")
 		return

View File

@@ -29,7 +29,7 @@ func normalizeResponsesInputItemWithState(m map[string]any, callNameByID map[str
 			return nil
 		}
 		return map[string]any{
-			"role":    role,
+			"role":    normalizeOpenAIRoleForPrompt(role),
 			"content": content,
 		}
@@ -51,7 +51,7 @@ func normalizeResponsesInputItemWithState(m map[string]any, callNameByID map[str
 			role = "user"
 		}
 		return map[string]any{
-			"role":    role,
+			"role":    normalizeOpenAIRoleForPrompt(role),
 			"content": content,
 		}
 	case "function_call_output", "tool_result":

View File

@@ -102,16 +102,11 @@ func (s *responsesStreamRuntime) finalize() {
 	if s.bufferToolContent {
 		s.processToolStreamEvents(flushToolSieve(&s.sieve, s.toolNames), true)
-		s.processToolStreamEvents(flushToolSieve(&s.thinkingSieve, s.toolNames), false)
 	}
-	textParsed := util.ParseToolCallsDetailed(finalText, s.toolNames)
-	thinkingParsed := util.ParseToolCallsDetailed(finalThinking, s.toolNames)
+	textParsed := util.ParseStandaloneToolCallsDetailed(finalText, s.toolNames)
 	detected := textParsed.Calls
-	if len(detected) == 0 {
-		detected = thinkingParsed.Calls
-	}
-	s.logToolPolicyRejections(textParsed, thinkingParsed)
+	s.logToolPolicyRejections(textParsed)
 	if len(detected) > 0 {
 		s.toolCallsEmitted = true
@@ -157,7 +152,7 @@ func (s *responsesStreamRuntime) finalize() {
 	s.sendDone()
 }
-func (s *responsesStreamRuntime) logToolPolicyRejections(textParsed, thinkingParsed util.ToolCallParseResult) {
+func (s *responsesStreamRuntime) logToolPolicyRejections(textParsed util.ToolCallParseResult) {
 	logRejected := func(parsed util.ToolCallParseResult, channel string) {
 		rejected := filteredRejectedToolNamesForLog(parsed.RejectedToolNames)
 		if !parsed.RejectedByPolicy || len(rejected) == 0 {
@@ -172,7 +167,6 @@ func (s *responsesStreamRuntime) logToolPolicyRejections(textParsed, thinkingPar
 		)
 	}
 	logRejected(textParsed, "text")
-	logRejected(thinkingParsed, "thinking")
 }
@@ -207,9 +201,6 @@ func (s *responsesStreamRuntime) onParsed(parsed sse.LineResult) streamengine.Pa
 		}
 		s.thinking.WriteString(p.Text)
 		s.sendEvent("response.reasoning.delta", openaifmt.BuildResponsesReasoningDeltaPayload(s.responseID, p.Text))
-		if s.bufferToolContent {
-			s.processToolStreamEvents(processToolSieveChunk(&s.thinkingSieve, p.Text, s.toolNames), false)
-		}
 		continue
 	}

View File

@@ -99,9 +99,6 @@ func TestHandleResponsesStreamUsesOfficialOutputItemEvents(t *testing.T) {
 	if !strings.Contains(body, "event: response.output_item.done") {
 		t.Fatalf("expected response.output_item.done event, body=%s", body)
 	}
-	if !strings.Contains(body, "event: response.function_call_arguments.delta") {
-		t.Fatalf("expected response.function_call_arguments.delta event, body=%s", body)
-	}
 	if !strings.Contains(body, "event: response.function_call_arguments.done") {
 		t.Fatalf("expected response.function_call_arguments.done event, body=%s", body)
 	}
@@ -266,7 +263,7 @@ func TestHandleResponsesStreamOutputTextDeltaCarriesItemIndexes(t *testing.T) {
 	}
 }
-func TestHandleResponsesStreamThinkingTextAndToolUseDistinctOutputIndexes(t *testing.T) {
+func TestHandleResponsesStreamThinkingAndMixedToolExampleRemainMessageOnly(t *testing.T) {
 	h := &Handler{}
 	req := httptest.NewRequest(http.MethodPost, "/v1/responses", nil)
 	rec := httptest.NewRecorder()
@@ -291,23 +288,8 @@ func TestHandleResponsesStreamThinkingTextAndToolUseDistinctOutputIndexes(t *testing.T) {
 	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-reasoner", "prompt", true, false, []string{"read_file"}, util.DefaultToolChoicePolicy(), "")
 	addedPayloads := extractAllSSEEventPayloads(rec.Body.String(), "response.output_item.added")
-	if len(addedPayloads) < 2 {
-		t.Fatalf("expected message + function_call output_item.added events, got %d body=%s", len(addedPayloads), rec.Body.String())
-	}
-	indexes := map[int]struct{}{}
-	typeByIndex := map[int]string{}
-	addedIDs := map[string]string{}
-	for _, payload := range addedPayloads {
-		item, _ := payload["item"].(map[string]any)
-		itemType := strings.TrimSpace(asString(item["type"]))
-		outputIndex := int(asFloat(payload["output_index"]))
-		if _, exists := indexes[outputIndex]; exists {
-			t.Fatalf("found duplicated output_index=%d for item types=%q and %q payload=%#v", outputIndex, typeByIndex[outputIndex], itemType, payload)
-		}
-		indexes[outputIndex] = struct{}{}
-		typeByIndex[outputIndex] = itemType
-		addedIDs[itemType] = strings.TrimSpace(asString(payload["item_id"]))
+	if len(addedPayloads) < 1 {
+		t.Fatalf("expected at least one output_item.added event, got %d body=%s", len(addedPayloads), rec.Body.String())
 	}
 	completedPayload, ok := extractSSEEventPayload(rec.Body.String(), "response.completed")
@@ -316,20 +298,21 @@ func TestHandleResponsesStreamThinkingTextAndToolUseDistinctOutputIndexes(t *testing.T) {
 	}
 	responseObj, _ := completedPayload["response"].(map[string]any)
 	output, _ := responseObj["output"].([]any)
-	found := map[string]bool{}
+	hasMessage := false
 	for _, item := range output {
 		m, _ := item.(map[string]any)
-		itemType := strings.TrimSpace(asString(m["type"]))
-		itemID := strings.TrimSpace(asString(m["id"]))
-		if itemType == "" || itemID == "" {
+		if m == nil {
 			continue
 		}
-		if wantID := strings.TrimSpace(addedIDs[itemType]); wantID != "" && wantID == itemID {
-			found[itemType] = true
+		if asString(m["type"]) == "message" {
+			hasMessage = true
+		}
+		if asString(m["type"]) == "function_call" {
+			t.Fatalf("did not expect function_call output for mixed prose tool example, output=%#v", output)
 		}
 	}
-	if !found["message"] || !found["function_call"] {
-		t.Fatalf("expected completed output to contain streamed message/function_call item ids, found=%#v output=%#v", found, output)
+	if !hasMessage {
+		t.Fatalf("expected message output for mixed prose tool example, output=%#v", output)
 	}
 }
@@ -360,7 +343,7 @@ func TestHandleResponsesStreamToolChoiceNoneRejectsFunctionCall(t *testing.T) {
 	}
 }
-func TestHandleResponsesStreamMalformedToolJSONClosesInProgressFunctionItem(t *testing.T) {
+func TestHandleResponsesStreamMalformedToolJSONFallsBackToText(t *testing.T) {
 	h := &Handler{}
 	req := httptest.NewRequest(http.MethodPost, "/v1/responses", nil)
 	rec := httptest.NewRecorder()
@@ -373,7 +356,7 @@ func TestHandleResponsesStreamMalformedToolJSONClosesInProgressFunctionItem(t *testing.T) {
 		return "data: " + string(b) + "\n"
 	}
-	// invalid JSON (NaN) can still trigger incremental tool deltas before final parse rejects it
+	// invalid JSON (NaN) should remain plain text in strict mode.
 	streamBody := sseLine(`{"tool_calls":[{"name":"read_file","input":{"path":"README.MD"},"x":NaN}]}`) + "data: [DONE]\n"
 	resp := &http.Response{
 		StatusCode: http.StatusOK,
@@ -382,14 +365,11 @@ func TestHandleResponsesStreamMalformedToolJSONClosesInProgressFunctionItem(t *testing.T) {
 	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-chat", "prompt", false, false, []string{"read_file"}, util.DefaultToolChoicePolicy(), "")
 	body := rec.Body.String()
-	if !strings.Contains(body, "event: response.function_call_arguments.delta") {
-		t.Fatalf("expected response.function_call_arguments.delta event for malformed payload, body=%s", body)
-	}
-	if !strings.Contains(body, "event: response.function_call_arguments.done") {
-		t.Fatalf("expected runtime to close in-progress function_call with done event, body=%s", body)
-	}
-	if !strings.Contains(body, "event: response.output_item.done") {
-		t.Fatalf("expected runtime to close function output item, body=%s", body)
+	if strings.Contains(body, "event: response.function_call_arguments.delta") || strings.Contains(body, "event: response.function_call_arguments.done") {
+		t.Fatalf("did not expect function_call events for malformed payload in strict mode, body=%s", body)
+	}
+	if !strings.Contains(body, "event: response.output_text.delta") {
+		t.Fatalf("expected response.output_text.delta for malformed payload, body=%s", body)
 	}
 	if !strings.Contains(body, "event: response.completed") {
 		t.Fatalf("expected response.completed event, body=%s", body)
@@ -430,6 +410,42 @@ func TestHandleResponsesStreamRequiredToolChoiceFailure(t *testing.T) {
 	}
 }
+func TestHandleResponsesStreamRequiredToolChoiceIgnoresThinkingToolPayload(t *testing.T) {
+	h := &Handler{}
+	req := httptest.NewRequest(http.MethodPost, "/v1/responses", nil)
+	rec := httptest.NewRecorder()
+	sseLine := func(path, value string) string {
+		b, _ := json.Marshal(map[string]any{
+			"p": path,
+			"v": value,
+		})
+		return "data: " + string(b) + "\n"
+	}
+	streamBody := sseLine("response/thinking_content", `{"tool_calls":[{"name":"read_file","input":{"path":"README.MD"}}]}`) +
+		sseLine("response/content", "plain text only") +
+		"data: [DONE]\n"
+	resp := &http.Response{
+		StatusCode: http.StatusOK,
+		Body:       io.NopCloser(strings.NewReader(streamBody)),
+	}
+	policy := util.ToolChoicePolicy{
+		Mode:    util.ToolChoiceRequired,
+		Allowed: map[string]struct{}{"read_file": {}},
+	}
+	h.handleResponsesStream(rec, req, resp, "owner-a", "resp_test", "deepseek-chat", "prompt", true, false, []string{"read_file"}, policy, "")
+	body := rec.Body.String()
+	if !strings.Contains(body, "event: response.failed") {
+		t.Fatalf("expected response.failed event for required tool_choice violation, body=%s", body)
+	}
+	if strings.Contains(body, "event: response.completed") {
+		t.Fatalf("did not expect response.completed after failure, body=%s", body)
+	}
+}
 func TestHandleResponsesStreamRequiredMalformedToolPayloadFails(t *testing.T) {
 	h := &Handler{}
 	req := httptest.NewRequest(http.MethodPost, "/v1/responses", nil)
@@ -516,6 +532,33 @@ func TestHandleResponsesNonStreamRequiredToolChoiceViolation(t *testing.T) {
 	}
 }
+func TestHandleResponsesNonStreamRequiredToolChoiceIgnoresThinkingToolPayload(t *testing.T) {
+	h := &Handler{}
+	rec := httptest.NewRecorder()
+	resp := &http.Response{
+		StatusCode: http.StatusOK,
+		Body: io.NopCloser(strings.NewReader(
+			`data: {"p":"response/thinking_content","v":"{\"tool_calls\":[{\"name\":\"read_file\",\"input\":{\"path\":\"README.MD\"}}]}"}` + "\n" +
+				`data: {"p":"response/content","v":"plain text only"}` + "\n" +
+				`data: [DONE]` + "\n",
+		)),
+	}
+	policy := util.ToolChoicePolicy{
+		Mode:    util.ToolChoiceRequired,
+		Allowed: map[string]struct{}{"read_file": {}},
+	}
+	h.handleResponsesNonStream(rec, resp, "owner-a", "resp_test", "deepseek-chat", "prompt", true, []string{"read_file"}, policy, "")
+	if rec.Code != http.StatusUnprocessableEntity {
+		t.Fatalf("expected 422 for required tool_choice violation, got %d body=%s", rec.Code, rec.Body.String())
+	}
+	out := decodeJSONBody(t, rec.Body.String())
+	errObj, _ := out["error"].(map[string]any)
+	if asString(errObj["code"]) != "tool_choice_violation" {
+		t.Fatalf("expected code=tool_choice_violation, got %#v", out)
+	}
+}
 func TestHandleResponsesNonStreamToolChoiceNoneRejectsFunctionCall(t *testing.T) {
 	h := &Handler{}
 	rec := httptest.NewRecorder()

View File

@@ -167,19 +167,15 @@ func TestResponsesNonStreamMixedProseToolPayloadHandlerPath(t *testing.T) {
 		t.Fatalf("decode response failed: %v body=%s", err, rec.Body.String())
 	}
 	outputText, _ := out["output_text"].(string)
-	if outputText != "" {
-		t.Fatalf("expected output_text hidden for tool call payload, got %q", outputText)
+	if outputText == "" {
+		t.Fatalf("expected output_text preserved for mixed prose payload")
 	}
 	output, _ := out["output"].([]any)
-	hasFunctionCall := false
-	for _, item := range output {
-		m, _ := item.(map[string]any)
-		if m != nil && m["type"] == "function_call" {
-			hasFunctionCall = true
-			break
-		}
+	if len(output) != 1 {
+		t.Fatalf("expected one output item, got %#v", output)
 	}
-	if !hasFunctionCall {
-		t.Fatalf("expected function_call output item, got %#v", output)
+	first, _ := output[0].(map[string]any)
+	if first["type"] != "message" {
+		t.Fatalf("expected message output item, got %#v", output)
 	}
 }

View File

@@ -14,6 +14,11 @@ func processToolSieveChunk(state *toolStreamSieveState, chunk string, toolNames
 		state.pending.WriteString(chunk)
 	}
 	events := make([]toolStreamEvent, 0, 2)
+	if len(state.pendingToolCalls) > 0 {
+		events = append(events, toolStreamEvent{ToolCalls: state.pendingToolCalls})
+		state.pendingToolRaw = ""
+		state.pendingToolCalls = nil
+	}
 	for {
 		if state.capturing {
@@ -21,32 +26,30 @@ func processToolSieveChunk(state *toolStreamSieveState, chunk string, toolNames
 			state.capture.WriteString(state.pending.String())
 			state.pending.Reset()
 		}
-		if deltas := buildIncrementalToolDeltas(state); len(deltas) > 0 {
-			events = append(events, toolStreamEvent{ToolCallDeltas: deltas})
-		}
 		prefix, calls, suffix, ready := consumeToolCapture(state, toolNames)
 		if !ready {
-			if state.capture.Len() > toolSieveCaptureLimit {
-				content := state.capture.String()
-				state.capture.Reset()
-				state.capturing = false
-				state.resetIncrementalToolState()
-				state.noteText(content)
-				events = append(events, toolStreamEvent{Content: content})
-				continue
-			}
 			break
 		}
+		captured := state.capture.String()
 		state.capture.Reset()
 		state.capturing = false
 		state.resetIncrementalToolState()
+		if len(calls) > 0 {
+			if prefix != "" {
+				state.noteText(prefix)
+				events = append(events, toolStreamEvent{Content: prefix})
+			}
+			if suffix != "" {
+				state.pending.WriteString(suffix)
+			}
+			_ = captured
+			state.pendingToolCalls = calls
+			continue
+		}
 		if prefix != "" {
 			state.noteText(prefix)
 			events = append(events, toolStreamEvent{Content: prefix})
 		}
-		if len(calls) > 0 {
-			events = append(events, toolStreamEvent{ToolCalls: calls})
-		}
 		if suffix != "" {
 			state.pending.WriteString(suffix)
 		}
@@ -89,6 +92,11 @@ func flushToolSieve(state *toolStreamSieveState, toolNames []string) []toolStreamEvent {
 		return nil
 	}
 	events := processToolSieveChunk(state, "", toolNames)
+	if len(state.pendingToolCalls) > 0 {
+		events = append(events, toolStreamEvent{ToolCalls: state.pendingToolCalls})
+		state.pendingToolRaw = ""
+		state.pendingToolCalls = nil
+	}
 	if state.capturing {
 		consumedPrefix, consumedCalls, consumedSuffix, ready := consumeToolCapture(state, toolNames)
 		if ready {
@@ -200,9 +208,14 @@ func consumeToolCapture(state *toolStreamSieveState, toolNames []string) (prefix
 	if insideCodeFence(state.recentTextTail + prefixPart) {
 		return captured, nil, "", true
 	}
-	parsed := util.ParseStandaloneToolCalls(obj, toolNames)
-	if len(parsed) == 0 {
+	parsed := util.ParseStandaloneToolCallsDetailed(obj, toolNames)
+	if len(parsed.Calls) == 0 {
+		if parsed.SawToolCallSyntax && parsed.RejectedByPolicy {
+			// Parsed as tool-call payload but rejected by schema/policy:
+			// consume it to avoid leaking raw tool_calls JSON to user content.
+			return prefixPart, nil, suffixPart, true
+		}
 		return captured, nil, "", true
 	}
-	return prefixPart, parsed, suffixPart, true
+	return prefixPart, parsed.Calls, suffixPart, true
 }

View File

@@ -7,17 +7,19 @@ import (
 )
 type toolStreamSieveState struct {
-	pending        strings.Builder
-	capture        strings.Builder
-	capturing      bool
-	recentTextTail string
-	disableDeltas  bool
-	toolNameSent   bool
-	toolName       string
-	toolArgsStart  int
-	toolArgsSent   int
-	toolArgsString bool
-	toolArgsDone   bool
+	pending          strings.Builder
+	capture          strings.Builder
+	capturing        bool
+	recentTextTail   string
+	pendingToolRaw   string
+	pendingToolCalls []util.ParsedToolCall
+	disableDeltas    bool
+	toolNameSent     bool
+	toolName         string
+	toolArgsStart    int
+	toolArgsSent     int
+	toolArgsString   bool
+	toolArgsDone     bool
 }
 type toolStreamEvent struct {
@@ -32,7 +34,6 @@ type toolCallDelta struct {
 	Arguments string
 }
-const toolSieveCaptureLimit = 8 * 1024
 const toolSieveContextTailLimit = 256
 func (s *toolStreamSieveState) resetIncrementalToolState() {

View File

@@ -16,6 +16,7 @@ type ConfigStore interface {
 	Accounts() []config.Account
 	FindAccount(identifier string) (config.Account, bool)
 	UpdateAccountToken(identifier, token string) error
+	UpdateAccountTestStatus(identifier, status string) error
 	Update(mutator func(*config.Config) error) error
 	ExportJSONAndBase64() (string, string, error)
 	IsEnvBacked() bool

View File

@@ -4,6 +4,7 @@ import (
 	"encoding/json"
 	"fmt"
 	"net/http"
+	"net/url"
 	"strings"
 	"github.com/go-chi/chi/v5"
@@ -24,8 +25,21 @@ func (h *Handler) listAccounts(w http.ResponseWriter, r *http.Request) {
 		pageSize = 100
 	}
 	accounts := h.Store.Snapshot().Accounts
-	total := len(accounts)
 	reverseAccounts(accounts)
+	q := strings.TrimSpace(strings.ToLower(r.URL.Query().Get("q")))
+	if q != "" {
+		filtered := make([]config.Account, 0, len(accounts))
+		for _, acc := range accounts {
+			id := strings.ToLower(acc.Identifier())
+			if strings.Contains(id, q) ||
+				strings.Contains(strings.ToLower(acc.Email), q) ||
+				strings.Contains(strings.ToLower(acc.Mobile), q) {
+				filtered = append(filtered, acc)
+			}
+		}
+		accounts = filtered
+	}
+	total := len(accounts)
 	totalPages := 1
 	if total > 0 {
 		totalPages = (total + pageSize - 1) / pageSize
@@ -56,6 +70,7 @@ func (h *Handler) listAccounts(w http.ResponseWriter, r *http.Request) {
 			"has_password":  acc.Password != "",
 			"has_token":     token != "",
 			"token_preview": preview,
+			"test_status":   acc.TestStatus,
 		})
 	}
 	writeJSON(w, http.StatusOK, map[string]any{"items": items, "total": total, "page": page, "page_size": pageSize, "total_pages": totalPages})
@@ -70,11 +85,12 @@ func (h *Handler) addAccount(w http.ResponseWriter, r *http.Request) {
 		return
 	}
 	err := h.Store.Update(func(c *config.Config) error {
+		mobileKey := config.CanonicalMobileKey(acc.Mobile)
 		for _, a := range c.Accounts {
 			if acc.Email != "" && a.Email == acc.Email {
 				return fmt.Errorf("邮箱已存在")
 			}
-			if acc.Mobile != "" && a.Mobile == acc.Mobile {
+			if mobileKey != "" && config.CanonicalMobileKey(a.Mobile) == mobileKey {
 				return fmt.Errorf("手机号已存在")
 			}
 		}
@@ -91,6 +107,9 @@ func (h *Handler) addAccount(w http.ResponseWriter, r *http.Request) {
 func (h *Handler) deleteAccount(w http.ResponseWriter, r *http.Request) {
 	identifier := chi.URLParam(r, "identifier")
+	if decoded, err := url.PathUnescape(identifier); err == nil {
+		identifier = decoded
+	}
 	err := h.Store.Update(func(c *config.Config) error {
 		idx := -1
 		for i, a := range c.Accounts {

View File

@@ -1,6 +1,7 @@
 package admin
 import (
+	"bytes"
 	"encoding/json"
 	"net/http"
 	"net/http/httptest"
@@ -102,6 +103,45 @@ func TestDeleteAccountSupportsMobileAlias(t *testing.T) {
 	}
 }
+func TestDeleteAccountSupportsEncodedPlusMobile(t *testing.T) {
+	h := newAdminTestHandler(t, `{
+"accounts":[{"mobile":"+8613800138000","password":"pwd"}]
+}`)
+	r := chi.NewRouter()
+	r.Delete("/admin/accounts/{identifier}", h.deleteAccount)
+	req := httptest.NewRequest(http.MethodDelete, "/admin/accounts/"+url.PathEscape("+8613800138000"), nil)
+	rec := httptest.NewRecorder()
+	r.ServeHTTP(rec, req)
+	if rec.Code != http.StatusOK {
+		t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
+	}
+	if got := len(h.Store.Accounts()); got != 0 {
+		t.Fatalf("expected account removed, remaining=%d", got)
+	}
+}
+func TestAddAccountRejectsCanonicalMobileDuplicate(t *testing.T) {
+	h := newAdminTestHandler(t, `{
+"accounts":[{"mobile":"+8613800138000","password":"pwd"}]
+}`)
+	r := chi.NewRouter()
+	r.Post("/admin/accounts", h.addAccount)
+	body := []byte(`{"mobile":"13800138000","password":"pwd2"}`)
+	req := httptest.NewRequest(http.MethodPost, "/admin/accounts", bytes.NewReader(body))
+	rec := httptest.NewRecorder()
+	r.ServeHTTP(rec, req)
+	if rec.Code != http.StatusBadRequest {
+		t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
+	}
+	if got := len(h.Store.Accounts()); got != 1 {
+		t.Fatalf("expected no duplicate insert, got=%d", got)
+	}
+}
 func TestFindAccountByIdentifierSupportsMobileAndTokenOnly(t *testing.T) {
 	h := newAdminTestHandler(t, `{
 "accounts":[
@@ -117,6 +157,13 @@ func TestFindAccountByIdentifierSupportsMobileAndTokenOnly(t *testing.T) {
 	if accByMobile.Email != "u@example.com" {
 		t.Fatalf("unexpected account by mobile: %#v", accByMobile)
 	}
+	accByMobileWithCountryCode, ok := findAccountByIdentifier(h.Store, "+8613800138000")
+	if !ok {
+		t.Fatal("expected find by +86 mobile")
+	}
+	if accByMobileWithCountryCode.Email != "u@example.com" {
+		t.Fatalf("unexpected account by +86 mobile: %#v", accByMobileWithCountryCode)
+	}
 	tokenOnlyID := ""
 	for _, acc := range h.Store.Accounts() {

View File

@@ -88,7 +88,15 @@ func runAccountTestsConcurrently(accounts []config.Account, maxConcurrency int,
 func (h *Handler) testAccount(ctx context.Context, acc config.Account, model, message string) map[string]any {
 	start := time.Now()
-	result := map[string]any{"account": acc.Identifier(), "success": false, "response_time": 0, "message": "", "model": model}
+	identifier := acc.Identifier()
+	result := map[string]any{"account": identifier, "success": false, "response_time": 0, "message": "", "model": model}
+	defer func() {
+		status := "failed"
+		if ok, _ := result["success"].(bool); ok {
+			status = "ok"
+		}
+		_ = h.Store.UpdateAccountTestStatus(identifier, status)
+	}()
 	token := strings.TrimSpace(acc.Token)
 	if token == "" {
 		newToken, err := h.DS.Login(ctx, acc)

View File

@@ -0,0 +1,76 @@
+package admin
+
+import (
+	"context"
+	"errors"
+	"net/http"
+	"strings"
+	"testing"
+
+	"ds2api/internal/auth"
+	"ds2api/internal/config"
+)
+
+type testingDSMock struct {
+	loginCalls          int
+	createSessionCalls  int
+	getPowCalls         int
+	callCompletionCalls int
+}
+
+func (m *testingDSMock) Login(_ context.Context, _ config.Account) (string, error) {
+	m.loginCalls++
+	return "new-token", nil
+}
+
+func (m *testingDSMock) CreateSession(_ context.Context, _ *auth.RequestAuth, _ int) (string, error) {
+	m.createSessionCalls++
+	return "session-id", nil
+}
+
+func (m *testingDSMock) GetPow(_ context.Context, _ *auth.RequestAuth, _ int) (string, error) {
+	m.getPowCalls++
+	return "", errors.New("should not call GetPow in this test")
+}
+
+func (m *testingDSMock) CallCompletion(_ context.Context, _ *auth.RequestAuth, _ map[string]any, _ string, _ int) (*http.Response, error) {
+	m.callCompletionCalls++
+	return nil, errors.New("should not call CallCompletion in this test")
+}
+
+func TestTestAccount_BatchModeOnlyCreatesSession(t *testing.T) {
+	t.Setenv("DS2API_CONFIG_JSON", `{"accounts":[{"email":"batch@example.com","password":"pwd","token":""}]}`)
+	store := config.LoadStore()
+	ds := &testingDSMock{}
+	h := &Handler{Store: store, DS: ds}
+	acc, ok := store.FindAccount("batch@example.com")
+	if !ok {
+		t.Fatal("expected test account")
+	}
+	result := h.testAccount(context.Background(), acc, "deepseek-chat", "")
+	if ok, _ := result["success"].(bool); !ok {
+		t.Fatalf("expected success=true, got %#v", result)
+	}
+	msg, _ := result["message"].(string)
+	if !strings.Contains(msg, "仅会话创建") {
+		t.Fatalf("expected session-only success message, got %q", msg)
+	}
+	if ds.loginCalls != 1 || ds.createSessionCalls != 1 {
+		t.Fatalf("unexpected Login/CreateSession calls: login=%d createSession=%d", ds.loginCalls, ds.createSessionCalls)
+	}
+	if ds.getPowCalls != 0 || ds.callCompletionCalls != 0 {
+		t.Fatalf("expected no completion flow calls, got getPow=%d callCompletion=%d", ds.getPowCalls, ds.callCompletionCalls)
+	}
+	updated, ok := store.FindAccount("batch@example.com")
+	if !ok {
+		t.Fatal("expected updated account")
+	}
+	if updated.Token != "new-token" {
+		t.Fatalf("expected refreshed token to be persisted, got %q", updated.Token)
+	}
+	if updated.TestStatus != "ok" {
+		t.Fatalf("expected test status ok, got %q", updated.TestStatus)
+	}
+}


@@ -49,6 +49,7 @@ func (h *Handler) configImport(w http.ResponseWriter, r *http.Request) {
 	next := c.Clone()
 	if mode == "replace" {
 		next = incoming.Clone()
+		next.Accounts = normalizeAndDedupeAccounts(next.Accounts)
 		next.VercelSyncHash = c.VercelSyncHash
 		next.VercelSyncTime = c.VercelSyncTime
 		importedKeys = len(next.Keys)
@@ -73,17 +74,22 @@ func (h *Handler) configImport(w http.ResponseWriter, r *http.Request) {
 	existingAccounts := map[string]struct{}{}
 	for _, acc := range next.Accounts {
-		existingAccounts[acc.Identifier()] = struct{}{}
+		acc = normalizeAccountForStorage(acc)
+		key := accountDedupeKey(acc)
+		if key != "" {
+			existingAccounts[key] = struct{}{}
+		}
 	}
 	for _, acc := range incoming.Accounts {
-		id := acc.Identifier()
-		if id == "" {
+		acc = normalizeAccountForStorage(acc)
+		key := accountDedupeKey(acc)
+		if key == "" {
 			continue
 		}
-		if _, ok := existingAccounts[id]; ok {
+		if _, ok := existingAccounts[key]; ok {
 			continue
 		}
-		existingAccounts[id] = struct{}{}
+		existingAccounts[key] = struct{}{}
 		next.Accounts = append(next.Accounts, acc)
 		importedAccounts++
 	}


@@ -25,17 +25,28 @@ func (h *Handler) updateConfig(w http.ResponseWriter, r *http.Request) {
 	if accountsRaw, ok := req["accounts"].([]any); ok {
 		existing := map[string]config.Account{}
 		for _, a := range old.Accounts {
-			existing[a.Identifier()] = a
+			a = normalizeAccountForStorage(a)
+			key := accountDedupeKey(a)
+			if key != "" {
+				existing[key] = a
+			}
 		}
+		seen := map[string]struct{}{}
 		accounts := make([]config.Account, 0, len(accountsRaw))
 		for _, item := range accountsRaw {
 			m, ok := item.(map[string]any)
 			if !ok {
 				continue
 			}
-			acc := toAccount(m)
-			id := acc.Identifier()
-			if prev, ok := existing[id]; ok {
+			acc := normalizeAccountForStorage(toAccount(m))
+			key := accountDedupeKey(acc)
+			if key == "" {
+				continue
+			}
+			if _, ok := seen[key]; ok {
+				continue
+			}
+			if prev, ok := existing[key]; ok {
 				if strings.TrimSpace(acc.Password) == "" {
 					acc.Password = prev.Password
 				}
@@ -43,6 +54,7 @@ func (h *Handler) updateConfig(w http.ResponseWriter, r *http.Request) {
 					acc.Token = prev.Token
 				}
 			}
+			seen[key] = struct{}{}
 			accounts = append(accounts, acc)
 		}
 		c.Accounts = accounts
@@ -138,20 +150,24 @@ func (h *Handler) batchImport(w http.ResponseWriter, r *http.Request) {
 	if accounts, ok := req["accounts"].([]any); ok {
 		existing := map[string]bool{}
 		for _, a := range c.Accounts {
-			existing[a.Identifier()] = true
+			a = normalizeAccountForStorage(a)
+			key := accountDedupeKey(a)
+			if key != "" {
+				existing[key] = true
+			}
 		}
 		for _, item := range accounts {
 			m, ok := item.(map[string]any)
 			if !ok {
 				continue
 			}
-			acc := toAccount(m)
-			id := acc.Identifier()
-			if id == "" || existing[id] {
+			acc := normalizeAccountForStorage(toAccount(m))
+			key := accountDedupeKey(acc)
+			if key == "" || existing[key] {
 				continue
 			}
 			c.Accounts = append(c.Accounts, acc)
-			existing[id] = true
+			existing[key] = true
 			importedAccounts++
 		}
 	}


@@ -265,3 +265,57 @@ func TestConfigImportRejectsMergedRuntimeConflict(t *testing.T) {
 		t.Fatalf("runtime should remain unchanged, runtime=%+v", snap.Runtime)
 	}
 }
+
+func TestConfigImportMergeDedupesMobileAliases(t *testing.T) {
+	h := newAdminTestHandler(t, `{
+		"keys":["k1"],
+		"accounts":[{"mobile":"+8613800138000","password":"p1"}]
+	}`)
+	merge := map[string]any{
+		"mode": "merge",
+		"config": map[string]any{
+			"accounts": []any{
+				map[string]any{"mobile": "13800138000", "password": "p2"},
+			},
+		},
+	}
+	b, _ := json.Marshal(merge)
+	req := httptest.NewRequest(http.MethodPost, "/admin/config/import?mode=merge", bytes.NewReader(b))
+	rec := httptest.NewRecorder()
+	h.configImport(rec, req)
+	if rec.Code != http.StatusOK {
+		t.Fatalf("status=%d body=%s", rec.Code, rec.Body.String())
+	}
+	if got := len(h.Store.Accounts()); got != 1 {
+		t.Fatalf("expected merge dedupe by canonical mobile, got=%d", got)
+	}
+}
+
+func TestUpdateConfigDedupesMobileAliases(t *testing.T) {
+	h := newAdminTestHandler(t, `{
+		"keys":["k1"],
+		"accounts":[{"mobile":"+8613800138000","password":"old"}]
+	}`)
+	reqBody := map[string]any{
+		"accounts": []any{
+			map[string]any{"mobile": "+8613800138000"},
+			map[string]any{"mobile": "13800138000"},
+		},
+	}
+	b, _ := json.Marshal(reqBody)
+	req := httptest.NewRequest(http.MethodPost, "/admin/config", bytes.NewReader(b))
+	rec := httptest.NewRecorder()
+	h.updateConfig(rec, req)
+	if rec.Code != http.StatusOK {
+		t.Fatalf("status=%d body=%s", rec.Code, rec.Body.String())
+	}
+	accounts := h.Store.Accounts()
+	if len(accounts) != 1 {
+		t.Fatalf("expected update dedupe by canonical mobile, got=%d", len(accounts))
+	}
+	if accounts[0].Identifier() != "+8613800138000" {
+		t.Fatalf("unexpected identifier: %q", accounts[0].Identifier())
+	}
+}


@@ -59,9 +59,11 @@ func toStringSlice(v any) ([]string, bool) {
 }
 
 func toAccount(m map[string]any) config.Account {
+	email := fieldString(m, "email")
+	mobile := config.NormalizeMobileForStorage(fieldString(m, "mobile"))
 	return config.Account{
-		Email:    fieldString(m, "email"),
-		Mobile:   fieldString(m, "mobile"),
+		Email:    email,
+		Mobile:   mobile,
 		Password: fieldString(m, "password"),
 		Token:    fieldString(m, "token"),
 	}
@@ -90,12 +92,52 @@ func accountMatchesIdentifier(acc config.Account, identifier string) bool {
 	if strings.TrimSpace(acc.Email) == id {
 		return true
 	}
-	if strings.TrimSpace(acc.Mobile) == id {
+	if mobileKey := config.CanonicalMobileKey(id); mobileKey != "" && mobileKey == config.CanonicalMobileKey(acc.Mobile) {
 		return true
 	}
 	return acc.Identifier() == id
 }
+
+func normalizeAccountForStorage(acc config.Account) config.Account {
+	acc.Email = strings.TrimSpace(acc.Email)
+	acc.Mobile = config.NormalizeMobileForStorage(acc.Mobile)
+	return acc
+}
+
+func accountDedupeKey(acc config.Account) string {
+	if email := strings.TrimSpace(acc.Email); email != "" {
+		return "email:" + email
+	}
+	if mobile := config.CanonicalMobileKey(acc.Mobile); mobile != "" {
+		return "mobile:" + mobile
+	}
+	if id := strings.TrimSpace(acc.Identifier()); id != "" {
+		return "id:" + id
+	}
+	return ""
+}
+
+func normalizeAndDedupeAccounts(accounts []config.Account) []config.Account {
+	if len(accounts) == 0 {
+		return nil
+	}
+	out := make([]config.Account, 0, len(accounts))
+	seen := make(map[string]struct{}, len(accounts))
+	for _, acc := range accounts {
+		acc = normalizeAccountForStorage(acc)
+		key := accountDedupeKey(acc)
+		if key == "" {
+			continue
+		}
+		if _, ok := seen[key]; ok {
+			continue
+		}
+		seen[key] = struct{}{}
+		out = append(out, acc)
+	}
+	return out
+}
+
 func findAccountByIdentifier(store ConfigStore, identifier string) (config.Account, bool) {
 	id := strings.TrimSpace(identifier)
 	if id == "" {


@@ -182,7 +182,7 @@ func TestToAccountAllFields(t *testing.T) {
 	if acc.Email != "user@test.com" {
 		t.Fatalf("unexpected email: %q", acc.Email)
 	}
-	if acc.Mobile != "13800138000" {
+	if acc.Mobile != "+8613800138000" {
 		t.Fatalf("unexpected mobile: %q", acc.Mobile)
 	}
 	if acc.Password != "secret" {


@@ -5,6 +5,7 @@ import (
 	"os"
 	"path/filepath"
 	"reflect"
+	"strings"
 	"testing"
 
 	"ds2api/internal/sse"
@@ -67,6 +68,7 @@ func TestGoCompatToolcallFixtures(t *testing.T) {
 		var fixture struct {
 			Text      string   `json:"text"`
 			ToolNames []string `json:"tool_names"`
+			Mode      string   `json:"mode"`
 		}
 		mustLoadJSON(t, fixturePath, &fixture)
@@ -75,7 +77,13 @@ func TestGoCompatToolcallFixtures(t *testing.T) {
 		}
 		mustLoadJSON(t, expectedPath, &expected)
-		got := util.ParseToolCalls(fixture.Text, fixture.ToolNames)
+		var got []util.ParsedToolCall
+		switch strings.ToLower(strings.TrimSpace(fixture.Mode)) {
+		case "standalone":
+			got = util.ParseStandaloneToolCalls(fixture.Text, fixture.ToolNames)
+		default:
+			got = util.ParseToolCalls(fixture.Text, fixture.ToolNames)
+		}
 		if len(got) == 0 && len(expected.Calls) == 0 {
 			continue
 		}


@@ -10,8 +10,8 @@ func (a Account) Identifier() string {
 	if strings.TrimSpace(a.Email) != "" {
 		return strings.TrimSpace(a.Email)
 	}
-	if strings.TrimSpace(a.Mobile) != "" {
-		return strings.TrimSpace(a.Mobile)
+	if mobile := NormalizeMobileForStorage(a.Mobile); mobile != "" {
+		return mobile
 	}
 	// Backward compatibility: old configs may contain token-only accounts.
 	// Use a stable non-sensitive synthetic id so they can still join the pool.


@@ -18,10 +18,11 @@ type Config struct {
 }
 
 type Account struct {
 	Email      string `json:"email,omitempty"`
 	Mobile     string `json:"mobile,omitempty"`
 	Password   string `json:"password,omitempty"`
 	Token      string `json:"token,omitempty"`
+	TestStatus string `json:"test_status,omitempty"`
 }
 
 type CompatConfig struct {


@@ -202,7 +202,7 @@ func TestConfigCloneNilMaps(t *testing.T) {
 func TestAccountIdentifierPreferenceMobileOverToken(t *testing.T) {
 	acc := Account{Mobile: "13800138000", Token: "tok"}
-	if acc.Identifier() != "13800138000" {
+	if acc.Identifier() != "+8613800138000" {
 		t.Fatalf("expected mobile identifier, got %q", acc.Identifier())
 	}
 }

internal/config/mobile.go (new file)

@@ -0,0 +1,82 @@
package config

import "strings"

// NormalizeMobileForStorage normalizes user input to a stable storage format.
// It keeps existing country codes and auto-prefixes mainland China numbers with +86.
func NormalizeMobileForStorage(raw string) string {
	digits, hasPlus := extractMobileDigits(raw)
	if digits == "" {
		return ""
	}
	if hasPlus {
		return "+" + digits
	}
	if isChinaMobileWithCountryCode(digits) {
		return "+86" + digits[2:]
	}
	if isChinaMainlandMobileDigits(digits) {
		return "+86" + digits
	}
	// For non-China numbers without a leading +, preserve semantics by adding it.
	return "+" + digits
}

// CanonicalMobileKey returns the comparison key used by dedupe/matching logic.
func CanonicalMobileKey(raw string) string {
	return NormalizeMobileForStorage(raw)
}

func extractMobileDigits(raw string) (digits string, hasPlus bool) {
	s := strings.TrimSpace(raw)
	if s == "" {
		return "", false
	}
	for _, r := range s {
		switch {
		case r >= '0' && r <= '9':
			goto collect
		case isMobileSeparator(r):
			continue
		case r == '+':
			hasPlus = true
			goto collect
		default:
			goto collect
		}
	}
collect:
	var b strings.Builder
	b.Grow(len(s))
	for _, r := range s {
		if r >= '0' && r <= '9' {
			b.WriteRune(r)
		}
	}
	return b.String(), hasPlus
}

func isChinaMainlandMobileDigits(digits string) bool {
	if len(digits) != 11 || digits[0] != '1' {
		return false
	}
	return digits[1] >= '3' && digits[1] <= '9'
}

func isChinaMobileWithCountryCode(digits string) bool {
	if len(digits) != 13 || !strings.HasPrefix(digits, "86") {
		return false
	}
	return isChinaMainlandMobileDigits(digits[2:])
}

func isMobileSeparator(r rune) bool {
	switch r {
	case ' ', '\t', '\n', '\r', '-', '(', ')', '.', '/':
		return true
	default:
		return false
	}
}


@@ -0,0 +1,36 @@
package config

import "testing"

func TestNormalizeMobileForStorageChinaMainlandAddsPlus86(t *testing.T) {
	if got := NormalizeMobileForStorage("13800138000"); got != "+8613800138000" {
		t.Fatalf("got %q", got)
	}
}

func TestNormalizeMobileForStorageChinaWithCountryCode(t *testing.T) {
	if got := NormalizeMobileForStorage("8613800138000"); got != "+8613800138000" {
		t.Fatalf("got %q", got)
	}
}

func TestNormalizeMobileForStorageKeepsExistingCountryCode(t *testing.T) {
	if got := NormalizeMobileForStorage(" +1 (415) 555-2671 "); got != "+14155552671" {
		t.Fatalf("got %q", got)
	}
}

func TestCanonicalMobileKeyMatchesChinaAliases(t *testing.T) {
	a := CanonicalMobileKey("+8613800138000")
	b := CanonicalMobileKey("13800138000")
	c := CanonicalMobileKey("86 13800138000")
	if a == "" || a != b || b != c {
		t.Fatalf("alias mismatch: a=%q b=%q c=%q", a, b, c)
	}
}

func TestCanonicalMobileKeyEmptyForInvalidInput(t *testing.T) {
	if got := CanonicalMobileKey("() --"); got != "" {
		t.Fatalf("got %q", got)
	}
}


@@ -97,6 +97,18 @@ func (s *Store) FindAccount(identifier string) (Account, bool) {
 	return Account{}, false
 }
+
+func (s *Store) UpdateAccountTestStatus(identifier, status string) error {
+	identifier = strings.TrimSpace(identifier)
+	s.mu.Lock()
+	defer s.mu.Unlock()
+	idx, ok := s.findAccountIndexLocked(identifier)
+	if !ok {
+		return errors.New("account not found")
+	}
+	s.cfg.Accounts[idx].TestStatus = status
+	return s.saveLocked()
+}
 
 func (s *Store) UpdateAccountToken(identifier, token string) error {
 	identifier = strings.TrimSpace(identifier)
 	s.mu.Lock()


@@ -6,6 +6,7 @@ import (
 	"fmt"
 	"net/http"
 	"strings"
+	"unicode"
 
 	"ds2api/internal/auth"
 	"ds2api/internal/config"
@@ -20,8 +21,9 @@ func (c *Client) Login(ctx context.Context, acc config.Account) (string, error) {
 	if email := strings.TrimSpace(acc.Email); email != "" {
 		payload["email"] = email
 	} else if mobile := strings.TrimSpace(acc.Mobile); mobile != "" {
-		payload["mobile"] = mobile
-		payload["area_code"] = nil
+		loginMobile, areaCode := normalizeMobileForLogin(mobile)
+		payload["mobile"] = loginMobile
+		payload["area_code"] = areaCode
 	} else {
 		return "", errors.New("missing email/mobile")
 	}
@@ -151,3 +153,26 @@ func isTokenInvalid(status int, code int, msg string) bool {
 	}
 	return strings.Contains(msg, "token") || strings.Contains(msg, "unauthorized")
 }
+
+func normalizeMobileForLogin(raw string) (mobile string, areaCode any) {
+	s := strings.TrimSpace(raw)
+	if s == "" {
+		return "", nil
+	}
+	hasPlus := strings.HasPrefix(s, "+")
+	var b strings.Builder
+	b.Grow(len(s))
+	for _, r := range s {
+		if unicode.IsDigit(r) {
+			b.WriteRune(r)
+		}
+	}
+	digits := b.String()
+	if digits == "" {
+		return "", nil
+	}
+	if (hasPlus || strings.HasPrefix(digits, "86")) && strings.HasPrefix(digits, "86") && len(digits) == 13 {
+		return digits[2:], nil
+	}
+	return digits, nil
+}


@@ -0,0 +1,33 @@
package deepseek

import "testing"

func TestNormalizeMobileForLogin_ChinaWithPlus86(t *testing.T) {
	mobile, areaCode := normalizeMobileForLogin("+8613800138000")
	if mobile != "13800138000" {
		t.Fatalf("unexpected mobile: %q", mobile)
	}
	if areaCode != nil {
		t.Fatalf("expected nil areaCode, got %#v", areaCode)
	}
}

func TestNormalizeMobileForLogin_ChinaWith86Prefix(t *testing.T) {
	mobile, areaCode := normalizeMobileForLogin("8613800138000")
	if mobile != "13800138000" {
		t.Fatalf("unexpected mobile: %q", mobile)
	}
	if areaCode != nil {
		t.Fatalf("expected nil areaCode, got %#v", areaCode)
	}
}

func TestNormalizeMobileForLogin_KeepPlainDigits(t *testing.T) {
	mobile, areaCode := normalizeMobileForLogin("13800138000")
	if mobile != "13800138000" {
		t.Fatalf("unexpected mobile: %q", mobile)
	}
	if areaCode != nil {
		t.Fatalf("expected nil areaCode, got %#v", areaCode)
	}
}


@@ -9,6 +9,9 @@ import (
 func BuildMessageResponse(messageID, model string, normalizedMessages []any, finalThinking, finalText string, toolNames []string) map[string]any {
 	detected := util.ParseToolCalls(finalText, toolNames)
+	if len(detected) == 0 && finalText == "" && finalThinking != "" {
+		detected = util.ParseToolCalls(finalThinking, toolNames)
+	}
 	content := make([]map[string]any, 0, 4)
 	if finalThinking != "" {
 		content = append(content, map[string]any{"type": "thinking", "thinking": finalThinking})


@@ -0,0 +1,62 @@
package claude

import "testing"

func TestBuildMessageResponseDetectsToolCallsFromThinkingFallback(t *testing.T) {
	resp := BuildMessageResponse(
		"msg_1",
		"claude-sonnet-4-5",
		[]any{map[string]any{"role": "user", "content": "hi"}},
		`{"tool_calls":[{"name":"search","input":{"q":"go"}}]}`,
		"",
		[]string{"search"},
	)
	if resp["stop_reason"] != "tool_use" {
		t.Fatalf("expected stop_reason=tool_use, got=%#v", resp["stop_reason"])
	}
	content, _ := resp["content"].([]map[string]any)
	if len(content) < 2 {
		t.Fatalf("expected thinking + tool_use content blocks, got=%#v", resp["content"])
	}
	last := content[len(content)-1]
	if last["type"] != "tool_use" {
		t.Fatalf("expected last content block tool_use, got=%#v", last["type"])
	}
	if last["name"] != "search" {
		t.Fatalf("expected tool name search, got=%#v", last["name"])
	}
}

func TestBuildMessageResponseSkipsThinkingFallbackWhenFinalTextExists(t *testing.T) {
	resp := BuildMessageResponse(
		"msg_1",
		"claude-sonnet-4-5",
		[]any{map[string]any{"role": "user", "content": "hi"}},
		`{"tool_calls":[{"name":"search","input":{"q":"go"}}]}`,
		"normal answer",
		[]string{"search"},
	)
	if resp["stop_reason"] != "end_turn" {
		t.Fatalf("expected stop_reason=end_turn, got=%#v", resp["stop_reason"])
	}
	content, _ := resp["content"].([]map[string]any)
	foundText := false
	foundTool := false
	for _, block := range content {
		if block["type"] == "text" && block["text"] == "normal answer" {
			foundText = true
		}
		if block["type"] == "tool_use" {
			foundTool = true
		}
	}
	if !foundText {
		t.Fatalf("expected text block with finalText, got=%#v", resp["content"])
	}
	if foundTool {
		t.Fatalf("unexpected tool_use block when finalText exists, got=%#v", resp["content"])
	}
}


@@ -8,7 +8,7 @@ import (
 )
 
 func BuildChatCompletion(completionID, model, finalPrompt, finalThinking, finalText string, toolNames []string) map[string]any {
-	detected := util.ParseToolCalls(finalText, toolNames)
+	detected := util.ParseStandaloneToolCalls(finalText, toolNames)
 	finishReason := "stop"
 	messageObj := map[string]any{"role": "assistant", "content": finalText}
 	if strings.TrimSpace(finalThinking) != "" {


@@ -11,12 +11,9 @@ import (
 )
 
 func BuildResponseObject(responseID, model, finalPrompt, finalThinking, finalText string, toolNames []string) map[string]any {
-	// Align responses tool-call semantics with chat/completions:
-	// mixed prose + tool_call payloads should still be interpreted as tool calls.
-	detected := util.ParseToolCalls(finalText, toolNames)
-	if len(detected) == 0 && strings.TrimSpace(finalThinking) != "" {
-		detected = util.ParseToolCalls(finalThinking, toolNames)
-	}
+	// Strict mode: only standalone, structured tool-call payloads are treated
+	// as executable tool calls.
+	detected := util.ParseStandaloneToolCalls(finalText, toolNames)
 	exposedOutputText := finalText
 	output := make([]any, 0, 2)
 	if len(detected) > 0 {


@@ -45,7 +45,7 @@ func TestBuildResponseObjectToolCallsFollowChatShape(t *testing.T) {
 	}
 }
 
-func TestBuildResponseObjectTreatsMixedProseToolPayloadAsToolCall(t *testing.T) {
+func TestBuildResponseObjectTreatsMixedProseToolPayloadAsText(t *testing.T) {
 	obj := BuildResponseObject(
 		"resp_test",
 		"gpt-4o",
@@ -56,17 +56,16 @@ func TestBuildResponseObjectTreatsMixedProseToolPayloadAsToolCall(t *testing.T)
 	)
 	outputText, _ := obj["output_text"].(string)
-	if outputText != "" {
-		t.Fatalf("expected output_text hidden once tool calls are detected, got %q", outputText)
+	if outputText == "" {
+		t.Fatalf("expected output_text preserved for mixed prose payload")
 	}
 	output, _ := obj["output"].([]any)
 	if len(output) != 1 {
-		t.Fatalf("expected function_call output only, got %#v", obj["output"])
+		t.Fatalf("expected one message output item, got %#v", obj["output"])
 	}
 	first, _ := output[0].(map[string]any)
-	if first["type"] != "function_call" {
-		t.Fatalf("expected first output type function_call, got %#v", first["type"])
+	if first["type"] != "message" {
+		t.Fatalf("expected message output type, got %#v", first["type"])
 	}
 }
@@ -127,7 +126,7 @@ func TestBuildResponseObjectReasoningOnlyFallsBackToOutputText(t *testing.T) {
 	}
 }
 
-func TestBuildResponseObjectDetectsToolCallFromThinkingChannel(t *testing.T) {
+func TestBuildResponseObjectIgnoresToolCallFromThinkingChannel(t *testing.T) {
 	obj := BuildResponseObject(
 		"resp_test",
 		"gpt-4o",
@@ -139,10 +138,10 @@ func TestBuildResponseObjectDetectsToolCallFromThinkingChannel(t *testing.T) {
 	output, _ := obj["output"].([]any)
 	if len(output) != 1 {
-		t.Fatalf("expected function_call output only, got %#v", obj["output"])
+		t.Fatalf("expected one message output item, got %#v", obj["output"])
 	}
 	first, _ := output[0].(map[string]any)
-	if first["type"] != "function_call" {
-		t.Fatalf("expected output function_call, got %#v", first["type"])
+	if first["type"] != "message" {
+		t.Fatalf("expected output message, got %#v", first["type"])
 	}
 }


@@ -10,8 +10,10 @@ const {
 } = require('./sse_parse');
 const {
   resolveToolcallPolicy,
+  formatIncrementalToolCallDeltas,
   normalizePreparedToolNames,
   boolDefaultTrue,
+  filterIncrementalToolCallDeltasByAllowed,
 } = require('./toolcall_policy');
 const {
   estimateTokens,
@@ -82,7 +84,9 @@ module.exports.__test = {
   shouldSkipPath,
   asString,
   resolveToolcallPolicy,
+  formatIncrementalToolCallDeltas,
   normalizePreparedToolNames,
   boolDefaultTrue,
+  filterIncrementalToolCallDeltasByAllowed,
   estimateTokens,
 };


@@ -68,6 +68,47 @@ function formatIncrementalToolCallDeltas(deltas, idStore) {
   return out;
 }
+
+function filterIncrementalToolCallDeltasByAllowed(deltas, allowedNames, seenNames) {
+  if (!Array.isArray(deltas) || deltas.length === 0) {
+    return [];
+  }
+  const seen = seenNames instanceof Map ? seenNames : new Map();
+  const allowed = new Set((allowedNames || []).filter((name) => asString(name) !== ''));
+  if (allowed.size === 0) {
+    for (const d of deltas) {
+      if (d && typeof d === 'object' && asString(d.name)) {
+        const index = Number.isInteger(d.index) ? d.index : 0;
+        seen.set(index, '__blocked__');
+      }
+    }
+    return [];
+  }
+  const out = [];
+  for (const d of deltas) {
+    if (!d || typeof d !== 'object') {
+      continue;
+    }
+    const index = Number.isInteger(d.index) ? d.index : 0;
+    const name = asString(d.name);
+    if (name) {
+      if (!allowed.has(name)) {
+        seen.set(index, '__blocked__');
+        continue;
+      }
+      seen.set(index, name);
+      out.push(d);
+      continue;
+    }
+    const existing = asString(seen.get(index));
+    if (!existing || existing === '__blocked__') {
+      continue;
+    }
+    out.push(d);
+  }
+  return out;
+}
+
 function ensureStreamToolCallID(idStore, index) {
   const key = Number.isInteger(index) ? index : 0;
   const existing = idStore.get(key);
@@ -104,4 +145,5 @@ module.exports = {
   normalizePreparedToolNames,
   boolDefaultTrue,
   formatIncrementalToolCallDeltas,
+  filterIncrementalToolCallDeltasByAllowed,
 };


@@ -5,7 +5,7 @@ const {
   createToolSieveState,
   processToolSieveChunk,
   flushToolSieve,
-  parseToolCalls,
+  parseStandaloneToolCalls,
   formatOpenAIStreamToolCalls,
 } = require('../helpers/stream-tool-sieve');
 const {
@@ -24,7 +24,6 @@ const {
 } = require('./token_usage');
 const {
   resolveToolcallPolicy,
-  formatIncrementalToolCallDeltas,
 } = require('./toolcall_policy');
 const {
   createChatCompletionEmitter,
@@ -130,7 +129,6 @@ async function handleVercelStream(req, res, rawBody, payload) {
   let thinkingText = '';
   let outputText = '';
   const toolSieveEnabled = toolPolicy.toolSieveEnabled;
-  const emitEarlyToolDeltas = toolPolicy.emitEarlyToolDeltas;
   const toolSieveState = createToolSieveState();
   let toolCallsEmitted = false;
   const streamToolCallIDs = new Map();
@@ -155,13 +153,18 @@ async function handleVercelStream(req, res, rawBody, payload) {
     await releaseLease();
     return;
   }
-  const detected = parseToolCalls(outputText, toolNames);
+  const detected = parseStandaloneToolCalls(outputText, toolNames);
   if (detected.length > 0 && !toolCallsEmitted) {
     toolCallsEmitted = true;
-    sendDeltaFrame({ tool_calls: formatOpenAIStreamToolCalls(detected) });
+    sendDeltaFrame({ tool_calls: formatOpenAIStreamToolCalls(detected, streamToolCallIDs) });
   } else if (toolSieveEnabled) {
     const tailEvents = flushToolSieve(toolSieveState, toolNames);
     for (const evt of tailEvents) {
+      if (evt.type === 'tool_calls' && Array.isArray(evt.calls) && evt.calls.length > 0) {
+        toolCallsEmitted = true;
+        sendDeltaFrame({ tool_calls: formatOpenAIStreamToolCalls(evt.calls, streamToolCallIDs) });
+        continue;
+      }
       if (evt.text) {
         sendDeltaFrame({ content: evt.text });
       }
@@ -252,17 +255,9 @@ async function handleVercelStream(req, res, rawBody, payload) {
   }
   const events = processToolSieveChunk(toolSieveState, p.text, toolNames);
   for (const evt of events) {
-    if (evt.type === 'tool_call_deltas' && Array.isArray(evt.deltas) && evt.deltas.length > 0) {
-      if (!emitEarlyToolDeltas) {
-        continue;
-      }
-      toolCallsEmitted = true;
-      sendDeltaFrame({ tool_calls: formatIncrementalToolCallDeltas(evt.deltas, streamToolCallIDs) });
-      continue;
-    }
     if (evt.type === 'tool_calls') {
       toolCallsEmitted = true;
-      sendDeltaFrame({ tool_calls: formatOpenAIStreamToolCalls(evt.calls) });
+      sendDeltaFrame({ tool_calls: formatOpenAIStreamToolCalls(evt.calls, streamToolCallIDs) });
       continue;
     }
     if (evt.text) {


@@ -2,13 +2,13 @@
 const crypto = require('crypto');
 
-function formatOpenAIStreamToolCalls(calls) {
+function formatOpenAIStreamToolCalls(calls, idStore) {
   if (!Array.isArray(calls) || calls.length === 0) {
     return [];
   }
   return calls.map((c, idx) => ({
     index: idx,
-    id: `call_${newCallID()}`,
+    id: ensureStreamToolCallID(idStore, idx),
     type: 'function',
     function: {
       name: c.name,
@@ -17,6 +17,20 @@ function formatOpenAIStreamToolCalls(calls) {
   }));
 }
 
+function ensureStreamToolCallID(idStore, index) {
+  if (!(idStore instanceof Map)) {
+    return `call_${newCallID()}`;
+  }
+  const key = Number.isInteger(index) ? index : 0;
+  const existing = idStore.get(key);
+  if (existing) {
+    return existing;
+  }
+  const next = `call_${newCallID()}`;
+  idStore.set(key, next);
+  return next;
+}
+
 function newCallID() {
   if (typeof crypto.randomUUID === 'function') {
     return crypto.randomUUID().replace(/-/g, '');


@@ -1,226 +0,0 @@
'use strict';

const {
  looksLikeToolExampleContext,
  insideCodeFence,
} = require('./state');
const {
  findObjectFieldValueStart,
  parseJSONStringLiteral,
  skipSpaces,
} = require('./jsonscan');

function buildIncrementalToolDeltas(state) {
  const captured = state.capture || '';
  if (!captured) {
    return [];
  }
  if (looksLikeToolExampleContext(state.recentTextTail)) {
    return [];
  }
  const lower = captured.toLowerCase();
  const keyIdx = lower.indexOf('tool_calls');
  if (keyIdx < 0) {
    return [];
  }
  const start = captured.slice(0, keyIdx).lastIndexOf('{');
  if (start < 0) {
    return [];
  }
  if (insideCodeFence((state.recentTextTail || '') + captured.slice(0, start))) {
    return [];
  }
  const callStart = findFirstToolCallObjectStart(captured, keyIdx);
  if (callStart < 0) {
    return [];
  }
  const deltas = [];
  if (!state.toolName) {
    const name = extractToolCallName(captured, callStart);
    if (!name) {
      return [];
    }
    state.toolName = name;
  }
  if (state.toolArgsStart < 0) {
    const args = findToolCallArgsStart(captured, callStart);
    if (args) {
      state.toolArgsString = Boolean(args.stringMode);
      state.toolArgsStart = state.toolArgsString ? args.start + 1 : args.start;
      state.toolArgsSent = state.toolArgsStart;
    }
  }
  if (!state.toolNameSent) {
    if (state.toolArgsStart < 0) {
      return [];
    }
    state.toolNameSent = true;
    deltas.push({ index: 0, name: state.toolName });
  }
  if (state.toolArgsStart < 0 || state.toolArgsDone) {
    return deltas;
  }
  const progress = scanToolCallArgsProgress(captured, state.toolArgsStart, state.toolArgsString);
  if (!progress) {
    return deltas;
  }
  if (progress.end > state.toolArgsSent) {
    deltas.push({
      index: 0,
      arguments: captured.slice(state.toolArgsSent, progress.end),
    });
    state.toolArgsSent = progress.end;
  }
  if (progress.complete) {
    state.toolArgsDone = true;
  }
  return deltas;
}

function findFirstToolCallObjectStart(text, keyIdx) {
  const arrStart = findToolCallsArrayStart(text, keyIdx);
  if (arrStart < 0) {
    return -1;
  }
  const i = skipSpaces(text, arrStart + 1);
  if (i >= text.length || text[i] !== '{') {
    return -1;
  }
  return i;
}

function findToolCallsArrayStart(text, keyIdx) {
  let i = keyIdx + 'tool_calls'.length;
  while (i < text.length && text[i] !== ':') {
    i += 1;
  }
  if (i >= text.length) {
    return -1;
  }
  i = skipSpaces(text, i + 1);
  if (i >= text.length || text[i] !== '[') {
    return -1;
  }
  return i;
}

function extractToolCallName(text, callStart) {
  let valueStart = findObjectFieldValueStart(text, callStart, ['name']);
  if (valueStart < 0 || text[valueStart] !== '"') {
    const fnStart = findFunctionObjectStart(text, callStart);
    if (fnStart < 0) {
      return '';
    }
    valueStart = findObjectFieldValueStart(text, fnStart, ['name']);
    if (valueStart < 0 || text[valueStart] !== '"') {
      return '';
    }
  }
  const parsed = parseJSONStringLiteral(text, valueStart);
  if (!parsed) {
    return '';
  }
  return parsed.value;
}

function findToolCallArgsStart(text, callStart) {
  const keys = ['input', 'arguments', 'args', 'parameters', 'params'];
  let valueStart = findObjectFieldValueStart(text, callStart, keys);
  if (valueStart < 0) {
    const fnStart = findFunctionObjectStart(text, callStart);
    if (fnStart < 0) {
      return null;
    }
    valueStart = findObjectFieldValueStart(text, fnStart, keys);
    if (valueStart < 0) {
      return null;
    }
  }
  if (valueStart >= text.length) {
    return null;
  }
  const ch = text[valueStart];
  if (ch === '{' || ch === '[') {
    return { start: valueStart, stringMode: false };
  }
  if (ch === '"') {
    return { start: valueStart, stringMode: true };
}
return null;
}
function scanToolCallArgsProgress(text, start, stringMode) {
if (start < 0 || start > text.length) {
return null;
}
if (stringMode) {
let escaped = false;
for (let i = start; i < text.length; i += 1) {
const ch = text[i];
if (escaped) {
escaped = false;
continue;
}
if (ch === '\\') {
escaped = true;
continue;
}
if (ch === '"') {
return { end: i, complete: true };
}
}
return { end: text.length, complete: false };
}
if (start >= text.length || (text[start] !== '{' && text[start] !== '[')) {
return null;
}
let depth = 0;
let quote = '';
let escaped = false;
for (let i = start; i < text.length; i += 1) {
const ch = text[i];
if (quote) {
if (escaped) {
escaped = false;
continue;
}
if (ch === '\\') {
escaped = true;
continue;
}
if (ch === quote) {
quote = '';
}
continue;
}
if (ch === '"' || ch === "'") {
quote = ch;
continue;
}
if (ch === '{' || ch === '[') {
depth += 1;
continue;
}
if (ch === '}' || ch === ']') {
depth -= 1;
if (depth === 0) {
return { end: i + 1, complete: true };
}
}
}
return { end: text.length, complete: false };
}
function findFunctionObjectStart(text, callStart) {
const valueStart = findObjectFieldValueStart(text, callStart, ['function']);
if (valueStart < 0 || valueStart >= text.length || text[valueStart] !== '{') {
return -1;
}
return valueStart;
}
module.exports = {
buildIncrementalToolDeltas,
};


@@ -10,7 +10,9 @@ const {
 const {
   extractToolNames,
   parseToolCalls,
+  parseToolCallsDetailed,
   parseStandaloneToolCalls,
+  parseStandaloneToolCallsDetailed,
 } = require('./parse');
 const {
   formatOpenAIStreamToolCalls,
@@ -22,6 +24,8 @@ module.exports = {
   processToolSieveChunk,
   flushToolSieve,
   parseToolCalls,
+  parseToolCallsDetailed,
   parseStandaloneToolCalls,
+  parseStandaloneToolCallsDetailed,
   formatOpenAIStreamToolCalls,
 };


@@ -1,14 +1,14 @@
 'use strict';
-const TOOL_CALL_PATTERN = /\{\s*["']tool_calls["']\s*:\s*\[(.*?)\]\s*\}/s;
 const {
   toStringSafe,
   looksLikeToolExampleContext,
 } = require('./state');
 const {
-  extractJSONObjectFrom,
-} = require('./jsonscan');
+  stripFencedCodeBlocks,
+  buildToolCallCandidates,
+  parseToolCallsPayload,
+} = require('./parse_payload');
 function extractToolNames(tools) {
   if (!Array.isArray(tools) || tools.length === 0) {
@@ -29,253 +29,144 @@ function extractToolNames(tools) {
 }
 function parseToolCalls(text, toolNames) {
+  return parseToolCallsDetailed(text, toolNames).calls;
+}
+function parseToolCallsDetailed(text, toolNames) {
+  const result = emptyParseResult();
   if (!toStringSafe(text)) {
-    return [];
+    return result;
   }
   const sanitized = stripFencedCodeBlocks(text);
   if (!toStringSafe(sanitized)) {
-    return [];
+    return result;
   }
+  result.sawToolCallSyntax = sanitized.toLowerCase().includes('tool_calls');
   const candidates = buildToolCallCandidates(sanitized);
   let parsed = [];
   for (const c of candidates) {
     parsed = parseToolCallsPayload(c);
     if (parsed.length > 0) {
+      result.sawToolCallSyntax = true;
       break;
     }
   }
   if (parsed.length === 0) {
-    return [];
+    return result;
   }
-  return filterToolCalls(parsed, toolNames);
-}
-function stripFencedCodeBlocks(text) {
-  const t = typeof text === 'string' ? text : '';
-  if (!t) {
-    return '';
-  }
-  return t.replace(/```[\s\S]*?```/g, ' ');
-}
+  const filtered = filterToolCallsDetailed(parsed, toolNames);
+  result.calls = filtered.calls;
+  result.rejectedToolNames = filtered.rejectedToolNames;
+  result.rejectedByPolicy = filtered.rejectedToolNames.length > 0 && filtered.calls.length === 0;
+  return result;
+}
 function parseStandaloneToolCalls(text, toolNames) {
+  return parseStandaloneToolCallsDetailed(text, toolNames).calls;
+}
+function parseStandaloneToolCallsDetailed(text, toolNames) {
+  const result = emptyParseResult();
   const trimmed = toStringSafe(text);
   if (!trimmed) {
-    return [];
+    return result;
   }
-  if ((trimmed.startsWith('```') && trimmed.endsWith('```')) || trimmed.includes('```')) {
-    return [];
-  }
   if (looksLikeToolExampleContext(trimmed)) {
-    return [];
+    return result;
   }
-  const candidates = [trimmed];
-  if (trimmed.startsWith('```') && trimmed.endsWith('```')) {
-    const m = trimmed.match(/```(?:json)?\s*([\s\S]*?)\s*```/i);
-    if (m && m[1]) {
-      candidates.push(toStringSafe(m[1]));
-    }
-  }
-  for (const candidate of candidates) {
-    const c = toStringSafe(candidate);
-    if (!c) {
-      continue;
-    }
-    if (!c.startsWith('{') && !c.startsWith('[')) {
-      continue;
-    }
-    const parsed = parseToolCallsPayload(c);
-    if (parsed.length > 0) {
-      return filterToolCalls(parsed, toolNames);
-    }
-  }
-  return [];
+  result.sawToolCallSyntax = trimmed.toLowerCase().includes('tool_calls');
+  if (!trimmed.startsWith('{') && !trimmed.startsWith('[')) {
+    return result;
+  }
+  const parsed = parseToolCallsPayload(trimmed);
+  if (parsed.length === 0) {
+    return result;
+  }
+  result.sawToolCallSyntax = true;
+  const filtered = filterToolCallsDetailed(parsed, toolNames);
+  result.calls = filtered.calls;
+  result.rejectedToolNames = filtered.rejectedToolNames;
+  result.rejectedByPolicy = filtered.rejectedToolNames.length > 0 && filtered.calls.length === 0;
+  return result;
 }
-function buildToolCallCandidates(text) {
-  const trimmed = toStringSafe(text);
-  const candidates = [trimmed];
-  const fenced = trimmed.match(/```(?:json)?\s*([\s\S]*?)\s*```/gi) || [];
-  for (const block of fenced) {
-    const m = block.match(/```(?:json)?\s*([\s\S]*?)\s*```/i);
-    if (m && m[1]) {
-      candidates.push(toStringSafe(m[1]));
-    }
-  }
-  for (const candidate of extractToolCallObjects(trimmed)) {
-    candidates.push(toStringSafe(candidate));
-  }
-  const first = trimmed.indexOf('{');
-  const last = trimmed.lastIndexOf('}');
-  if (first >= 0 && last > first) {
-    candidates.push(toStringSafe(trimmed.slice(first, last + 1)));
-  }
-  const m = trimmed.match(TOOL_CALL_PATTERN);
-  if (m && m[1]) {
-    candidates.push(`{"tool_calls":[${m[1]}]}`);
-  }
-  return [...new Set(candidates.filter(Boolean))];
-}
-function extractToolCallObjects(text) {
-  const raw = toStringSafe(text);
-  if (!raw) {
-    return [];
-  }
-  const lower = raw.toLowerCase();
-  const out = [];
-  let offset = 0;
-  // eslint-disable-next-line no-constant-condition
-  while (true) {
-    let idx = lower.indexOf('tool_calls', offset);
-    if (idx < 0) {
-      break;
-    }
-    let start = raw.slice(0, idx).lastIndexOf('{');
-    while (start >= 0) {
-      const obj = extractJSONObjectFrom(raw, start);
-      if (obj.ok) {
-        out.push(raw.slice(start, obj.end).trim());
-        offset = obj.end;
-        idx = -1;
-        break;
-      }
-      start = raw.slice(0, start).lastIndexOf('{');
-    }
-    if (idx >= 0) {
-      offset = idx + 'tool_calls'.length;
-    }
-  }
-  return out;
-}
-function parseToolCallsPayload(payload) {
-  let decoded;
-  try {
-    decoded = JSON.parse(payload);
-  } catch (_err) {
-    return [];
-  }
-  if (Array.isArray(decoded)) {
-    return parseToolCallList(decoded);
-  }
-  if (!decoded || typeof decoded !== 'object') {
-    return [];
-  }
-  if (decoded.tool_calls) {
-    return parseToolCallList(decoded.tool_calls);
-  }
-  const one = parseToolCallItem(decoded);
-  return one ? [one] : [];
-}
-function parseToolCallList(v) {
-  if (!Array.isArray(v)) {
-    return [];
-  }
-  const out = [];
-  for (const item of v) {
-    if (!item || typeof item !== 'object') {
-      continue;
-    }
-    const one = parseToolCallItem(item);
-    if (one) {
-      out.push(one);
-    }
-  }
-  return out;
-}
-function parseToolCallItem(m) {
-  let name = toStringSafe(m.name);
-  let inputRaw = m.input;
-  let hasInput = Object.prototype.hasOwnProperty.call(m, 'input');
-  const fn = m.function && typeof m.function === 'object' ? m.function : null;
-  if (fn) {
-    if (!name) {
-      name = toStringSafe(fn.name);
-    }
-    if (!hasInput && Object.prototype.hasOwnProperty.call(fn, 'arguments')) {
-      inputRaw = fn.arguments;
-      hasInput = true;
-    }
-  }
-  if (!hasInput) {
-    for (const k of ['arguments', 'args', 'parameters', 'params']) {
-      if (Object.prototype.hasOwnProperty.call(m, k)) {
-        inputRaw = m[k];
-        hasInput = true;
-        break;
-      }
-    }
-  }
-  if (!name) {
-    return null;
-  }
-  return {
-    name,
-    input: parseToolCallInput(inputRaw),
-  };
-}
-function parseToolCallInput(v) {
-  if (v == null) {
-    return {};
-  }
-  if (typeof v === 'string') {
-    const raw = toStringSafe(v);
-    if (!raw) {
-      return {};
-    }
-    try {
-      const parsed = JSON.parse(raw);
-      if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
-        return parsed;
-      }
-      return { _raw: raw };
-    } catch (_err) {
-      return { _raw: raw };
-    }
-  }
-  if (typeof v === 'object' && !Array.isArray(v)) {
-    return v;
-  }
-  try {
-    const parsed = JSON.parse(JSON.stringify(v));
-    if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
-      return parsed;
-    }
-  } catch (_err) {
-    return {};
-  }
-  return {};
-}
-function filterToolCalls(parsed, toolNames) {
-  const allowed = new Set((toolNames || []).filter(Boolean));
-  const out = [];
-  for (const tc of parsed) {
-    if (!tc || !tc.name) {
-      continue;
-    }
-    if (allowed.size > 0 && !allowed.has(tc.name)) {
-      continue;
-    }
-    out.push({ name: tc.name, input: tc.input || {} });
-  }
-  if (out.length === 0 && parsed.length > 0) {
-    for (const tc of parsed) {
-      if (!tc || !tc.name) {
-        continue;
-      }
-      out.push({ name: tc.name, input: tc.input || {} });
-    }
-  }
-  return out;
-}
+function emptyParseResult() {
+  return {
+    calls: [],
+    sawToolCallSyntax: false,
+    rejectedByPolicy: false,
+    rejectedToolNames: [],
+  };
+}
+function filterToolCallsDetailed(parsed, toolNames) {
+  const sourceNames = Array.isArray(toolNames) ? toolNames : [];
+  const allowed = new Set();
+  const allowedCanonical = new Map();
+  for (const item of sourceNames) {
+    const name = toStringSafe(item);
+    if (!name) {
+      continue;
+    }
+    allowed.add(name);
+    const lower = name.toLowerCase();
+    if (!allowedCanonical.has(lower)) {
+      allowedCanonical.set(lower, name);
+    }
+  }
+  if (allowed.size === 0) {
+    const rejected = [];
+    const seen = new Set();
+    for (const tc of parsed) {
+      if (!tc || !tc.name) {
+        continue;
+      }
+      if (seen.has(tc.name)) {
+        continue;
+      }
+      seen.add(tc.name);
+      rejected.push(tc.name);
+    }
+    return { calls: [], rejectedToolNames: rejected };
+  }
+  const calls = [];
+  const rejected = [];
+  const seenRejected = new Set();
+  for (const tc of parsed) {
+    if (!tc || !tc.name) {
+      continue;
+    }
+    let matchedName = '';
+    if (allowed.has(tc.name)) {
+      matchedName = tc.name;
+    } else {
+      matchedName = allowedCanonical.get(tc.name.toLowerCase()) || '';
+    }
+    if (!matchedName) {
+      if (!seenRejected.has(tc.name)) {
+        seenRejected.add(tc.name);
+        rejected.push(tc.name);
+      }
+      continue;
+    }
+    calls.push({
+      name: matchedName,
+      input: tc.input && typeof tc.input === 'object' && !Array.isArray(tc.input) ? tc.input : {},
+    });
+  }
+  return { calls, rejectedToolNames: rejected };
+}
 module.exports = {
   extractToolNames,
   parseToolCalls,
+  parseToolCallsDetailed,
   parseStandaloneToolCalls,
+  parseStandaloneToolCallsDetailed,
 };


@@ -0,0 +1,196 @@
'use strict';
const TOOL_CALL_PATTERN = /\{\s*["']tool_calls["']\s*:\s*\[(.*?)\]\s*\}/s;
const {
toStringSafe,
} = require('./state');
const {
extractJSONObjectFrom,
} = require('./jsonscan');
function stripFencedCodeBlocks(text) {
const t = typeof text === 'string' ? text : '';
if (!t) {
return '';
}
return t.replace(/```[\s\S]*?```/g, ' ');
}
function buildToolCallCandidates(text) {
const trimmed = toStringSafe(text);
const candidates = [trimmed];
const fenced = trimmed.match(/```(?:json)?\s*([\s\S]*?)\s*```/gi) || [];
for (const block of fenced) {
const m = block.match(/```(?:json)?\s*([\s\S]*?)\s*```/i);
if (m && m[1]) {
candidates.push(toStringSafe(m[1]));
}
}
for (const candidate of extractToolCallObjects(trimmed)) {
candidates.push(toStringSafe(candidate));
}
const first = trimmed.indexOf('{');
const last = trimmed.lastIndexOf('}');
if (first >= 0 && last > first) {
candidates.push(toStringSafe(trimmed.slice(first, last + 1)));
}
const m = trimmed.match(TOOL_CALL_PATTERN);
if (m && m[1]) {
candidates.push(`{"tool_calls":[${m[1]}]}`);
}
return [...new Set(candidates.filter(Boolean))];
}
function extractToolCallObjects(text) {
const raw = toStringSafe(text);
if (!raw) {
return [];
}
const lower = raw.toLowerCase();
const out = [];
let offset = 0;
// eslint-disable-next-line no-constant-condition
while (true) {
let idx = lower.indexOf('tool_calls', offset);
if (idx < 0) {
break;
}
let start = raw.slice(0, idx).lastIndexOf('{');
while (start >= 0) {
const obj = extractJSONObjectFrom(raw, start);
if (obj.ok) {
out.push(raw.slice(start, obj.end).trim());
offset = obj.end;
idx = -1;
break;
}
start = raw.slice(0, start).lastIndexOf('{');
}
if (idx >= 0) {
offset = idx + 'tool_calls'.length;
}
}
return out;
}
function parseToolCallsPayload(payload) {
let decoded;
try {
decoded = JSON.parse(payload);
} catch (_err) {
return [];
}
if (Array.isArray(decoded)) {
return parseToolCallList(decoded);
}
if (!decoded || typeof decoded !== 'object') {
return [];
}
if (decoded.tool_calls) {
return parseToolCallList(decoded.tool_calls);
}
const one = parseToolCallItem(decoded);
return one ? [one] : [];
}
function parseToolCallList(v) {
if (!Array.isArray(v)) {
return [];
}
const out = [];
for (const item of v) {
if (!item || typeof item !== 'object') {
continue;
}
const one = parseToolCallItem(item);
if (one) {
out.push(one);
}
}
return out;
}
function parseToolCallItem(m) {
let name = toStringSafe(m.name);
let inputRaw = m.input;
let hasInput = Object.prototype.hasOwnProperty.call(m, 'input');
const fn = m.function && typeof m.function === 'object' ? m.function : null;
if (fn) {
if (!name) {
name = toStringSafe(fn.name);
}
if (!hasInput && Object.prototype.hasOwnProperty.call(fn, 'arguments')) {
inputRaw = fn.arguments;
hasInput = true;
}
}
if (!hasInput) {
for (const k of ['arguments', 'args', 'parameters', 'params']) {
if (Object.prototype.hasOwnProperty.call(m, k)) {
inputRaw = m[k];
hasInput = true;
break;
}
}
}
if (!name) {
return null;
}
return {
name,
input: parseToolCallInput(inputRaw),
};
}
function parseToolCallInput(v) {
if (v == null) {
return {};
}
if (typeof v === 'string') {
const raw = toStringSafe(v);
if (!raw) {
return {};
}
try {
const parsed = JSON.parse(raw);
if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
return parsed;
}
return { _raw: raw };
} catch (_err) {
return { _raw: raw };
}
}
if (typeof v === 'object' && !Array.isArray(v)) {
return v;
}
try {
const parsed = JSON.parse(JSON.stringify(v));
if (parsed && typeof parsed === 'object' && !Array.isArray(parsed)) {
return parsed;
}
} catch (_err) {
return {};
}
return {};
}
module.exports = {
stripFencedCodeBlocks,
buildToolCallCandidates,
parseToolCallsPayload,
};


@@ -1,16 +1,12 @@
 'use strict';
 const {
-  TOOL_SIEVE_CAPTURE_LIMIT,
   resetIncrementalToolState,
   noteText,
   insideCodeFence,
 } = require('./state');
 const {
-  buildIncrementalToolDeltas,
-} = require('./incremental');
-const {
-  parseStandaloneToolCalls,
+  parseStandaloneToolCallsDetailed,
 } = require('./parse');
 const {
   extractJSONObjectFrom,
@@ -24,6 +20,21 @@ function processToolSieveChunk(state, chunk, toolNames) {
     state.pending += chunk;
   }
   const events = [];
+  if (Array.isArray(state.pendingToolCalls) && state.pendingToolCalls.length > 0) {
+    const pending = state.pending || '';
+    if (pending.trim() !== '') {
+      const content = (state.pendingToolRaw || '') + pending;
+      state.pending = '';
+      state.pendingToolRaw = '';
+      state.pendingToolCalls = [];
+      noteText(state, content);
+      events.push({ type: 'text', text: content });
+    } else {
+      return events;
+    }
+  }
   // eslint-disable-next-line no-constant-condition
   while (true) {
     if (state.capturing) {
@@ -31,57 +42,50 @@ function processToolSieveChunk(state, chunk, toolNames) {
       state.capture += state.pending;
       state.pending = '';
     }
-    const deltas = buildIncrementalToolDeltas(state);
-    if (deltas.length > 0) {
-      events.push({ type: 'tool_call_deltas', deltas });
-    }
     const consumed = consumeToolCapture(state, toolNames);
     if (!consumed.ready) {
-      if (state.capture.length > TOOL_SIEVE_CAPTURE_LIMIT) {
-        noteText(state, state.capture);
-        events.push({ type: 'text', text: state.capture });
-        state.capture = '';
-        state.capturing = false;
-        resetIncrementalToolState(state);
-        continue;
-      }
       break;
     }
+    const captured = state.capture;
     state.capture = '';
     state.capturing = false;
     resetIncrementalToolState(state);
+    if (Array.isArray(consumed.calls) && consumed.calls.length > 0) {
+      state.pendingToolRaw = captured;
+      state.pendingToolCalls = consumed.calls;
+      continue;
+    }
     if (consumed.prefix) {
       noteText(state, consumed.prefix);
       events.push({ type: 'text', text: consumed.prefix });
     }
-    if (Array.isArray(consumed.calls) && consumed.calls.length > 0) {
-      events.push({ type: 'tool_calls', calls: consumed.calls });
-    }
     if (consumed.suffix) {
       state.pending += consumed.suffix;
     }
     continue;
   }
-  if (!state.pending) {
+  const pending = state.pending || '';
+  if (!pending) {
     break;
   }
-  const start = findToolSegmentStart(state.pending);
+  const start = findToolSegmentStart(pending);
   if (start >= 0) {
-    const prefix = state.pending.slice(0, start);
+    const prefix = pending.slice(0, start);
     if (prefix) {
       noteText(state, prefix);
       events.push({ type: 'text', text: prefix });
     }
-    state.capture = state.pending.slice(start);
     state.pending = '';
+    state.capture += pending.slice(start);
     state.capturing = true;
     resetIncrementalToolState(state);
     continue;
   }
-  const [safe, hold] = splitSafeContentForToolDetection(state.pending);
+  const [safe, hold] = splitSafeContentForToolDetection(pending);
   if (!safe) {
     break;
   }
@@ -97,6 +101,13 @@ function flushToolSieve(state, toolNames) {
     return [];
   }
   const events = processToolSieveChunk(state, '', toolNames);
+  if (Array.isArray(state.pendingToolCalls) && state.pendingToolCalls.length > 0) {
+    events.push({ type: 'tool_calls', calls: state.pendingToolCalls });
+    state.pendingToolRaw = '';
+    state.pendingToolCalls = [];
+  }
   if (state.capturing) {
     const consumed = consumeToolCapture(state, toolNames);
     if (consumed.ready) {
@@ -119,11 +130,13 @@ function flushToolSieve(state, toolNames) {
     state.capturing = false;
     resetIncrementalToolState(state);
   }
   if (state.pending) {
     noteText(state, state.pending);
     events.push({ type: 'text', text: state.pending });
     state.pending = '';
   }
   return events;
 }
@@ -163,11 +176,10 @@ function findToolSegmentStart(s) {
   let offset = 0;
   // eslint-disable-next-line no-constant-condition
   while (true) {
-    const keyRel = lower.indexOf('tool_calls', offset);
-    if (keyRel < 0) {
+    const keyIdx = lower.indexOf('tool_calls', offset);
+    if (keyIdx < 0) {
       return -1;
     }
-    const keyIdx = keyRel;
     const start = s.slice(0, keyIdx).lastIndexOf('{');
     const candidateStart = start >= 0 ? start : keyIdx;
     if (!insideCodeFence(s.slice(0, candidateStart))) {
@@ -178,7 +190,7 @@ function findToolSegmentStart(s) {
 }
 function consumeToolCapture(state, toolNames) {
-  const captured = state.capture;
+  const captured = state.capture || '';
   if (!captured) {
     return { ready: false, prefix: '', calls: [], suffix: '' };
   }
@@ -195,8 +207,10 @@ function consumeToolCapture(state, toolNames) {
   if (!obj.ok) {
     return { ready: false, prefix: '', calls: [], suffix: '' };
   }
   const prefixPart = captured.slice(0, start);
   const suffixPart = captured.slice(obj.end);
   if (insideCodeFence((state.recentTextTail || '') + prefixPart)) {
     return {
       ready: true,
@@ -205,9 +219,19 @@ function consumeToolCapture(state, toolNames) {
       suffix: '',
     };
   }
-  const parsed = parseStandaloneToolCalls(captured.slice(start, obj.end), toolNames);
-  if (parsed.length === 0) {
-    if (state.toolNameSent) {
+  if ((state.recentTextTail || '').trim() !== '' || prefixPart.trim() !== '' || suffixPart.trim() !== '') {
+    return {
+      ready: true,
+      prefix: captured,
+      calls: [],
+      suffix: '',
+    };
+  }
+  const parsed = parseStandaloneToolCallsDetailed(captured.slice(start, obj.end), toolNames);
+  if (!Array.isArray(parsed.calls) || parsed.calls.length === 0) {
+    if (parsed.sawToolCallSyntax && parsed.rejectedByPolicy) {
       return {
         ready: true,
         prefix: prefixPart,
@@ -222,26 +246,11 @@ function consumeToolCapture(state, toolNames) {
         suffix: '',
       };
     }
-    if (state.toolNameSent) {
-      if (parsed.length > 1) {
-        return {
-          ready: true,
-          prefix: prefixPart,
-          calls: parsed.slice(1),
-          suffix: suffixPart,
-        };
-      }
-      return {
-        ready: true,
-        prefix: prefixPart,
-        calls: [],
-        suffix: suffixPart,
-      };
-    }
     return {
       ready: true,
       prefix: prefixPart,
-      calls: parsed,
+      calls: parsed.calls,
       suffix: suffixPart,
     };
   }


@@ -1,6 +1,5 @@
 'use strict';
-const TOOL_SIEVE_CAPTURE_LIMIT = 8 * 1024;
 const TOOL_SIEVE_CONTEXT_TAIL_LIMIT = 256;
function createToolSieveState() { function createToolSieveState() {
@@ -9,6 +8,9 @@ function createToolSieveState() {
     capture: '',
     capturing: false,
     recentTextTail: '',
+    pendingToolRaw: '',
+    pendingToolCalls: [],
+    disableDeltas: false,
     toolNameSent: false,
     toolName: '',
     toolArgsStart: -1,
@@ -19,6 +21,7 @@ function createToolSieveState() {
 }
 function resetIncrementalToolState(state) {
+  state.disableDeltas = false;
   state.toolNameSent = false;
   state.toolName = '';
   state.toolArgsStart = -1;
@@ -78,7 +81,6 @@ function toStringSafe(v) {
 }
 module.exports = {
-  TOOL_SIEVE_CAPTURE_LIMIT,
   TOOL_SIEVE_CONTEXT_TAIL_LIMIT,
   createToolSieveState,
   resetIncrementalToolState,


@@ -57,16 +57,20 @@ func NewApp() *App {
 	r.Use(cors)
 	r.Use(timeout(0))
-	r.Get("/healthz", func(w http.ResponseWriter, _ *http.Request) {
+	healthzHandler := func(w http.ResponseWriter, _ *http.Request) {
 		w.Header().Set("Content-Type", "application/json")
 		w.WriteHeader(http.StatusOK)
 		_, _ = w.Write([]byte(`{"status":"ok"}`))
-	})
-	r.Get("/readyz", func(w http.ResponseWriter, _ *http.Request) {
+	}
+	readyzHandler := func(w http.ResponseWriter, _ *http.Request) {
 		w.Header().Set("Content-Type", "application/json")
 		w.WriteHeader(http.StatusOK)
 		_, _ = w.Write([]byte(`{"status":"ready"}`))
-	})
+	}
+	r.Get("/healthz", healthzHandler)
+	r.Head("/healthz", healthzHandler)
+	r.Get("/readyz", readyzHandler)
+	r.Head("/readyz", readyzHandler)
 	openai.RegisterRoutes(r, openaiHandler)
 	claude.RegisterRoutes(r, claudeHandler)
 	gemini.RegisterRoutes(r, geminiHandler)


@@ -0,0 +1,20 @@
package server
import (
"net/http"
"net/http/httptest"
"testing"
)
func TestHealthEndpointsSupportHEAD(t *testing.T) {
app := NewApp()
for _, path := range []string{"/healthz", "/readyz"} {
req := httptest.NewRequest(http.MethodHead, path, nil)
rec := httptest.NewRecorder()
app.Router.ServeHTTP(rec, req)
if rec.Code != http.StatusOK {
t.Fatalf("expected %s HEAD status 200, got %d", path, rec.Code)
}
}
}


@@ -17,6 +17,12 @@ func (r *Runner) caseHealthz(ctx context.Context, cc *caseContext) error {
 	var m map[string]any
 	_ = json.Unmarshal(resp.Body, &m)
 	cc.assert("status_ok", asString(m["status"]) == "ok", fmt.Sprintf("body=%s", string(resp.Body)))
+	headResp, headErr := cc.request(ctx, requestSpec{Method: http.MethodHead, Path: "/healthz", Retryable: true})
+	if headErr != nil {
+		return headErr
+	}
+	cc.assert("head_status_200", headResp.StatusCode == http.StatusOK, fmt.Sprintf("status=%d", headResp.StatusCode))
 	return nil
 }
@@ -29,6 +35,12 @@ func (r *Runner) caseReadyz(ctx context.Context, cc *caseContext) error {
 	var m map[string]any
 	_ = json.Unmarshal(resp.Body, &m)
 	cc.assert("status_ready", asString(m["status"]) == "ready", fmt.Sprintf("body=%s", string(resp.Body)))
+	headResp, headErr := cc.request(ctx, requestSpec{Method: http.MethodHead, Path: "/readyz", Retryable: true})
+	if headErr != nil {
+		return headErr
+	}
+	cc.assert("head_status_200", headResp.StatusCode == http.StatusOK, fmt.Sprintf("status=%d", headResp.StatusCode))
 	return nil
 }


@@ -2,9 +2,12 @@ package util
 import (
 	"encoding/json"
+	"regexp"
 	"strings"
 )
+var toolNameLoosePattern = regexp.MustCompile(`[^a-z0-9]+`)
 type ParsedToolCall struct {
 	Name  string         `json:"name"`
 	Input map[string]any `json:"input"`
@@ -89,8 +92,17 @@ func ParseStandaloneToolCallsDetailed(text string, availableToolNames []string)
 func filterToolCallsDetailed(parsed []ParsedToolCall, availableToolNames []string) ([]ParsedToolCall, []string) {
 	allowed := map[string]struct{}{}
+	allowedCanonical := map[string]string{}
 	for _, name := range availableToolNames {
-		allowed[name] = struct{}{}
+		trimmed := strings.TrimSpace(name)
+		if trimmed == "" {
+			continue
+		}
+		allowed[trimmed] = struct{}{}
+		lower := strings.ToLower(trimmed)
+		if _, exists := allowedCanonical[lower]; !exists {
+			allowedCanonical[lower] = trimmed
+		}
 	}
 	if len(allowed) == 0 {
 		rejectedSet := map[string]struct{}{}
@@ -112,10 +124,12 @@ func filterToolCallsDetailed(parsed []ParsedToolCall, availableToolNames []strin
 		if tc.Name == "" {
 			continue
 		}
-		if _, ok := allowed[tc.Name]; !ok {
+		matchedName := resolveAllowedToolName(tc.Name, allowed, allowedCanonical)
+		if matchedName == "" {
 			rejectedSet[tc.Name] = struct{}{}
 			continue
 		}
+		tc.Name = matchedName
 		if tc.Input == nil {
 			tc.Input = map[string]any{}
 		}
@@ -128,6 +142,31 @@ func filterToolCallsDetailed(parsed []ParsedToolCall, availableToolNames []strin
 	return out, rejected
 }
+func resolveAllowedToolName(name string, allowed map[string]struct{}, allowedCanonical map[string]string) string {
+	if _, ok := allowed[name]; ok {
+		return name
+	}
+	lower := strings.ToLower(strings.TrimSpace(name))
+	if canonical, ok := allowedCanonical[lower]; ok {
+		return canonical
+	}
+	if idx := strings.LastIndex(lower, "."); idx >= 0 && idx < len(lower)-1 {
+		if canonical, ok := allowedCanonical[lower[idx+1:]]; ok {
+			return canonical
+		}
+	}
+	loose := toolNameLoosePattern.ReplaceAllString(lower, "")
+	if loose == "" {
+		return ""
+	}
+	for candidateLower, canonical := range allowedCanonical {
+		if toolNameLoosePattern.ReplaceAllString(candidateLower, "") == loose {
+			return canonical
+		}
+	}
+	return ""
+}
 func parseToolCallsPayload(payload string) []ParsedToolCall {
 	var decoded any
 	if err := json.Unmarshal([]byte(payload), &decoded); err != nil {


@@ -46,6 +46,17 @@ func TestParseToolCallsRejectsUnknownToolName(t *testing.T) {
} }
} }
func TestParseToolCallsAllowsCaseInsensitiveToolNameAndCanonicalizes(t *testing.T) {
text := `{"tool_calls":[{"name":"Bash","input":{"command":"ls -al"}}]}`
calls := ParseToolCalls(text, []string{"bash"})
if len(calls) != 1 {
t.Fatalf("expected 1 call, got %#v", calls)
}
if calls[0].Name != "bash" {
t.Fatalf("expected canonical tool name bash, got %q", calls[0].Name)
}
}
func TestParseToolCallsDetailedMarksPolicyRejection(t *testing.T) {
text := `{"tool_calls":[{"name":"unknown","input":{}}]}`
res := ParseToolCallsDetailed(text, []string{"search"})
@@ -104,3 +115,25 @@ func TestParseStandaloneToolCallsIgnoresFencedCodeBlock(t *testing.T) {
t.Fatalf("expected fenced tool_call example to be ignored, got %#v", calls)
}
}
func TestParseToolCallsAllowsQualifiedToolName(t *testing.T) {
text := `{"tool_calls":[{"name":"mcp.search_web","input":{"q":"golang"}}]}`
calls := ParseToolCalls(text, []string{"search_web"})
if len(calls) != 1 {
t.Fatalf("expected 1 call, got %#v", calls)
}
if calls[0].Name != "search_web" {
t.Fatalf("expected canonical tool name search_web, got %q", calls[0].Name)
}
}
func TestParseToolCallsAllowsPunctuationVariantToolName(t *testing.T) {
text := `{"tool_calls":[{"name":"read-file","input":{"path":"README.md"}}]}`
calls := ParseToolCalls(text, []string{"read_file"})
if len(calls) != 1 {
t.Fatalf("expected 1 call, got %#v", calls)
}
if calls[0].Name != "read_file" {
t.Fatalf("expected canonical tool name read_file, got %q", calls[0].Name)
}
}
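The Go tests above pin down four matching tiers: exact name, case-insensitive, dot-qualified suffix (`mcp.search_web`), and punctuation-variant (`read-file`). A behavioral JavaScript sketch of the same resolution ladder — a hypothetical helper, not the repository's implementation; the "loose" pattern is assumed to strip non-alphanumeric characters:

```javascript
// Sketch of the tool-name resolution ladder the tests above exercise.
// Assumption: loose matching strips everything that is not [a-z0-9].
const LOOSE = /[^a-z0-9]+/g;

function resolveAllowedToolName(name, allowedNames) {
  if (allowedNames.includes(name)) return name; // 1. exact match
  const lower = name.trim().toLowerCase();
  const canonical = new Map(allowedNames.map((n) => [n.toLowerCase(), n]));
  if (canonical.has(lower)) return canonical.get(lower); // 2. case-insensitive
  const dot = lower.lastIndexOf('.');
  if (dot >= 0 && dot < lower.length - 1 && canonical.has(lower.slice(dot + 1))) {
    return canonical.get(lower.slice(dot + 1)); // 3. qualified name, e.g. mcp.search_web
  }
  const loose = lower.replace(LOOSE, '');
  if (!loose) return '';
  for (const [candidate, orig] of canonical) {
    if (candidate.replace(LOOSE, '') === loose) return orig; // 4. punctuation variant
  }
  return ''; // unresolved names are rejected by policy
}
```

Unresolved names return the empty string, mirroring how the Go helper signals a policy rejection.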


@@ -16,7 +16,6 @@ internal/js/helpers/stream-tool-sieve.js
internal/js/helpers/stream-tool-sieve/index.js
internal/js/helpers/stream-tool-sieve/state.js
internal/js/helpers/stream-tool-sieve/sieve.js
- internal/js/helpers/stream-tool-sieve/incremental.js
internal/js/helpers/stream-tool-sieve/jsonscan.js
internal/js/helpers/stream-tool-sieve/parse.js
internal/js/helpers/stream-tool-sieve/format.js


@@ -105,7 +105,6 @@ internal/js/helpers/stream-tool-sieve.js
internal/js/helpers/stream-tool-sieve/index.js
internal/js/helpers/stream-tool-sieve/state.js
internal/js/helpers/stream-tool-sieve/sieve.js
- internal/js/helpers/stream-tool-sieve/incremental.js
internal/js/helpers/stream-tool-sieve/jsonscan.js
internal/js/helpers/stream-tool-sieve/parse.js
internal/js/helpers/stream-tool-sieve/format.js

start.mjs (new file, 566 lines)

@@ -0,0 +1,566 @@
#!/usr/bin/env node
/**
* DS2API launcher - interactive menu
*
* Usage:
* node start.mjs # show the interactive menu
* node start.mjs dev # dev mode (backend + frontend hot reload)
* node start.mjs prod # production mode (run the compiled binary)
* node start.mjs build # build the backend binary
* node start.mjs webui # build the frontend static files
* node start.mjs install # install frontend dependencies
* node start.mjs stop # stop all services
* node start.mjs status # show service status
*/
import { spawn, execSync } from 'child_process';
import { createInterface } from 'readline';
import { existsSync } from 'fs';
import { fileURLToPath } from 'url';
import { dirname, join } from 'path';
const __filename = fileURLToPath(import.meta.url);
const __dirname = dirname(__filename);
// Detect Windows
const isWindows = process.platform === 'win32';
// Path of the compiled binary
const BINARY = join(__dirname, isWindows ? 'ds2api.exe' : 'ds2api');
// Configuration (read from env vars, kept consistent with the Go main program)
const CONFIG = {
port: process.env.PORT || '5001',
frontendPort: 5173,
logLevel: process.env.LOG_LEVEL || 'INFO',
adminKey: process.env.DS2API_ADMIN_KEY || 'admin',
webuiDir: join(__dirname, 'webui'),
staticAdminDir: process.env.DS2API_STATIC_ADMIN_DIR || join(__dirname, 'static', 'admin'),
};
// Mirror configuration for mainland-China networks
const MIRRORS = {
goproxy: process.env.GOPROXY || 'https://goproxy.cn,direct',
npm: process.env.NPM_REGISTRY || 'https://registry.npmmirror.com',
};
// Child processes we spawned
const processes = [];
// Colored console output
const colors = {
reset: '\x1b[0m',
bright: '\x1b[1m',
dim: '\x1b[2m',
red: '\x1b[31m',
green: '\x1b[32m',
yellow: '\x1b[33m',
blue: '\x1b[34m',
magenta: '\x1b[35m',
cyan: '\x1b[36m',
};
const log = {
info: (msg) => console.log(`${colors.cyan}[INFO]${colors.reset} ${msg}`),
success: (msg) => console.log(`${colors.green}[OK]${colors.reset} ${msg}`),
warn: (msg) => console.log(`${colors.yellow}[WARN]${colors.reset} ${msg}`),
error: (msg) => console.log(`${colors.red}[ERROR]${colors.reset} ${msg}`),
title: (msg) => console.log(`\n${colors.bright}${colors.magenta}${msg}${colors.reset}`),
};
// Clean up and exit
function cleanup() {
console.log('\n');
log.info('正在关闭所有服务...');
processes.forEach(proc => {
if (proc && !proc.killed) {
proc.kill('SIGTERM');
}
});
log.success('已退出');
process.exit(0);
}
process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);
// Check whether a command exists
function commandExists(cmd) {
try {
execSync(`${isWindows ? 'where' : 'which'} ${cmd}`, { stdio: 'ignore' });
return true;
} catch {
return false;
}
}
// Check whether Go is installed
function checkGo() {
return commandExists('go');
}
// Get the Go version
function getGoVersion() {
try {
return execSync('go version', { encoding: 'utf-8' }).trim();
} catch {
return null;
}
}
// Check whether frontend dependencies are installed
function checkFrontendDeps() {
if (!existsSync(CONFIG.webuiDir)) return null;
return existsSync(join(CONFIG.webuiDir, 'node_modules'));
}
// Check whether the frontend has been built
function checkWebuiBuilt() {
return existsSync(join(CONFIG.staticAdminDir, 'index.html'));
}
// Check whether the backend binary exists
function binaryExists() {
return existsSync(BINARY);
}
// Find PIDs of processes listening on a port
function findPidByPort(port) {
try {
if (isWindows) {
const output = execSync(`netstat -ano | findstr :${port} | findstr LISTENING`, {
encoding: 'utf-8',
shell: true,
stdio: ['pipe', 'pipe', 'ignore'],
});
const pids = new Set();
for (const line of output.trim().split('\n')) {
const parts = line.trim().split(/\s+/);
const pid = parts[parts.length - 1];
if (pid && pid !== '0') pids.add(pid);
}
return [...pids];
} else {
const output = execSync(`lsof -ti :${port}`, {
encoding: 'utf-8',
stdio: ['pipe', 'pipe', 'ignore'],
});
return output.trim().split('\n').filter(Boolean);
}
} catch {
return [];
}
}
// Get the status of running services
function getRunningStatus() {
const backendPids = findPidByPort(CONFIG.port);
const frontendPids = findPidByPort(CONFIG.frontendPort);
return {
backend: backendPids,
frontend: frontendPids,
isRunning: backendPids.length > 0 || frontendPids.length > 0,
};
}
// Stop services
async function stopServices() {
const running = getRunningStatus();
if (!running.isRunning) {
log.warn('没有检测到正在运行的服务');
return;
}
log.title('========== 停止服务 ==========');
const killProcess = async (pid) => {
try {
if (isWindows) {
try {
execSync(`taskkill /PID ${pid}`, { stdio: 'ignore', shell: true });
} catch {
execSync(`taskkill /F /T /PID ${pid}`, { stdio: 'ignore', shell: true });
}
} else {
execSync(`kill -15 ${pid}`, { stdio: 'ignore' });
await new Promise(r => setTimeout(r, 500));
try {
execSync(`kill -0 ${pid}`, { stdio: 'ignore' });
execSync(`kill -9 ${pid}`, { stdio: 'ignore' });
} catch { /* process already exited */ }
}
} catch { /* process may have already exited */ }
};
if (running.backend.length > 0) {
log.info(`停止后端服务 (端口 ${CONFIG.port}, PID: ${running.backend.join(', ')})...`);
for (const pid of running.backend) await killProcess(pid);
log.success('后端服务已停止');
}
if (running.frontend.length > 0) {
log.info(`停止前端服务 (端口 ${CONFIG.frontendPort}, PID: ${running.frontend.join(', ')})...`);
for (const pid of running.frontend) await killProcess(pid);
log.success('前端服务已停止');
}
}
// Install frontend dependencies
async function installFrontendDeps() {
if (!existsSync(CONFIG.webuiDir)) {
log.warn('webui 目录不存在,跳过前端依赖安装');
return;
}
log.info(`安装前端依赖 (npm ci, registry: ${MIRRORS.npm})...`);
return new Promise((resolve, reject) => {
const proc = spawn('npm', ['ci', '--registry', MIRRORS.npm], {
cwd: CONFIG.webuiDir,
stdio: 'inherit',
shell: true,
});
proc.on('close', code => code === 0 ? resolve() : reject(new Error('前端依赖安装失败')));
});
}
// Ensure frontend dependencies are installed
async function ensureFrontendDeps() {
if (checkFrontendDeps() === false) {
log.warn('检测到前端依赖未安装,正在安装...');
await installFrontendDeps();
}
}
// Build the backend binary
async function buildBackend() {
if (!checkGo()) throw new Error('未找到 Go,请先安装 Go (https://go.dev/dl/)');
log.info(`编译后端二进制 (GOPROXY: ${MIRRORS.goproxy})...`);
return new Promise((resolve, reject) => {
const proc = spawn('go', ['build', '-o', BINARY, './cmd/ds2api'], {
cwd: __dirname,
stdio: 'inherit',
shell: true,
env: { ...process.env, GOPROXY: MIRRORS.goproxy },
});
proc.on('close', code => code === 0 ? resolve() : reject(new Error('后端编译失败')));
});
}
// Build the frontend static files
async function buildWebui() {
if (!existsSync(CONFIG.webuiDir)) {
log.warn('webui 目录不存在');
return;
}
await ensureFrontendDeps();
log.info('构建前端静态文件...');
return new Promise((resolve, reject) => {
const proc = spawn(
'npm', ['run', 'build', '--', '--outDir', CONFIG.staticAdminDir, '--emptyOutDir'],
{ cwd: CONFIG.webuiDir, stdio: 'inherit', shell: true }
);
proc.on('close', code => code === 0 ? resolve() : reject(new Error('前端构建失败')));
});
}
// Start the backend in dev mode (go run, no pre-build needed)
async function startBackendDev() {
if (!checkGo()) throw new Error('未找到 Go,请先安装 Go (https://go.dev/dl/)');
log.info(`启动后端(go run)... http://localhost:${CONFIG.port}`);
const proc = spawn('go', ['run', './cmd/ds2api'], {
cwd: __dirname,
stdio: 'inherit',
shell: true,
env: {
...process.env,
PORT: CONFIG.port,
LOG_LEVEL: CONFIG.logLevel,
DS2API_ADMIN_KEY: CONFIG.adminKey,
GOPROXY: MIRRORS.goproxy,
},
});
processes.push(proc);
return proc;
}
// Start the backend (production mode: run the compiled binary)
async function startBackendProd() {
if (!binaryExists()) {
log.warn('未找到编译产物,正在编译...');
await buildBackend();
}
log.info(`启动后端(二进制)... http://localhost:${CONFIG.port}`);
const proc = spawn(BINARY, [], {
cwd: __dirname,
stdio: 'inherit',
shell: false,
env: {
...process.env,
PORT: CONFIG.port,
LOG_LEVEL: CONFIG.logLevel,
DS2API_ADMIN_KEY: CONFIG.adminKey,
},
});
processes.push(proc);
return proc;
}
// Start the frontend dev server
async function startFrontend() {
if (!existsSync(CONFIG.webuiDir)) {
log.warn('webui 目录不存在,跳过前端启动');
return null;
}
await ensureFrontendDeps();
log.info(`启动前端开发服务器... http://localhost:${CONFIG.frontendPort}`);
const proc = spawn('npm', ['run', 'dev'], {
cwd: CONFIG.webuiDir,
stdio: 'inherit',
shell: true,
});
processes.push(proc);
return proc;
}
// Show status info
function showStatus() {
console.log('\n' + '─'.repeat(50));
log.success(`后端 API: http://localhost:${CONFIG.port}`);
log.success(`管理界面: http://localhost:${CONFIG.port}/admin`);
if (existsSync(CONFIG.webuiDir)) {
log.success(`前端 Dev: http://localhost:${CONFIG.frontendPort}`);
}
console.log('─'.repeat(50));
log.info('按 Ctrl+C 停止所有服务\n');
}
// Wait for child processes to exit
function waitForProcesses() {
return new Promise(resolve => {
const check = setInterval(() => {
const activeCount = processes.filter(proc => proc.exitCode === null && proc.signalCode === null).length;
if (activeCount === 0) {
clearInterval(check);
resolve();
}
}, 1000);
});
}
// Interactive menu
async function showMenu() {
const rl = createInterface({ input: process.stdin, output: process.stdout });
const question = (prompt) => new Promise(resolve => rl.question(prompt, resolve));
console.clear();
log.title('╔══════════════════════════════════════════╗');
log.title('║ DS2API 启动脚本 (Go) ║');
log.title('╚══════════════════════════════════════════╝');
// Environment status
const goVersion = getGoVersion();
const frontendDeps = checkFrontendDeps();
const webuiBuilt = checkWebuiBuilt();
const hasBinary = binaryExists();
const running = getRunningStatus();
const ok = (v) => v ? `${colors.green}✓${colors.reset}` : `${colors.yellow}✗${colors.reset}`;
console.log(`\n${colors.bright}环境状态:${colors.reset}`);
console.log(` Go: ${goVersion ? `${colors.green}${goVersion}${colors.reset}` : `${colors.red}未安装${colors.reset}`}`);
console.log(` 前端依赖: ${frontendDeps === null ? `${colors.dim}N/A${colors.reset}` : frontendDeps ? `${colors.green}已安装${colors.reset}` : `${colors.yellow}未安装${colors.reset}`}`);
console.log(` 前端构建: ${ok(webuiBuilt)} ${webuiBuilt ? `(${CONFIG.staticAdminDir})` : '未构建'}`);
console.log(` 后端二进制: ${ok(hasBinary)} ${hasBinary ? BINARY : '未编译'}`);
console.log(`\n${colors.bright}服务状态:${colors.reset}`);
console.log(` 后端 (:${CONFIG.port}): ${running.backend.length > 0 ? `${colors.green}运行中${colors.reset} (PID: ${running.backend.join(', ')})` : `${colors.dim}未运行${colors.reset}`}`);
console.log(` 前端 (:${CONFIG.frontendPort}): ${running.frontend.length > 0 ? `${colors.green}运行中${colors.reset} (PID: ${running.frontend.join(', ')})` : `${colors.dim}未运行${colors.reset}`}`);
console.log(`\n${colors.bright}环境变量:${colors.reset}`);
console.log(` PORT: ${colors.cyan}${CONFIG.port}${colors.reset}`);
console.log(` LOG_LEVEL: ${colors.cyan}${CONFIG.logLevel}${colors.reset}`);
console.log(` DS2API_ADMIN_KEY: ${colors.cyan}${CONFIG.adminKey}${colors.reset}`);
console.log(` GOPROXY: ${colors.cyan}${MIRRORS.goproxy}${colors.reset}`);
console.log(` NPM_REGISTRY: ${colors.cyan}${MIRRORS.npm}${colors.reset}`);
console.log(`${colors.dim} 自定义: DS2API_ADMIN_KEY=密钥 PORT=5001 node start.mjs${colors.reset}`);
console.log(`
${colors.bright}请选择操作:${colors.reset}
${colors.cyan}1.${colors.reset} 开发模式 (go run + 前端热重载)
${colors.cyan}2.${colors.reset} 仅后端 (go run,无需编译)
${colors.cyan}3.${colors.reset} 仅前端 (npm dev)
${colors.cyan}4.${colors.reset} 生产模式 (编译后运行,前端已嵌入)
${colors.cyan}5.${colors.reset} 编译后端 (go build)
${colors.cyan}6.${colors.reset} 构建前端 (npm build → static/admin)
${colors.cyan}7.${colors.reset} 安装前端依赖 (npm ci)
${colors.red}8.${colors.reset} 停止所有服务
${colors.cyan}0.${colors.reset} 退出
`);
const choice = await question(`${colors.yellow}请输入选项 [1]: ${colors.reset}`);
rl.close();
switch (choice.trim() || '1') {
case '1':
log.title('========== 开发模式 ==========');
await startBackendDev();
await new Promise(r => setTimeout(r, 1500));
await startFrontend();
showStatus();
await waitForProcesses();
break;
case '2':
log.title('========== 仅后端 (go run) ==========');
await startBackendDev();
showStatus();
await waitForProcesses();
break;
case '3':
log.title('========== 仅前端 ==========');
await startFrontend();
showStatus();
await waitForProcesses();
break;
case '4':
log.title('========== 生产模式 ==========');
await startBackendProd();
showStatus();
await waitForProcesses();
break;
case '5':
log.title('========== 编译后端 ==========');
await buildBackend();
log.success(`编译完成:${BINARY}`);
break;
case '6':
log.title('========== 构建前端 ==========');
await buildWebui();
log.success('前端构建完成!');
break;
case '7':
log.title('========== 安装前端依赖 ==========');
await installFrontendDeps();
log.success('前端依赖安装完成!');
break;
case '8':
await stopServices();
break;
case '0':
log.info('再见!');
process.exit(0);
break;
default:
log.warn('无效选项');
await showMenu();
}
}
// Command-line argument handling
async function main() {
const cmd = process.argv[2];
if (!checkGo() && !['install', 'webui', 'stop', 'status', 'help', '-h', '--help'].includes(cmd)) {
log.error('未找到 Go,请先安装 Go: https://go.dev/dl/');
if (!cmd) {
// No Go: still allow entering the menu (frontend-only actions remain usable)
} else {
process.exit(1);
}
}
switch (cmd) {
case 'dev':
log.title('========== 开发模式 ==========');
await startBackendDev();
await new Promise(r => setTimeout(r, 1500));
await startFrontend();
showStatus();
await waitForProcesses();
break;
case 'prod':
log.title('========== 生产模式 ==========');
await startBackendProd();
showStatus();
await waitForProcesses();
break;
case 'build':
await buildBackend();
log.success(`编译完成:${BINARY}`);
break;
case 'webui':
await buildWebui();
log.success('前端构建完成!');
break;
case 'install':
await installFrontendDeps();
log.success('前端依赖安装完成!');
break;
case 'stop':
await stopServices();
break;
case 'status': {
const status = getRunningStatus();
const goVer = getGoVersion();
console.log(`\n${colors.bright}环境:${colors.reset}`);
console.log(` Go: ${goVer || `${colors.red}未安装${colors.reset}`}`);
console.log(`\n${colors.bright}服务状态:${colors.reset}`);
console.log(` 后端 (:${CONFIG.port}): ${status.backend.length > 0 ? `${colors.green}运行中${colors.reset} (PID: ${status.backend.join(', ')})` : `${colors.dim}未运行${colors.reset}`}`);
console.log(` 前端 (:${CONFIG.frontendPort}): ${status.frontend.length > 0 ? `${colors.green}运行中${colors.reset} (PID: ${status.frontend.join(', ')})` : `${colors.dim}未运行${colors.reset}`}\n`);
break;
}
case 'help':
case '-h':
case '--help':
console.log(`
${colors.bright}DS2API 启动脚本 (Go)${colors.reset}
${colors.cyan}使用方法:${colors.reset}
node start.mjs 显示交互式菜单
node start.mjs dev 开发模式 (go run + 前端热重载)
node start.mjs prod 生产模式 (编译产物,前端已嵌入)
node start.mjs build 编译后端二进制 (go build)
node start.mjs webui 构建前端静态文件
node start.mjs install 安装前端依赖 (npm ci)
node start.mjs stop 停止所有服务
node start.mjs status 查看服务状态
${colors.cyan}常用环境变量:${colors.reset}
PORT 后端端口 (默认: 5001)
LOG_LEVEL 日志级别: DEBUG|INFO|WARN|ERROR (默认: INFO)
DS2API_ADMIN_KEY 管理员密钥 (默认: admin)
DS2API_CONFIG_PATH 配置文件路径 (默认: config.json)
GOPROXY Go 模块代理 (默认: https://goproxy.cn,direct)
NPM_REGISTRY npm 镜像源 (默认: https://registry.npmmirror.com)
${colors.cyan}示例:${colors.reset}
DS2API_ADMIN_KEY=mykey PORT=8080 node start.mjs dev
GOPROXY=off NPM_REGISTRY=https://registry.npmjs.org node start.mjs dev
`);
break;
default:
await showMenu();
}
}
main().catch(e => {
log.error(e.message);
process.exit(1);
});
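`findPidByPort` above shells out to `netstat -ano` on Windows and keeps the last column of each LISTENING row as the PID. That parsing step can be isolated and unit-tested without a live socket — the sample output in the test is illustrative, not captured from a real machine:

```javascript
// PID extraction from `netstat -ano`-style output, isolated from the
// Windows branch of findPidByPort above for testability.
function pidsFromNetstat(output) {
  const pids = new Set();
  for (const line of output.trim().split('\n')) {
    const parts = line.trim().split(/\s+/);
    const pid = parts[parts.length - 1];
    if (pid && pid !== '0') pids.add(pid); // PID 0 is the idle/system pseudo-process
  }
  return [...pids]; // the Set de-duplicates IPv4/IPv6 listeners sharing one PID
}
```

The same shape works for the POSIX branch, where `lsof -ti :PORT` already emits one PID per line.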


@@ -0,0 +1,3 @@
{
"calls": []
}


@@ -0,0 +1,10 @@
{
"calls": [
{
"name": "read_file",
"input": {
"path": "README.MD"
}
}
]
}


@@ -0,0 +1,3 @@
{
"calls": []
}


@@ -0,0 +1,3 @@
{
"calls": []
}


@@ -0,0 +1,10 @@
{
"calls": [
{
"name": "read_file",
"input": {
"path": "README.MD"
}
}
]
}


@@ -0,0 +1,4 @@
{
"text": "{\"tool_calls\":[{\"name\":\"unknown_tool\",\"input\":{\"x\":1}}]}",
"tool_names": []
}


@@ -0,0 +1,4 @@
{
"text": "{\"tool_calls\":[{\"name\":\"Read_File\",\"input\":{\"path\":\"README.MD\"}}]}",
"tool_names": ["read_file"]
}


@@ -0,0 +1,5 @@
{
"mode": "standalone",
"text": "```json\n{\"tool_calls\":[{\"name\":\"read_file\",\"input\":{\"path\":\"README.MD\"}}]}\n```",
"tool_names": ["read_file"]
}


@@ -0,0 +1,5 @@
{
"mode": "standalone",
"text": "下面是示例:{\"tool_calls\":[{\"name\":\"read_file\",\"input\":{\"path\":\"README.MD\"}}]}请勿执行。",
"tool_names": ["read_file"]
}


@@ -0,0 +1,5 @@
{
"mode": "standalone",
"text": "{\"tool_calls\":[{\"name\":\"read_file\",\"input\":{\"path\":\"README.MD\"}}]}",
"tool_names": ["read_file"]
}


@@ -13,8 +13,10 @@ const {
const {
parseChunkForContent,
resolveToolcallPolicy,
+ formatIncrementalToolCallDeltas,
normalizePreparedToolNames,
boolDefaultTrue,
+ filterIncrementalToolCallDeltasByAllowed,
} = handler.__test;
test('chat-stream exposes parser test hooks', () => {
@@ -56,6 +58,46 @@ test('boolDefaultTrue keeps false only when explicitly false', () => {
assert.equal(boolDefaultTrue(undefined), true);
});
test('filterIncrementalToolCallDeltasByAllowed blocks unknown name and follow-up args', () => {
const seen = new Map();
const filtered = filterIncrementalToolCallDeltasByAllowed(
[
{ index: 0, name: 'not_in_schema' },
{ index: 0, arguments: '{"x":1}' },
],
['read_file'],
seen,
);
assert.deepEqual(filtered, []);
assert.equal(seen.get(0), '__blocked__');
});
test('filterIncrementalToolCallDeltasByAllowed keeps allowed name and args', () => {
const seen = new Map();
const filtered = filterIncrementalToolCallDeltasByAllowed(
[
{ index: 0, name: 'read_file' },
{ index: 0, arguments: '{"path":"README.MD"}' },
],
['read_file'],
seen,
);
assert.deepEqual(filtered, [
{ index: 0, name: 'read_file' },
{ index: 0, arguments: '{"path":"README.MD"}' },
]);
});
test('incremental and final tool formatting share stable id via idStore', () => {
const idStore = new Map();
const incremental = formatIncrementalToolCallDeltas([{ index: 0, name: 'read_file' }], idStore);
const { formatOpenAIStreamToolCalls } = require('../../internal/js/helpers/stream-tool-sieve.js');
const finalCalls = formatOpenAIStreamToolCalls([{ name: 'read_file', input: { path: 'README.MD' } }], idStore);
assert.equal(incremental.length, 1);
assert.equal(finalCalls.length, 1);
assert.equal(incremental[0].id, finalCalls[0].id);
});
test('parseChunkForContent keeps split response/content fragments inside response array', () => {
const chunk = {
p: 'response',


@@ -6,7 +6,7 @@ const fs = require('node:fs');
const path = require('node:path');
const chatStream = require('../../api/chat-stream.js');
- const { parseToolCalls } = require('../../internal/js/helpers/stream-tool-sieve.js');
+ const { parseToolCalls, parseStandaloneToolCalls } = require('../../internal/js/helpers/stream-tool-sieve.js');
const { parseChunkForContent, estimateTokens } = chatStream.__test;
@@ -41,12 +41,14 @@ test('js compat: toolcall fixtures', () => {
for (const file of files) {
const name = file.replace(/\.json$/i, '');
const fixture = readJSON(path.join(fixtureDir, file));
const expected = readJSON(path.join(expectedDir, `toolcalls_${name}.json`));
- const got = parseToolCalls(fixture.text, fixture.tool_names || []);
- assert.deepEqual(got, expected.calls, `${name}: calls mismatch`);
+ const mode = typeof fixture.mode === 'string' ? fixture.mode.trim().toLowerCase() : '';
+ const parser = mode === 'standalone' ? parseStandaloneToolCalls : parseToolCalls;
+ const got = parser(fixture.text, fixture.tool_names || []);
+ assert.deepEqual(got, expected.calls, `${name}: calls mismatch`);
}
});
test('js compat: token fixtures', () => {
const fixture = readJSON(path.join(compatRoot, 'fixtures', 'token_cases.json'));


@@ -9,7 +9,9 @@ const {
processToolSieveChunk,
flushToolSieve,
parseToolCalls,
+ parseToolCallsDetailed,
parseStandaloneToolCalls,
+ formatOpenAIStreamToolCalls,
} = require('../../internal/js/helpers/stream-tool-sieve.js');
function runSieve(chunks, toolNames) {
@@ -52,13 +54,33 @@ test('parseToolCalls keeps non-object argument strings as _raw (Go parity)', ()
]);
});
- test('parseToolCalls still intercepts unknown schema names to avoid leaks', () => {
+ test('parseToolCalls drops unknown schema names when toolNames is provided', () => {
const payload = JSON.stringify({
tool_calls: [{ name: 'not_in_schema', input: { q: 'go' } }],
});
const calls = parseToolCalls(payload, ['search']);
- assert.equal(calls.length, 1);
- assert.equal(calls[0].name, 'not_in_schema');
+ assert.equal(calls.length, 0);
});
test('parseToolCalls matches tool name case-insensitively and canonicalizes', () => {
const payload = JSON.stringify({
tool_calls: [{ name: 'Read_File', input: { path: 'README.MD' } }],
});
const calls = parseToolCalls(payload, ['read_file']);
assert.deepEqual(calls, [{ name: 'read_file', input: { path: 'README.MD' } }]);
});
test('parseToolCalls rejects all names when toolNames is empty (Go strict parity)', () => {
const payload = JSON.stringify({
tool_calls: [{ name: 'not_in_schema', input: { q: 'go' } }],
});
const calls = parseToolCalls(payload, []);
assert.equal(calls.length, 0);
const detailed = parseToolCallsDetailed(payload, []);
assert.equal(detailed.sawToolCallSyntax, true);
assert.equal(detailed.rejectedByPolicy, true);
assert.deepEqual(detailed.rejectedToolNames, ['not_in_schema']);
});
test('parseToolCalls supports fenced json and function.arguments string payload', () => {
@@ -87,7 +109,7 @@ test('parseStandaloneToolCalls ignores fenced code block tool_call examples', ()
assert.equal(calls.length, 0);
});
- test('sieve emits tool_calls and does not leak suspicious prefix on late key convergence', () => {
+ test('sieve keeps late key convergence payload as plain text in strict mode', () => {
const events = runSieve(
[
'{"',
@@ -99,9 +121,9 @@ test('sieve emits tool_calls and does not leak suspicious prefix on late key con
const leakedText = collectText(events);
const hasToolCall = events.some((evt) => evt.type === 'tool_calls' && Array.isArray(evt.calls) && evt.calls.length > 0);
const hasToolDelta = events.some((evt) => evt.type === 'tool_call_deltas' && Array.isArray(evt.deltas) && evt.deltas.length > 0);
- assert.equal(hasToolCall || hasToolDelta, true);
- assert.equal(leakedText.includes('{'), false);
- assert.equal(leakedText.toLowerCase().includes('tool_calls'), false);
+ assert.equal(hasToolCall || hasToolDelta, false);
+ assert.equal(leakedText.includes('{'), true);
+ assert.equal(leakedText.toLowerCase().includes('tool_calls'), true);
assert.equal(leakedText.includes('后置正文C。'), true);
});
@@ -133,6 +155,20 @@ test('sieve flushes incomplete captured tool json as text on stream finalize', (
assert.equal(leakedText.includes('{'), true);
});
test('sieve still intercepts large tool json payloads over previous capture limit', () => {
const large = 'a'.repeat(9000);
const payload = `{"tool_calls":[{"name":"read_file","input":{"path":"${large}"}}]}`;
const events = runSieve(
[payload.slice(0, 3000), payload.slice(3000, 7000), payload.slice(7000)],
['read_file'],
);
const leakedText = collectText(events);
const hasToolCall = events.some((evt) => evt.type === 'tool_calls' && evt.calls?.length > 0);
const hasToolDelta = events.some((evt) => evt.type === 'tool_call_deltas' && evt.deltas?.length > 0);
assert.equal(hasToolCall || hasToolDelta, true);
assert.equal(leakedText.toLowerCase().includes('tool_calls'), false);
});
test('sieve keeps plain text intact in tool mode when no tool call appears', () => {
const events = runSieve(
['你好,', '这是普通文本回复。', '请继续。'],
@@ -144,7 +180,21 @@ test('sieve keeps plain text intact in tool mode when no tool call appears', ()
assert.equal(leakedText, '你好,这是普通文本回复。请继续。');
});
- test('sieve emits incremental tool_call_deltas for split arguments payload', () => {
+ test('sieve intercepts rejected unknown tool payload (no args) without raw leak', () => {
const events = runSieve(
['{"tool_calls":[{"name":"not_in_schema"}]}', '后置正文G。'],
['read_file'],
);
const leakedText = collectText(events);
const hasToolCall = events.some((evt) => evt.type === 'tool_calls' && Array.isArray(evt.calls) && evt.calls.length > 0);
const hasToolDelta = events.some((evt) => evt.type === 'tool_call_deltas' && Array.isArray(evt.deltas) && evt.deltas.length > 0);
assert.equal(hasToolCall, false);
assert.equal(hasToolDelta, false);
assert.equal(leakedText.toLowerCase().includes('tool_calls'), false);
assert.equal(leakedText.includes('后置正文G。'), true);
});
test('sieve emits final tool_calls for split arguments payload without incremental deltas', () => {
const state = createToolSieveState();
const first = processToolSieveChunk(
state,
@@ -159,37 +209,43 @@ test('sieve emits incremental tool_call_deltas for split arguments payload', ()
const tail = flushToolSieve(state, ['read_file']);
const events = [...first, ...second, ...tail];
const deltaEvents = events.filter((evt) => evt.type === 'tool_call_deltas');
- assert.equal(deltaEvents.length > 0, true);
- const merged = deltaEvents.flatMap((evt) => evt.deltas || []);
- const hasName = merged.some((d) => d.name === 'read_file');
- const argsJoined = merged
-   .map((d) => d.arguments || '')
-   .join('');
- assert.equal(hasName, true);
- assert.equal(argsJoined.includes('"path":"README.MD"'), true);
- assert.equal(argsJoined.includes('"mode":"head"'), true);
+ assert.equal(deltaEvents.length, 0);
+ const finalCalls = events.filter((evt) => evt.type === 'tool_calls').flatMap((evt) => evt.calls || []);
+ assert.equal(finalCalls.length, 1);
+ assert.equal(finalCalls[0].name, 'read_file');
+ assert.deepEqual(finalCalls[0].input, { path: 'README.MD', mode: 'head' });
});
- test('sieve still intercepts tool call after leading plain text without suffix', () => {
+ test('sieve keeps tool json as text when leading prose exists (strict mode)', () => {
const events = runSieve(
['我将调用工具。', '{"tool_calls":[{"name":"read_file","input":{"path":"README.MD"}}]}'],
['read_file'],
);
const hasTool = events.some((evt) => (evt.type === 'tool_calls' && evt.calls?.length > 0) || (evt.type === 'tool_call_deltas' && evt.deltas?.length > 0));
const leakedText = collectText(events);
- assert.equal(hasTool, true);
+ assert.equal(hasTool, false);
assert.equal(leakedText.includes('我将调用工具。'), true);
- assert.equal(leakedText.toLowerCase().includes('tool_calls'), false);
+ assert.equal(leakedText.toLowerCase().includes('tool_calls'), true);
});
- test('sieve intercepts tool call and preserves trailing same-chunk text', () => {
+ test('sieve keeps same-chunk trailing prose payload as text in strict mode', () => {
const events = runSieve(
['{"tool_calls":[{"name":"read_file","input":{"path":"README.MD"}}]}然后继续解释。'],
['read_file'],
);
const hasTool = events.some((evt) => (evt.type === 'tool_calls' && evt.calls?.length > 0) || (evt.type === 'tool_call_deltas' && evt.deltas?.length > 0));
const leakedText = collectText(events);
- assert.equal(hasTool, true);
+ assert.equal(hasTool, false);
assert.equal(leakedText.includes('然后继续解释。'), true);
- assert.equal(leakedText.toLowerCase().includes('tool_calls'), false);
+ assert.equal(leakedText.toLowerCase().includes('tool_calls'), true);
});
test('formatOpenAIStreamToolCalls reuses ids with the same idStore', () => {
const idStore = new Map();
const calls = [{ name: 'read_file', input: { path: 'README.MD' } }];
const first = formatOpenAIStreamToolCalls(calls, idStore);
const second = formatOpenAIStreamToolCalls(calls, idStore);
assert.equal(first.length, 1);
assert.equal(second.length, 1);
assert.equal(first[0].id, second[0].id);
}); });

View File

@@ -5,4 +5,18 @@ ROOT_DIR="$(cd "$(dirname "$0")/../.." && pwd)"
 cd "$ROOT_DIR"

 ./tests/scripts/check-node-split-syntax.sh
-node --test tests/node/stream-tool-sieve.test.js tests/node/chat-stream.test.js tests/node/js_compat_test.js "$@"
+
+# Keep Node's file-level test scheduling serial to avoid intermittent cross-file
+# interference when multiple suites import mutable module singletons.
+NODE_TEST_LOG="$(mktemp)"
+cleanup() {
+  rm -f "$NODE_TEST_LOG"
+}
+trap cleanup EXIT
+
+if ! node --test --test-concurrency=1 tests/node/stream-tool-sieve.test.js tests/node/chat-stream.test.js tests/node/js_compat_test.js "$@" 2>&1 | tee "$NODE_TEST_LOG"; then
+  echo
+  echo "[run-unit-node] Node tests failed. 失败摘要如下:"
+  rg -n "^(not ok|# fail)|ERR_TEST_FAILURE" "$NODE_TEST_LOG" || true
+  exit 1
+fi

View File

@@ -24,9 +24,8 @@
     <meta name="apple-mobile-web-app-status-bar-style" content="black-translucent" />
     <meta name="apple-mobile-web-app-title" content="DS2API" />
-    <!-- Favicon - using data URI for orange-yellow gradient icon -->
-    <link rel="icon" type="image/svg+xml"
-      href="data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 100 100'%3E%3Cdefs%3E%3ClinearGradient id='g' x1='0%25' y1='0%25' x2='100%25' y2='100%25'%3E%3Cstop offset='0%25' stop-color='%23f59e0b'/%3E%3Cstop offset='100%25' stop-color='%23ef4444'/%3E%3C/linearGradient%3E%3C/defs%3E%3Crect rx='20' width='100' height='100' fill='url(%23g)'/%3E%3Ctext x='50' y='68' font-family='Arial,sans-serif' font-size='48' font-weight='bold' fill='white' text-anchor='middle'%3EDS%3C/text%3E%3C/svg%3E" />
+    <!-- Favicon -->
+    <link rel="icon" type="image/svg+xml" href="/ds2api-favicon.svg" />
     <!-- Fonts -->
     <link rel="preconnect" href="https://fonts.googleapis.com">

View File

@@ -0,0 +1,20 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100" role="img" aria-label="DS2API icon">
<defs>
<linearGradient id="g" x1="0%" y1="0%" x2="100%" y2="100%">
<stop offset="0%" stop-color="#f59e0b" />
<stop offset="100%" stop-color="#ef4444" />
</linearGradient>
</defs>
<rect width="100" height="100" rx="20" fill="url(#g)" />
<text
x="50"
y="68"
text-anchor="middle"
font-family="Arial,sans-serif"
font-size="48"
font-weight="700"
fill="#ffffff"
>
DS
</text>
</svg>


View File

@@ -1,113 +1,121 @@
 import { useI18n } from '../../i18n'
 import { useAccountsData } from './useAccountsData'
 import { useAccountActions } from './useAccountActions'
 import QueueCards from './QueueCards'
 import ApiKeysPanel from './ApiKeysPanel'
 import AccountsTable from './AccountsTable'
 import AddKeyModal from './AddKeyModal'
 import AddAccountModal from './AddAccountModal'

 export default function AccountManagerContainer({ config, onRefresh, onMessage, authFetch }) {
   const { t } = useI18n()
   const apiFetch = authFetch || fetch

   const {
     queueStatus,
     keysExpanded,
     setKeysExpanded,
     accounts,
     page,
+    pageSize,
     totalPages,
     totalAccounts,
     loadingAccounts,
     fetchAccounts,
+    changePageSize,
     resolveAccountIdentifier,
+    searchQuery,
+    handleSearchChange,
   } = useAccountsData({ apiFetch })

   const {
     showAddKey,
     setShowAddKey,
     showAddAccount,
     setShowAddAccount,
     newKey,
     setNewKey,
     copiedKey,
     setCopiedKey,
     newAccount,
     setNewAccount,
     loading,
     testing,
     testingAll,
     batchProgress,
     addKey,
     deleteKey,
     addAccount,
     deleteAccount,
     testAccount,
     testAllAccounts,
   } = useAccountActions({
     apiFetch,
     t,
     onMessage,
     onRefresh,
     config,
     fetchAccounts,
     resolveAccountIdentifier,
   })

   return (
     <div className="space-y-6">
       <QueueCards queueStatus={queueStatus} t={t} />

       <ApiKeysPanel
         t={t}
         config={config}
         keysExpanded={keysExpanded}
         setKeysExpanded={setKeysExpanded}
         setShowAddKey={setShowAddKey}
         copiedKey={copiedKey}
         setCopiedKey={setCopiedKey}
         onDeleteKey={deleteKey}
       />

       <AccountsTable
         t={t}
         accounts={accounts}
         loadingAccounts={loadingAccounts}
         testing={testing}
         testingAll={testingAll}
         batchProgress={batchProgress}
         totalAccounts={totalAccounts}
         page={page}
+        pageSize={pageSize}
         totalPages={totalPages}
         resolveAccountIdentifier={resolveAccountIdentifier}
         onTestAll={testAllAccounts}
         onShowAddAccount={() => setShowAddAccount(true)}
         onTestAccount={testAccount}
         onDeleteAccount={deleteAccount}
         onPrevPage={() => fetchAccounts(page - 1)}
         onNextPage={() => fetchAccounts(page + 1)}
+        onPageSizeChange={changePageSize}
+        searchQuery={searchQuery}
+        onSearchChange={handleSearchChange}
       />

       <AddKeyModal
         show={showAddKey}
         t={t}
         newKey={newKey}
         setNewKey={setNewKey}
         loading={loading}
         onClose={() => setShowAddKey(false)}
         onAdd={addKey}
       />

       <AddAccountModal
         show={showAddAccount}
         t={t}
         newAccount={newAccount}
         setNewAccount={setNewAccount}
         loading={loading}
         onClose={() => setShowAddAccount(false)}
         onAdd={addAccount}
       />
     </div>
   )
 }

View File

@@ -1,149 +1,191 @@
-import { ChevronLeft, ChevronRight, Play, Plus, Trash2 } from 'lucide-react'
+import { useState } from 'react'
+import { ChevronLeft, ChevronRight, Check, Copy, Play, Plus, Trash2 } from 'lucide-react'
 import clsx from 'clsx'

 export default function AccountsTable({
   t,
   accounts,
   loadingAccounts,
   testing,
   testingAll,
   batchProgress,
   totalAccounts,
   page,
+  pageSize,
   totalPages,
   resolveAccountIdentifier,
   onTestAll,
   onShowAddAccount,
   onTestAccount,
   onDeleteAccount,
   onPrevPage,
   onNextPage,
+  onPageSizeChange,
+  searchQuery,
+  onSearchChange,
 }) {
+  const [copiedId, setCopiedId] = useState(null)
+
+  const copyId = (id) => {
+    navigator.clipboard.writeText(id).then(() => {
+      setCopiedId(id)
+      setTimeout(() => setCopiedId(null), 1500)
+    })
+  }
+
   return (
     <div className="bg-card border border-border rounded-xl overflow-hidden shadow-sm">
       <div className="p-6 border-b border-border flex flex-col md:flex-row md:items-center justify-between gap-4">
         <div>
           <h2 className="text-lg font-semibold">{t('accountManager.accountsTitle')}</h2>
           <p className="text-sm text-muted-foreground">{t('accountManager.accountsDesc')}</p>
         </div>
         <div className="flex flex-wrap gap-2">
+          <input
+            type="text"
+            value={searchQuery}
+            onChange={e => onSearchChange(e.target.value)}
+            placeholder={t('accountManager.searchPlaceholder')}
+            className="px-3 py-1.5 text-sm bg-muted border border-border rounded-lg focus:outline-none focus:ring-1 focus:ring-ring placeholder:text-muted-foreground"
+          />
           <button
             onClick={onTestAll}
             disabled={testingAll || totalAccounts === 0}
             className="flex items-center px-3 py-2 bg-secondary text-secondary-foreground rounded-lg hover:bg-secondary/80 transition-colors text-xs font-medium border border-border disabled:opacity-50"
           >
             {testingAll ? <span className="animate-spin mr-2"></span> : <Play className="w-3 h-3 mr-2" />}
             {t('accountManager.testAll')}
           </button>
           <button
             onClick={onShowAddAccount}
             className="flex items-center gap-2 px-4 py-2 bg-primary text-primary-foreground rounded-lg hover:bg-primary/90 transition-colors font-medium text-sm shadow-sm"
           >
             <Plus className="w-4 h-4" />
             {t('accountManager.addAccount')}
           </button>
         </div>
       </div>

       {testingAll && batchProgress.total > 0 && (
         <div className="p-4 border-b border-border bg-muted/30">
           <div className="flex items-center justify-between text-sm mb-2">
             <span className="font-medium">{t('accountManager.testingAllAccounts')}</span>
             <span className="text-muted-foreground">{batchProgress.current} / {batchProgress.total}</span>
           </div>
           <div className="w-full bg-muted rounded-full h-2 overflow-hidden mb-4">
             <div
               className="bg-primary h-full transition-all duration-300"
               style={{ width: `${(batchProgress.current / batchProgress.total) * 100}%` }}
             />
           </div>
           {batchProgress.results.length > 0 && (
             <div className="grid grid-cols-2 md:grid-cols-4 gap-2 max-h-32 overflow-y-auto custom-scrollbar">
               {batchProgress.results.map((r, i) => (
                 <div key={i} className={clsx(
                   "text-xs px-2 py-1 rounded border truncate",
                   r.success ? "bg-emerald-500/10 border-emerald-500/20 text-emerald-500" : "bg-destructive/10 border-destructive/20 text-destructive"
                 )}>
                   {r.success ? '✓' : '✗'} {r.id}
                 </div>
               ))}
             </div>
           )}
         </div>
       )}

       <div className="divide-y divide-border">
         {loadingAccounts ? (
           <div className="p-8 text-center text-muted-foreground">{t('actions.loading')}</div>
         ) : accounts.length > 0 ? (
           accounts.map((acc, i) => {
             const id = resolveAccountIdentifier(acc)
             return (
               <div key={i} className="p-4 flex flex-col md:flex-row md:items-center justify-between gap-4 hover:bg-muted/50 transition-colors">
                 <div className="flex items-center gap-3 min-w-0">
                   <div className={clsx(
                     "w-2 h-2 rounded-full shrink-0",
-                    acc.has_token ? "bg-emerald-500 shadow-[0_0_8px_rgba(16,185,129,0.5)]" : "bg-amber-500"
+                    acc.test_status === 'failed' ? "bg-red-500 shadow-[0_0_8px_rgba(239,68,68,0.5)]" :
+                      (acc.test_status === 'ok' || acc.has_token) ? "bg-emerald-500 shadow-[0_0_8px_rgba(16,185,129,0.5)]" :
+                        "bg-amber-500"
                   )} />
                   <div className="min-w-0">
-                    <div className="font-medium truncate">{id || '-'}</div>
+                    <div
+                      className="font-medium truncate flex items-center gap-1.5 cursor-pointer hover:text-primary transition-colors group"
+                      onClick={() => copyId(id)}
+                    >
+                      <span className="truncate">{id || '-'}</span>
+                      {copiedId === id
+                        ? <Check className="w-3 h-3 text-emerald-500 shrink-0" />
+                        : <Copy className="w-3 h-3 opacity-0 group-hover:opacity-50 shrink-0 transition-opacity" />
+                      }
+                    </div>
                     <div className="flex items-center gap-2 text-xs text-muted-foreground mt-0.5">
-                      <span>{acc.has_token ? t('accountManager.sessionActive') : t('accountManager.reauthRequired')}</span>
+                      <span>{acc.test_status === 'failed' ? t('accountManager.testStatusFailed') : (acc.test_status === 'ok' || acc.has_token) ? t('accountManager.sessionActive') : t('accountManager.reauthRequired')}</span>
                       {acc.token_preview && (
                         <span className="font-mono bg-muted px-1.5 py-0.5 rounded text-[10px]">
                           {acc.token_preview}
                         </span>
                       )}
                     </div>
                   </div>
                 </div>
                 <div className="flex items-center gap-2 self-start lg:self-auto ml-5 lg:ml-0">
                   <button
                     onClick={() => onTestAccount(id)}
                     disabled={testing[id]}
                     className="px-2 lg:px-3 py-1 lg:py-1.5 text-[10px] lg:text-xs font-medium border border-border rounded-md hover:bg-secondary transition-colors disabled:opacity-50"
                   >
                     {testing[id] ? t('actions.testing') : t('actions.test')}
                   </button>
                   <button
                     onClick={() => onDeleteAccount(id)}
                     className="p-1 lg:p-1.5 text-muted-foreground hover:text-destructive hover:bg-destructive/10 rounded-md transition-colors"
                   >
                     <Trash2 className="w-3.5 h-3.5 lg:w-4 lg:h-4" />
                   </button>
                 </div>
               </div>
             )
           })
         ) : (
-          <div className="p-8 text-center text-muted-foreground">{t('accountManager.noAccounts')}</div>
+          <div className="p-8 text-center text-muted-foreground">{searchQuery ? t('accountManager.searchNoResults') : t('accountManager.noAccounts')}</div>
         )}
       </div>

       {totalPages > 1 && (
         <div className="p-4 border-t border-border flex items-center justify-between">
-          <div className="text-sm text-muted-foreground">
-            {t('accountManager.pageInfo', { current: page, total: totalPages, count: totalAccounts })}
-          </div>
+          <div className="flex items-center gap-3">
+            <div className="text-sm text-muted-foreground">
+              {t('accountManager.pageInfo', { current: page, total: totalPages, count: totalAccounts })}
+            </div>
+            <select
+              value={pageSize}
+              onChange={e => onPageSizeChange(Number(e.target.value))}
+              className="text-sm border border-border rounded-md px-2 py-1 bg-background text-foreground"
+            >
+              {[10, 20, 50, 100, 500, 1000, 2000, 5000].map(s => (
+                <option key={s} value={s}>{s}</option>
+              ))}
+            </select>
+          </div>
           <div className="flex items-center gap-2">
             <button
               onClick={onPrevPage}
               disabled={page <= 1 || loadingAccounts}
               className="p-2 border border-border rounded-md hover:bg-secondary transition-colors disabled:opacity-50 disabled:cursor-not-allowed"
             >
               <ChevronLeft className="w-4 h-4" />
             </button>
             <span className="text-sm font-medium px-2">{page} / {totalPages}</span>
             <button
               onClick={onNextPage}
               disabled={page >= totalPages || loadingAccounts}
               className="p-2 border border-border rounded-md hover:bg-secondary transition-colors disabled:opacity-50 disabled:cursor-not-allowed"
             >
               <ChevronRight className="w-4 h-4" />
             </button>
           </div>
         </div>
       )}
     </div>
   )
 }

View File

@@ -1,68 +1,86 @@
 import { useEffect, useState } from 'react'

 export function useAccountsData({ apiFetch }) {
   const [queueStatus, setQueueStatus] = useState(null)
   const [keysExpanded, setKeysExpanded] = useState(false)
   const [accounts, setAccounts] = useState([])
   const [page, setPage] = useState(1)
-  const [pageSize] = useState(10)
+  const [pageSize, setPageSize] = useState(10)
   const [totalPages, setTotalPages] = useState(1)
   const [totalAccounts, setTotalAccounts] = useState(0)
   const [loadingAccounts, setLoadingAccounts] = useState(false)

   const resolveAccountIdentifier = (acc) => {
     if (!acc || typeof acc !== 'object') return ''
     return String(acc.identifier || acc.email || acc.mobile || '').trim()
   }

-  const fetchAccounts = async (targetPage = page) => {
+  const [searchQuery, setSearchQuery] = useState('')
+
+  const fetchAccounts = async (targetPage = page, targetPageSize = pageSize, targetQuery = searchQuery) => {
     setLoadingAccounts(true)
     try {
-      const res = await apiFetch(`/admin/accounts?page=${targetPage}&page_size=${pageSize}`)
+      let url = `/admin/accounts?page=${targetPage}&page_size=${targetPageSize}`
+      if (targetQuery.trim()) url += `&q=${encodeURIComponent(targetQuery.trim())}`
+      const res = await apiFetch(url)
       if (res.ok) {
         const data = await res.json()
         setAccounts(data.items || [])
         setTotalPages(data.total_pages || 1)
         setTotalAccounts(data.total || 0)
         setPage(data.page || 1)
       }
     } catch (e) {
       console.error('Failed to fetch accounts:', e)
     } finally {
       setLoadingAccounts(false)
     }
   }

+  const changePageSize = (newSize) => {
+    setPageSize(newSize)
+    fetchAccounts(1, newSize)
+  }
+
+  const handleSearchChange = (query) => {
+    setSearchQuery(query)
+    fetchAccounts(1, pageSize, query)
+  }
+
   const fetchQueueStatus = async () => {
     try {
       const res = await apiFetch('/admin/queue/status')
       if (res.ok) {
         const data = await res.json()
         setQueueStatus(data)
       }
     } catch (e) {
       console.error('Failed to fetch queue status:', e)
     }
   }

   useEffect(() => {
     fetchAccounts()
     fetchQueueStatus()
     const interval = setInterval(fetchQueueStatus, 5000)
     return () => clearInterval(interval)
   }, [])

   return {
     queueStatus,
     keysExpanded,
     setKeysExpanded,
     accounts,
     page,
+    pageSize,
     totalPages,
     totalAccounts,
     loadingAccounts,
     fetchAccounts,
+    changePageSize,
     resolveAccountIdentifier,
+    searchQuery,
+    handleSearchChange,
   }
 }

View File

@@ -1,294 +1,297 @@
 {
   "language": {
     "label": "Language",
     "english": "English",
     "chinese": "中文"
   },
   "nav": {
     "accounts": {
       "label": "Account Management",
       "desc": "Manage the DeepSeek account pool"
     },
     "test": {
       "label": "API Test",
       "desc": "Test API connectivity and responses"
     },
     "import": {
       "label": "Batch Import",
       "desc": "Bulk import account configuration"
     },
     "vercel": {
       "label": "Vercel Sync",
       "desc": "Sync configuration to Vercel"
     },
     "settings": {
       "label": "Settings",
       "desc": "Edit runtime and security settings online"
     }
   },
   "sidebar": {
     "onlineAdminConsole": "Online Admin Console",
     "systemStatus": "System Status",
     "statusOnline": "Online",
     "accounts": "Accounts",
     "keys": "Keys",
     "signOut": "Sign out"
   },
   "auth": {
     "expired": "Authentication expired. Please sign in again.",
     "checking": "Checking authentication status..."
   },
   "errors": {
     "fetchConfig": "Failed to fetch configuration: {error}"
   },
   "actions": {
     "cancel": "Cancel",
     "add": "Add",
     "delete": "Delete",
     "copy": "Copy",
     "generate": "Generate",
     "test": "Test",
     "testing": "Testing...",
     "loading": "Loading..."
   },
   "messages": {
     "deleted": "Deleted successfully",
     "deleteFailed": "Delete failed",
     "failedToAdd": "Failed to add",
     "networkError": "Network error.",
     "requestFailed": "Request failed.",
     "generationStopped": "Generation stopped.",
     "invalidJson": "Invalid JSON format.",
     "importFailed": "Import failed.",
     "copyFailed": "Copy failed."
   },
   "landing": {
     "adminConsole": "Admin Console",
     "apiStatus": "API Status",
     "features": {
       "compatibility": {
         "title": "Full Compatibility",
         "desc": "OpenAI & Claude format support"
       },
       "loadBalancing": {
         "title": "Load Balancing",
         "desc": "Smart rotation with stable throughput"
       },
       "reasoning": {
         "title": "Deep Reasoning",
         "desc": "Expose reasoning traces when enabled"
       },
       "search": {
         "title": "Web Search",
         "desc": "Integrated native web search"
       }
     }
   },
   "accountManager": {
     "addKeySuccess": "API key added successfully.",
     "addAccountSuccess": "Account added successfully.",
     "requiredFields": "Password and email/mobile are required.",
     "deleteKeyConfirm": "Are you sure you want to delete this API key?",
     "deleteAccountConfirm": "Are you sure you want to delete this account?",
     "invalidIdentifier": "Invalid account identifier. Operation aborted.",
     "testAllConfirm": "Test API connectivity for all accounts?",
     "testAllCompleted": "Completed: {success}/{total} available",
     "testFailed": "Test failed: {error}",
     "available": "Available",
     "inUse": "In use",
     "totalPool": "Total pool",
     "accountsUnit": "accounts",
     "threadsUnit": "threads",
     "apiKeysTitle": "API Keys",
     "apiKeysDesc": "Manage the API access key pool",
     "addKey": "Add key",
     "copied": "Copied",
     "copyKeyTitle": "Copy key",
     "deleteKeyTitle": "Delete key",
     "noApiKeys": "No API keys found.",
     "accountsTitle": "DeepSeek Accounts",
     "accountsDesc": "Manage the DeepSeek account pool",
     "testAll": "Test all",
     "addAccount": "Add account",
     "testingAllAccounts": "Testing all accounts...",
     "sessionActive": "Session active",
     "reauthRequired": "Re-auth required",
+    "testStatusFailed": "Last test failed",
     "noAccounts": "No accounts found.",
     "modalAddKeyTitle": "Add API key",
     "newKeyLabel": "New key value",
     "newKeyPlaceholder": "Enter a custom API key",
     "generate": "Generate",
     "generateHint": "Click Generate to create a random key.",
     "addKeyLoading": "Adding...",
     "addKeyAction": "Add key",
     "modalAddAccountTitle": "Add DeepSeek account",
     "emailOptional": "Email (optional)",
     "mobileOptional": "Mobile (optional)",
     "passwordLabel": "Password",
     "passwordPlaceholder": "Account password",
     "addAccountLoading": "Adding...",
     "addAccountAction": "Add account",
-    "pageInfo": "Page {current}/{total}, {count} accounts total"
+    "pageInfo": "Page {current}/{total}, {count} accounts total",
+    "searchPlaceholder": "Search accounts...",
+    "searchNoResults": "No accounts match your search"
   },
   "apiTester": {
     "defaultMessage": "Hello, please introduce yourself in one sentence.",
     "models": {
       "chat": "Non-reasoning model",
       "reasoner": "Reasoning model",
       "chatSearch": "Non-reasoning model (with search)",
       "reasonerSearch": "Reasoning model (with search)"
     },
     "missingApiKey": "Please provide an API key.",
     "requestFailed": "Request failed.",
     "networkError": "Network error: {error}",
     "testSuccess": "{account}: Test successful ({time}ms)",
     "config": "Configuration",
     "modelLabel": "Model",
     "streamMode": "Streaming",
     "accountSelector": "Account",
     "autoRandom": "🤖 Auto / Random",
     "apiKeyOptional": "API Key (optional)",
     "apiKeyDefault": "Default: ...{suffix}",
     "apiKeyPlaceholder": "Enter a custom key",
     "modeManaged": "Managed key mode (uses account pool).",
     "modeDirect": "Direct token mode (requires a valid DeepSeek token).",
     "statusError": "Error",
     "reasoningTrace": "Reasoning Trace",
     "generating": "Generating response...",
     "enterMessage": "Enter a message...",
     "adminConsoleLabel": "DeepSeek admin console"
   },
   "batchImport": {
     "templates": {
       "full": {
         "name": "Full configuration template",
         "desc": "Includes keys, accounts, and model mapping"
       },
       "emailOnly": {
         "name": "Email-only accounts",
         "desc": "Batch import accounts using email login"
       },
       "mobileOnly": {
         "name": "Mobile-only accounts",
         "desc": "Batch import accounts using mobile login"
       },
       "keysOnly": {
         "name": "API keys only",
         "desc": "Add API access keys only"
       }
     },
     "enterJson": "Please provide JSON configuration content.",
     "importSuccess": "Import successful: {keys} keys, {accounts} accounts",
     "templateLoaded": "Template loaded: {name}",
     "currentConfigLoaded": "Current configuration loaded.",
     "fetchConfigFailed": "Failed to fetch configuration.",
     "copySuccess": "Base64 configuration copied to clipboard.",
     "quickTemplates": "Quick Templates",
     "dataExport": "Data Export",
     "dataExportDesc": "Copy the Base64-encoded configuration for Vercel environment variables.",
"copyBase64": "Copy Base64 config", "quickTemplates": "Quick Templates",
"copied": "Copied", "dataExport": "Data Export",
"variableName": "Variable name", "dataExportDesc": "Copy the Base64-encoded configuration for Vercel environment variables.",
"jsonEditor": "JSON Editor", "copyBase64": "Copy Base64 config",
"loadCurrentConfig": "Load current config", "copied": "Copied",
"applyConfig": "Apply config", "variableName": "Variable name",
"importing": "Importing...", "jsonEditor": "JSON Editor",
"importComplete": "Import complete", "loadCurrentConfig": "Load current config",
"importSummary": "Imported {keys} API keys and updated {accounts} accounts." "applyConfig": "Apply config",
}, "importing": "Importing...",
"settings": { "importComplete": "Import complete",
"loadFailed": "Failed to load settings.", "importSummary": "Imported {keys} API keys and updated {accounts} accounts."
"nonJsonResponse": "Unexpected non-JSON response from server (status: {status}).", },
"save": "Save settings", "settings": {
"saving": "Saving...", "loadFailed": "Failed to load settings.",
"saveSuccess": "Settings saved and hot reloaded.", "nonJsonResponse": "Unexpected non-JSON response from server (status: {status}).",
"saveFailed": "Failed to save settings.", "save": "Save settings",
"securityTitle": "Security", "saving": "Saving...",
"jwtExpireHours": "JWT expiry (hours)", "saveSuccess": "Settings saved and hot reloaded.",
"newPassword": "New admin password", "saveFailed": "Failed to save settings.",
"newPasswordPlaceholder": "Enter new password (min 4 chars)", "securityTitle": "Security",
"updatePassword": "Update password", "jwtExpireHours": "JWT expiry (hours)",
"updating": "Updating...", "newPassword": "New admin password",
"passwordTooShort": "Password must be at least 4 characters.", "newPasswordPlaceholder": "Enter new password (min 4 chars)",
"passwordUpdated": "Password updated. Please sign in again.", "updatePassword": "Update password",
"passwordUpdateFailed": "Failed to update password.", "updating": "Updating...",
"runtimeTitle": "Concurrency & Queue", "passwordTooShort": "Password must be at least 4 characters.",
"accountMaxInflight": "Per-account max inflight", "passwordUpdated": "Password updated. Please sign in again.",
"accountMaxQueue": "Account max queue size", "passwordUpdateFailed": "Failed to update password.",
"globalMaxInflight": "Global max inflight", "runtimeTitle": "Concurrency & Queue",
"behaviorTitle": "Behavior", "accountMaxInflight": "Per-account max inflight",
"toolcallMode": "Toolcall mode", "accountMaxQueue": "Account max queue size",
"earlyEmitConfidence": "Early emit confidence", "globalMaxInflight": "Global max inflight",
"responsesTTL": "Responses store TTL (seconds)", "behaviorTitle": "Behavior",
"embeddingsProvider": "Embeddings provider", "toolcallMode": "Toolcall mode",
"modelTitle": "Model mapping", "earlyEmitConfidence": "Early emit confidence",
"claudeMapping": "Claude mapping (JSON)", "responsesTTL": "Responses store TTL (seconds)",
"modelAliases": "Model aliases (JSON)", "embeddingsProvider": "Embeddings provider",
"backupTitle": "Backup & Restore", "modelTitle": "Model mapping",
"loadExport": "Load current export", "claudeMapping": "Claude mapping (JSON)",
"importModeMerge": "Merge import (default)", "modelAliases": "Model aliases (JSON)",
"importModeReplace": "Replace all import", "backupTitle": "Backup & Restore",
"importNow": "Import now", "loadExport": "Load current export",
"importing": "Importing...", "importModeMerge": "Merge import (default)",
"importPlaceholder": "Paste config JSON to import", "importModeReplace": "Replace all import",
"importEmpty": "Please input import JSON.", "importNow": "Import now",
"importInvalidJson": "Import JSON is invalid.", "importing": "Importing...",
"importFailed": "Import failed.", "importPlaceholder": "Paste config JSON to import",
"importSuccess": "Config imported (mode: {mode}).", "importEmpty": "Please input import JSON.",
"exportFailed": "Export failed.", "importInvalidJson": "Import JSON is invalid.",
"exportLoaded": "Current export loaded.", "importFailed": "Import failed.",
"exportJson": "Export JSON", "importSuccess": "Config imported (mode: {mode}).",
"invalidJsonField": "{field} is not a valid JSON object.", "exportFailed": "Export failed.",
"defaultPasswordWarning": "You are using the default admin password \"admin\". Please change it.", "exportLoaded": "Current export loaded.",
"vercelSyncHint": "Configuration changed. For Vercel deployments, sync manually in Vercel Sync and redeploy.", "exportJson": "Export JSON",
"autoFetchPaused": "Auto loading paused after {count} failures: {error}", "invalidJsonField": "{field} is not a valid JSON object.",
"retryLoad": "Retry now" "defaultPasswordWarning": "You are using the default admin password \"admin\". Please change it.",
}, "vercelSyncHint": "Configuration changed. For Vercel deployments, sync manually in Vercel Sync and redeploy.",
"login": { "autoFetchPaused": "Auto loading paused after {count} failures: {error}",
"welcome": "Welcome back", "retryLoad": "Retry now"
"subtitle": "Enter your admin key to continue", },
"adminKeyLabel": "Admin key", "login": {
"adminKeyPlaceholder": "Enter your admin key...", "welcome": "Welcome back",
"rememberSession": "Remember this session", "subtitle": "Enter your admin key to continue",
"signIn": "Sign in", "adminKeyLabel": "Admin key",
"secureConnection": "Secure connection", "adminKeyPlaceholder": "Enter your admin key...",
"adminPortal": "DS2API admin portal", "rememberSession": "Remember this session",
"signInFailed": "Sign-in failed.", "signIn": "Sign in",
"networkError": "Network error: {error}" "secureConnection": "Secure connection",
}, "adminPortal": "DS2API admin portal",
"vercel": { "signInFailed": "Sign-in failed.",
"tokenRequired": "Vercel access token is required.", "networkError": "Network error: {error}"
"projectRequired": "Project ID is required.", },
"syncFailed": "Sync failed.", "vercel": {
"networkError": "Network error.", "tokenRequired": "Vercel access token is required.",
"title": "Vercel Deployment", "projectRequired": "Project ID is required.",
"description": "Sync the current keys and accounts directly to Vercel environment variables.", "syncFailed": "Sync failed.",
"tokenLabel": "Vercel Access Token", "networkError": "Network error.",
"getToken": "Get token", "title": "Vercel Deployment",
"tokenPlaceholderPreconfig": "Using preconfigured token", "description": "Sync the current keys and accounts directly to Vercel environment variables.",
"tokenPlaceholder": "Enter Vercel access token", "tokenLabel": "Vercel Access Token",
"projectIdLabel": "Project ID", "getToken": "Get token",
"projectIdHint": "Find it in Project Settings → General.", "tokenPlaceholderPreconfig": "Using preconfigured token",
"teamIdLabel": "Team ID", "tokenPlaceholder": "Enter Vercel access token",
"optional": "optional", "projectIdLabel": "Project ID",
"syncing": "Syncing...", "projectIdHint": "Find it in Project Settings → General.",
"syncRedeploy": "Sync & redeploy", "teamIdLabel": "Team ID",
"redeployHint": "This triggers a Vercel redeploy and usually takes 3060 seconds.", "optional": "optional",
"syncSucceeded": "Sync succeeded", "syncing": "Syncing...",
"syncFailedLabel": "Sync failed", "syncRedeploy": "Sync & redeploy",
"openDeployment": "Open deployment", "redeployHint": "This triggers a Vercel redeploy and usually takes 3060 seconds.",
"statusSynced": "Synced", "syncSucceeded": "Sync succeeded",
"statusNotSynced": "Not synced", "syncFailedLabel": "Sync failed",
"statusNeverSynced": "Never synced", "openDeployment": "Open deployment",
"lastSyncTime": "Last sync: {time}", "statusSynced": "Synced",
"pollPaused": "Status polling paused after {count} failures.", "statusNotSynced": "Not synced",
"manualRefresh": "Refresh manually", "statusNeverSynced": "Never synced",
"howItWorks": "How it works", "lastSyncTime": "Last sync: {time}",
"steps": { "pollPaused": "Status polling paused after {count} failures.",
"one": "The current configuration (keys and accounts) is exported as JSON.", "manualRefresh": "Refresh manually",
"two": "The JSON is Base64-encoded for safe formatting.", "howItWorks": "How it works",
"three": "Update the env var in Vercel:", "steps": {
"four": "Trigger a redeploy to apply the updated environment variables." "one": "The current configuration (keys and accounts) is exported as JSON.",
} "two": "The JSON is Base64-encoded for safe formatting.",
} "three": "Update the env var in Vercel:",
} "four": "Trigger a redeploy to apply the updated environment variables."
}
}
}


@@ -1,294 +1,297 @@
{
  "language": {
    "label": "语言",
    "english": "English",
    "chinese": "中文"
  },
  "nav": {
    "accounts": {
      "label": "账号管理",
      "desc": "管理 DeepSeek 账号池"
    },
    "test": {
      "label": "API 测试",
      "desc": "测试 API 连接与响应"
    },
    "import": {
      "label": "批量导入",
      "desc": "批量导入账号配置"
    },
    "vercel": {
      "label": "Vercel 同步",
      "desc": "同步配置到 Vercel"
    },
    "settings": {
      "label": "设置中心",
      "desc": "在线修改系统设置与配置"
    }
  },
  "sidebar": {
    "onlineAdminConsole": "在线管理面板",
    "systemStatus": "系统状态",
    "statusOnline": "在线",
    "accounts": "账号",
    "keys": "密钥",
    "signOut": "退出登录"
  },
  "auth": {
    "expired": "认证已过期,请重新登录",
    "checking": "正在检查登录状态..."
  },
  "errors": {
    "fetchConfig": "获取配置失败: {error}"
  },
  "actions": {
    "cancel": "取消",
    "add": "添加",
    "delete": "删除",
    "copy": "复制",
    "generate": "生成",
    "test": "测试",
    "testing": "正在测试...",
    "loading": "加载中..."
  },
  "messages": {
    "deleted": "删除成功",
    "deleteFailed": "删除失败",
    "failedToAdd": "添加失败",
    "networkError": "网络错误",
    "requestFailed": "请求失败",
    "generationStopped": "已停止生成",
    "invalidJson": "无效的 JSON 格式",
    "importFailed": "导入失败",
    "copyFailed": "复制失败"
  },
  "landing": {
    "adminConsole": "管理面板",
    "apiStatus": "API 状态",
    "features": {
      "compatibility": {
        "title": "全面兼容",
        "desc": "适配 OpenAI 与 Claude 格式"
      },
      "loadBalancing": {
        "title": "负载均衡",
        "desc": "智能轮询,稳定高效"
      },
      "reasoning": {
        "title": "深度思考",
        "desc": "支持推理过程输出"
      },
      "search": {
        "title": "联网搜索",
        "desc": "集成原生网页搜索能力"
      }
    }
  },
  "accountManager": {
    "addKeySuccess": "API 密钥添加成功",
    "addAccountSuccess": "账号添加成功",
    "requiredFields": "需要填写密码以及邮箱或手机号",
    "deleteKeyConfirm": "确定要删除此 API 密钥吗?",
    "deleteAccountConfirm": "确定要删除此账号吗?",
    "invalidIdentifier": "账号标识无效,无法执行操作",
    "testAllConfirm": "测试所有账号的 API 连通性?",
    "testAllCompleted": "完成:{success}/{total} 可用",
    "testFailed": "测试失败: {error}",
    "available": "可用",
    "inUse": "正在使用",
    "totalPool": "账号池总数",
    "accountsUnit": "个账号",
    "threadsUnit": "线程",
    "apiKeysTitle": "API 密钥",
    "apiKeysDesc": "管理 API 访问密钥池",
    "addKey": "添加密钥",
    "copied": "已复制",
    "copyKeyTitle": "复制密钥",
    "deleteKeyTitle": "删除密钥",
    "noApiKeys": "未找到 API 密钥",
    "accountsTitle": "DeepSeek 账号",
    "accountsDesc": "管理 DeepSeek 账号池",
    "testAll": "测试全部",
    "addAccount": "添加账号",
    "testingAllAccounts": "正在测试所有账号...",
    "sessionActive": "已建立会话",
    "reauthRequired": "需重新登录",
    "testStatusFailed": "上次测试失败",
    "noAccounts": "未找到任何账号",
    "modalAddKeyTitle": "添加 API 密钥",
    "newKeyLabel": "新密钥",
    "newKeyPlaceholder": "输入自定义 API 密钥",
    "generate": "生成",
    "generateHint": "点击「生成」自动创建随机密钥",
    "addKeyLoading": "添加中...",
    "addKeyAction": "添加密钥",
    "modalAddAccountTitle": "添加 DeepSeek 账号",
    "emailOptional": "邮箱 (可选)",
    "mobileOptional": "手机号 (可选)",
    "passwordLabel": "密码",
    "passwordPlaceholder": "账号密码",
    "addAccountLoading": "添加中...",
    "addAccountAction": "添加账号",
    "pageInfo": "第 {current}/{total} 页,共 {count} 个账号",
    "searchPlaceholder": "搜索账号...",
    "searchNoResults": "未找到匹配的账号"
  },
  "apiTester": {
    "defaultMessage": "你好,请用一句话介绍你自己。",
    "models": {
      "chat": "非思考模型",
      "reasoner": "思考模型",
      "chatSearch": "非思考模型 (带搜索)",
      "reasonerSearch": "思考模型 (带搜索)"
    },
    "missingApiKey": "请提供 API 密钥",
    "requestFailed": "请求失败",
    "networkError": "网络错误: {error}",
    "testSuccess": "{account}: 测试成功 ({time}ms)",
    "config": "配置",
    "modelLabel": "模型",
    "streamMode": "流式模式",
    "accountSelector": "选择账号",
    "autoRandom": "🤖 自动 / 随机",
    "apiKeyOptional": "API 密钥 (可选)",
    "apiKeyDefault": "默认: ...{suffix}",
    "apiKeyPlaceholder": "输入自定义密钥",
    "modeManaged": "当前使用托管 key 模式(会走账号池)。",
    "modeDirect": "当前使用直通 token 模式(需填写有效 DeepSeek token)。",
    "statusError": "错误",
    "reasoningTrace": "思维链过程",
    "generating": "正在生成响应...",
    "enterMessage": "输入消息...",
    "adminConsoleLabel": "DeepSeek 管理员界面"
  },
  "batchImport": {
    "templates": {
      "full": {
        "name": "全量配置模板",
        "desc": "包含密钥、账号及模型映射"
      },
      "emailOnly": {
        "name": "仅邮箱账号",
        "desc": "批量导入邮箱格式账号"
      },
      "mobileOnly": {
        "name": "仅手机号账号",
        "desc": "批量导入手机号格式账号"
      },
      "keysOnly": {
        "name": "仅 API 密钥",
        "desc": "仅添加 API 访问密钥"
      }
    },
    "enterJson": "请输入 JSON 配置内容",
    "importSuccess": "导入成功: {keys} 个密钥, {accounts} 个账号",
    "templateLoaded": "已加载模板: {name}",
    "currentConfigLoaded": "当前配置已加载",
    "fetchConfigFailed": "获取配置失败",
    "copySuccess": "Base64 配置已复制到剪贴板",
    "quickTemplates": "快速模板",
    "dataExport": "数据导出",
    "dataExportDesc": "获取配置的 Base64 字符串,用于 Vercel 环境变量。",
    "copyBase64": "复制 Base64 配置",
    "copied": "已复制",
    "variableName": "变量名",
    "jsonEditor": "JSON 编辑器",
    "loadCurrentConfig": "加载当前配置",
    "applyConfig": "应用配置",
    "importing": "正在导入...",
    "importComplete": "导入操作已完成",
    "importSummary": "成功导入了 {keys} 个 API 密钥,并更新了 {accounts} 个账号。"
  },
  "settings": {
    "loadFailed": "加载设置失败",
    "nonJsonResponse": "服务端返回了非 JSON 响应(状态码:{status})",
    "save": "保存设置",
    "saving": "保存中...",
    "saveSuccess": "设置已保存并热更新生效",
    "saveFailed": "保存设置失败",
    "securityTitle": "安全设置",
    "jwtExpireHours": "JWT 有效期(小时)",
    "newPassword": "面板新密码",
    "newPasswordPlaceholder": "输入新密码(至少 4 位)",
    "updatePassword": "修改密码",
    "updating": "更新中...",
    "passwordTooShort": "新密码至少 4 位",
    "passwordUpdated": "密码已更新,需重新登录",
    "passwordUpdateFailed": "密码更新失败",
    "runtimeTitle": "并发与队列",
    "accountMaxInflight": "每账号并发上限",
    "accountMaxQueue": "账号等待队列上限",
    "globalMaxInflight": "全局并发上限",
    "behaviorTitle": "行为设置",
    "toolcallMode": "Toolcall 模式",
    "earlyEmitConfidence": "早发置信度",
    "responsesTTL": "Responses 缓存 TTL",
    "embeddingsProvider": "Embeddings Provider",
    "modelTitle": "模型映射",
    "claudeMapping": "Claude 映射(JSON)",
    "modelAliases": "模型别名(JSON)",
    "backupTitle": "备份与恢复",
    "loadExport": "加载当前导出",
    "importModeMerge": "合并导入(默认)",
    "importModeReplace": "全量覆盖导入",
    "importNow": "立即导入",
    "importing": "导入中...",
    "importPlaceholder": "粘贴要导入的 JSON 配置",
    "importEmpty": "请先输入导入 JSON",
    "importInvalidJson": "导入 JSON 格式无效",
    "importFailed": "导入失败",
    "importSuccess": "配置导入成功(模式:{mode})",
    "exportFailed": "导出失败",
    "exportLoaded": "已加载当前配置导出",
    "exportJson": "导出 JSON",
    "invalidJsonField": "{field} 不是有效 JSON 对象",
    "defaultPasswordWarning": "当前使用默认密码 admin,请尽快在此修改。",
    "vercelSyncHint": "当前配置已更新。Vercel 部署请到 Vercel 同步页面手动同步并重部署。",
    "autoFetchPaused": "自动加载已暂停:连续失败 {count} 次({error})",
    "retryLoad": "立即重试"
  },
  "login": {
    "welcome": "欢迎回来",
    "subtitle": "请输入管理员密钥以继续",
    "adminKeyLabel": "管理员密钥",
    "adminKeyPlaceholder": "输入您的管理员密钥...",
    "rememberSession": "记住登录状态",
    "signIn": "登录",
    "secureConnection": "安全连接",
    "adminPortal": "DS2API 管理员门户",
    "signInFailed": "登录失败",
    "networkError": "网络错误: {error}"
  },
  "vercel": {
    "tokenRequired": "需要 Vercel 访问令牌",
    "projectRequired": "需要项目 ID",
    "syncFailed": "同步失败",
    "networkError": "网络错误",
    "title": "Vercel 部署",
    "description": "将当前密钥和账号配置直接同步到 Vercel 环境变量中。",
    "tokenLabel": "Vercel 访问令牌",
    "getToken": "获取令牌",
    "tokenPlaceholderPreconfig": "正在使用预配置的令牌",
    "tokenPlaceholder": "输入 Vercel 访问令牌",
    "projectIdLabel": "项目 ID",
    "projectIdHint": "可在项目设置 (Project Settings) → 常规 (General) 中找到",
    "teamIdLabel": "团队 ID",
    "optional": "可选",
    "syncing": "正在同步...",
    "syncRedeploy": "同步并重新部署",
    "redeployHint": "这将触发 Vercel 的重新部署,大约需要 30-60 秒。",
    "syncSucceeded": "同步成功",
    "syncFailedLabel": "同步失败",
    "openDeployment": "访问部署地址",
    "statusSynced": "已同步",
    "statusNotSynced": "未同步",
    "statusNeverSynced": "从未同步",
    "lastSyncTime": "上次同步: {time}",
    "pollPaused": "状态轮询已暂停:连续失败 {count} 次。",
    "manualRefresh": "手动刷新",
    "howItWorks": "工作原理",
    "steps": {
      "one": "当前配置 (密钥和账号) 被导出为 JSON 字符串。",
      "two": "JSON 被编码为 Base64 以确保格式兼容性。",
      "three": "更新 Vercel 项目中的环境变量:",
      "four": "触发重新部署以应用新的环境变量。"
    }
  }
}

zeabur.yaml Normal file

@@ -0,0 +1,60 @@
# yaml-language-server: $schema=https://schema.zeabur.app/template.json
apiVersion: zeabur.com/v1
kind: Template
metadata:
  name: DS2API
spec:
  description: DeepSeek Web 对话转 OpenAI/Claude/Gemini 兼容 API(Go 实现,含 WebUI)
  tags:
    - DeepSeek
    - API
    - Go
  readme: |-
    # DS2API (Zeabur)
    ## After deployment
    - Admin panel: `/admin`
    - Health check: `/healthz`
    - Config is persisted at `/data/config.json` (mounted volume)
    ## First-time setup
    1. Open your service URL, then visit `/admin`
    2. Login with `DS2API_ADMIN_KEY` (shown in Zeabur env/instructions)
    3. Import / edit config in Admin UI (saved to `/data/config.json`)
  services:
    - name: ds2api
      template: GIT
      spec:
        source:
          source: GITHUB
          repo: 1139136822
          branch: main
          rootDirectory: /
        ports:
          - id: web
            port: 5001
            type: HTTP
        volumes:
          - id: data
            dir: /data
        env:
          PORT:
            default: "5001"
          LOG_LEVEL:
            default: "INFO"
          DS2API_ADMIN_KEY:
            default: ${PASSWORD}
            expose: true
          DS2API_CONFIG_PATH:
            default: /data/config.json
        instructions:
          - title: Admin panel
            content: Visit `/admin` on your service URL.
          - title: DS2API admin key
            content: ${DS2API_ADMIN_KEY}
        healthCheck:
          type: HTTP
          port: web
          http:
            path: /healthz
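
The Vercel sync flow described by the admin UI strings above (export the config as JSON, Base64-encode it, set it as an environment variable, redeploy) can be reproduced from a shell. A minimal sketch, using a hypothetical placeholder payload rather than a real `/admin` export:

```shell
# Hypothetical config payload standing in for the real admin-panel export.
CONFIG_JSON='{"keys":["sk-example"],"accounts":[]}'

# Base64-encode on a single line, as needed for an env-var value.
CONFIG_B64=$(printf '%s' "$CONFIG_JSON" | base64 | tr -d '\n')
echo "$CONFIG_B64"

# Round-trip decode to confirm the value survives env-var transport.
printf '%s' "$CONFIG_B64" | base64 -d
```

This one-line Base64 string is the same shape of value the "Copy Base64 config" button produces for the Vercel environment variable.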