mirror of
https://github.com/CJackHwang/ds2api.git
synced 2026-05-05 00:45:29 +08:00
Merge pull request #393 from CJackHwang/dev
Merge pull request #391 from BigUncle/fix/vercel-admin-history-rewrite

Fix: add missing Vercel rewrite rules for admin API routes
@@ -318,7 +318,7 @@ go run ./cmd/ds2api
 - `runtime`: account concurrency, queueing, and token refresh strategy; hot-reloadable via Admin Settings.
 - `auto_delete.mode`: remote session cleanup strategy after each request; supports `none` / `single` / `all`.
 - `history_split`: legacy turn-splitting field; deprecated and ignored, kept only for compatibility with old configs.
-- `current_input_file`: the only active standalone split strategy; enabled by default with a threshold of `0`. When triggered, the full context is merged and uploaded as a `history.txt` context file.
+- `current_input_file`: the only active standalone split strategy; enabled by default with a threshold of `0`. When triggered, the full context is merged and uploaded as a `DS2API_HISTORY.txt` context file.
 - If `current_input_file` is disabled, requests pass through directly without uploading a split context file.
 - `thinking_injection`: enabled by default; appends a thinking-enhancement prompt to the end of the latest user message to improve reasoning stability before heavy inference and tool calls; when `prompt` is empty, the built-in default prompt is used.
@@ -306,7 +306,7 @@ Common fields:
 - `runtime`: account concurrency, queueing, and token refresh behavior, hot-reloadable via Admin Settings.
 - `auto_delete.mode`: remote session cleanup after each request, supporting `none` / `single` / `all`.
 - `history_split`: legacy multi-turn history split field, now ignored and kept only for backward-compatible config loading.
-- `current_input_file`: the only active split mode; it is enabled by default and uploads the full context as a `history.txt` context file once the character threshold is reached.
+- `current_input_file`: the only active split mode; it is enabled by default and uploads the full context as a `DS2API_HISTORY.txt` context file once the character threshold is reached.
 - If you turn off `current_input_file`, requests pass through directly without uploading any split context file.

For the full environment variable list, see [docs/DEPLOY.en.md](docs/DEPLOY.en.md). For auth behavior, see [API.en.md](API.en.md#authentication).
@@ -271,6 +271,7 @@ VERCEL_TEAM_ID=team_xxxxxxxxxxxx # optional for personal accounts
 | `VERCEL_TOKEN` | Vercel sync token | — |
 | `VERCEL_PROJECT_ID` | Vercel project ID | — |
 | `VERCEL_TEAM_ID` | Vercel team ID | — |
+| `DS2API_CHAT_HISTORY_PATH` | Chat history storage path (must be set to `/tmp/chat_history.json` on Vercel, otherwise unavailable due to read-only filesystem) | `data/chat_history.json` |
 | `DS2API_VERCEL_PROTECTION_BYPASS` | Deployment protection bypass for internal Node→Go calls | — |
### 3.4 Vercel Architecture

@@ -360,6 +361,22 @@ If API responses return Vercel HTML `Authentication Required`:
 - **Option B**: Add `x-vercel-protection-bypass` header to requests
 - **Option C**: Set `VERCEL_AUTOMATION_BYPASS_SECRET` (or `DS2API_VERCEL_PROTECTION_BYPASS`) for internal Node→Go calls
+
+#### Chat History Unavailable (read-only file system)
+
+```text
+create chat history dir: mkdir /var/task/data: read-only file system
+```
+
+**Cause**: Vercel Serverless functions have a read-only filesystem (`/var/task`). Chat history fails because it cannot create directories there.
+
+**Fix**: Add the following in Vercel Project Settings → Environment Variables:
+
+```text
+DS2API_CHAT_HISTORY_PATH=/tmp/chat_history.json
+```
+
+`/tmp` is the only writable directory in Vercel Serverless. Data is ephemeral (not persisted across cold starts), but the feature works within a single instance lifetime.
### 3.6 Build Artifacts Not Committed

- `static/admin` directory is not in Git
@@ -271,6 +271,7 @@ VERCEL_TEAM_ID=team_xxxxxxxxxxxx # optional for personal accounts
 | `VERCEL_TOKEN` | Vercel sync token | — |
 | `VERCEL_PROJECT_ID` | Vercel project ID | — |
 | `VERCEL_TEAM_ID` | Vercel team ID | — |
+| `DS2API_CHAT_HISTORY_PATH` | Chat history storage path (must be set to `/tmp/chat_history.json` on Vercel, otherwise unavailable because the filesystem is read-only) | `data/chat_history.json` |
 | `DS2API_VERCEL_PROTECTION_BYPASS` | Deployment protection bypass secret (internal Node→Go calls) | — |

### 3.3 Runtime Behavior Configuration (set via the Admin API)

@@ -370,6 +371,22 @@ No Output Directory named "public" found after the Build completed.
 - **Option B**: add an `x-vercel-protection-bypass` header to requests
 - **Option C**: set `VERCEL_AUTOMATION_BYPASS_SECRET` (or `DS2API_VERCEL_PROTECTION_BYPASS`); this only affects internal Node→Go calls
+
+#### Chat History Unavailable (read-only file system)
+
+```text
+create chat history dir: mkdir /var/task/data: read-only file system
+```
+
+**Cause**: the filesystem of Vercel Serverless functions (`/var/task`) is read-only, so chat history fails when it tries to create a directory under that path.
+
+**Fix**: add the following in Vercel Project Settings → Environment Variables:
+
+```text
+DS2API_CHAT_HISTORY_PATH=/tmp/chat_history.json
+```
+
+`/tmp` is the only writable directory in the Vercel Serverless environment. Data is not persisted across cold starts (ephemeral), but the feature works normally within a single instance lifetime.

### 3.6 No Build Artifacts Committed to the Repository

- the `static/admin` directory is not in Git
@@ -249,7 +249,7 @@ OpenAI file-related implementation:

The compatibility layer now keeps only `current_input_file` as the single split mechanism; the old `history_split` is deprecated, retained only as a field for loading legacy configs, and no longer participates in request handling.

-- `current_input_file` is enabled by default; it merges the "full context" into a `history.txt` context file. When the plain-text length of the latest user turn reaches `current_input_file.min_chars` (default `0`), the compatibility layer uploads a context file named `history.txt` and keeps only a neutral user message in the live prompt asking the model to answer the latest request directly, without exposing the filename or asking the model to read a local file.
+- `current_input_file` is enabled by default; it merges the "full context" into a `DS2API_HISTORY.txt` context file. When the plain-text length of the latest user turn reaches `current_input_file.min_chars` (default `0`), the compatibility layer uploads a context file named `DS2API_HISTORY.txt`. The file content is first normalized as OpenAI messages, then serialized into a turn-numbered `DS2API_HISTORY.txt`-style transcript with a `# DS2API_HISTORY.txt` header and `=== N. ROLE ===` sections; the live prompt carries a continuation-style user message that steers the model to pick up from the latest state in `DS2API_HISTORY.txt` and answer the latest request directly, instead of dragging the task back to the start.
 - If `current_input_file.enabled=false`, requests pass through directly without uploading any split context file.
 - The old `history_split.enabled` / `history_split.trigger_after_turns` are still read into the config object for compatibility, but they no longer trigger split uploads and do not affect `current_input_file` being enabled by default.
 - Even after the live prompt is shortened by a `current_input_file` trigger, the context token count reported to the client still follows the **pre-split full-prompt semantics** rather than the shortened placeholder prompt; otherwise the real context would be significantly undercounted.

@@ -263,11 +263,24 @@ OpenAI file-related implementation:
 - Legacy history split compatibility shell:
   [internal/httpapi/openai/history/history_split.go](../internal/httpapi/openai/history/history_split.go)

-When current-input-to-file is enabled and triggered, the real filename of the uploaded file is `history.txt` and its content is the full `messages` context; it is still serialized with OpenAI message normalization and DeepSeek role markers first, then uploaded directly as the plain-text content of `history.txt` (file boundary tags are no longer injected):
+When current-input-to-file is enabled and triggered, the real filename of the uploaded file is `DS2API_HISTORY.txt` and its content is the full `messages` context; it is still serialized with OpenAI message normalization and DeepSeek role markers first, then turn-numbered into a `DS2API_HISTORY.txt`-style transcript (file boundary tags are no longer injected):

```text
-[uploaded filename]: history.txt
-<|begin▁of▁sentence|><|System|>...<|User|>...<|Assistant|>...<|Tool|>...<|User|>...
+[uploaded filename]: DS2API_HISTORY.txt
+# DS2API_HISTORY.txt
+Prior conversation history and tool progress.
+
+=== 1. SYSTEM ===
+...
+
+=== 2. USER ===
+...
+
+=== 3. ASSISTANT ===
+...
+
+=== 4. TOOL ===
+...
```

When enabled, the request's live prompt no longer inlines the full context; instead it keeps a short user-role prompt telling the model to answer the latest request directly based on the context already provided, and the uploaded `file_id` goes into `ref_file_ids`.
@@ -319,7 +332,7 @@ OpenAI file-related implementation:

```json
{
-  "prompt": "<|begin▁of▁sentence|><|System|>original system / developer\n\nYou have access to these tools: ...<|end▁of▁instructions|><|User|>The current request and prior conversation context have already been provided. Answer the latest user request directly.<|Assistant|>",
+  "prompt": "<|begin▁of▁sentence|><|System|>original system / developer\n\nYou have access to these tools: ...<|end▁of▁instructions|><|User|>Continue from the latest state in the attached DS2API_HISTORY.txt context. Treat it as the current working state and answer the latest user request directly.<|Assistant|>",
   "ref_file_ids": [
     "file-current-input-ignore",
     "file-systemprompt",
@@ -334,7 +347,7 @@ OpenAI file-related implementation:

 - Most structured semantics are compressed into `prompt`
 - Files stay files
-- When needed, the full context is split into a `history.txt` context file
+- When needed, the full context is split into a `DS2API_HISTORY.txt` context file and turn-numbered into a transcript

## 12. Scenarios That Require Updating This Document in Sync

@@ -347,7 +360,7 @@ OpenAI file-related implementation:
 - tool result injection changes
 - tool prompt template or tool_choice constraint changes
 - inline file upload / file reference collection rule changes
-- current input file trigger condition, upload format, or `history.txt` wrapper format changes
+- current input file trigger condition, upload format, or `DS2API_HISTORY.txt` transcript structure changes
 - changes to how the legacy `history_split` compatibility logic is read, ignored, or degraded
 - completion payload field semantics changes
 - changes to how Claude / Gemini reuse this unified semantics
@@ -159,6 +159,6 @@ func toStringSet(in []string) map[string]struct{} {

const (
	KeepAliveTimeout  = 5
-	StreamIdleTimeout = 90
-	MaxKeepaliveCount = 10
+	StreamIdleTimeout = 300
+	MaxKeepaliveCount = 40
)
@@ -311,16 +311,16 @@ func TestChatCompletionsCurrentInputFilePersistsNeutralPrompt(t *testing.T) {
 	if len(ds.uploadCalls) != 1 {
 		t.Fatalf("expected current input upload to happen, got %d", len(ds.uploadCalls))
 	}
-	if ds.uploadCalls[0].Filename != "history.txt" {
-		t.Fatalf("expected history.txt upload, got %q", ds.uploadCalls[0].Filename)
+	if ds.uploadCalls[0].Filename != "DS2API_HISTORY.txt" {
+		t.Fatalf("expected DS2API_HISTORY.txt upload, got %q", ds.uploadCalls[0].Filename)
 	}
 	if full.HistoryText != string(ds.uploadCalls[0].Data) {
 		t.Fatalf("expected uploaded current input file to be persisted in history text")
 	}
 	if len(full.Messages) != 1 {
-		t.Fatalf("expected neutral prompt to be the only persisted message, got %#v", full.Messages)
+		t.Fatalf("expected continuation prompt to be the only persisted message, got %#v", full.Messages)
 	}
-	if !strings.Contains(full.Messages[0].Content, "Answer the latest user request directly.") {
-		t.Fatalf("expected neutral prompt to be persisted, got %#v", full.Messages[0])
+	if !strings.Contains(full.Messages[0].Content, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
+		t.Fatalf("expected continuation prompt to be persisted, got %#v", full.Messages[0])
 	}
 }
@@ -173,6 +173,15 @@ func (s *chatStreamRuntime) sendFailedChunk(status int, message, code string) {
 	s.sendDone()
 }

+func (s *chatStreamRuntime) markContextCancelled() {
+	s.finalErrorStatus = 499
+	s.finalErrorMessage = "request context cancelled"
+	s.finalErrorCode = string(streamengine.StopReasonContextCancelled)
+	s.finalThinking = s.thinking.String()
+	s.finalText = cleanVisibleOutput(s.text.String(), s.stripReferenceMarkers)
+	s.finalFinishReason = string(streamengine.StopReasonContextCancelled)
+}
+
 func (s *chatStreamRuntime) resetStreamToolCallState() {
 	s.streamToolCallIDs = map[int]string{}
 	s.streamToolNames = map[int]string{}
@@ -247,11 +247,15 @@ func (h *Handler) consumeChatStreamAttempt(r *http.Request, resp *http.Response,
 			}
 		},
+		OnContextDone: func() {
+			streamRuntime.markContextCancelled()
+			if historySession != nil {
+				historySession.stopped(streamRuntime.thinking.String(), streamRuntime.text.String(), string(streamengine.StopReasonContextCancelled))
+			}
+		},
 	})
+	if streamRuntime.finalErrorCode == string(streamengine.StopReasonContextCancelled) {
+		return true, false
+	}
 	terminalWritten := streamRuntime.finalize(finalReason, allowDeferEmpty && finalReason != "content_filter")
 	if terminalWritten {
 		recordChatStreamHistory(streamRuntime, historySession)

@@ -283,6 +287,10 @@ func logChatStreamTerminal(streamRuntime *chatStreamRuntime, attempts int) {
 	if attempts > 0 {
 		source = "synthetic_retry"
 	}
+	if streamRuntime.finalErrorCode == string(streamengine.StopReasonContextCancelled) {
+		config.Logger.Info("[openai_empty_retry] terminal cancelled", "surface", "chat.completions", "stream", true, "retry_attempts", attempts, "error_code", streamRuntime.finalErrorCode)
+		return
+	}
 	if streamRuntime.finalErrorMessage != "" {
 		config.Logger.Info("[openai_empty_retry] terminal empty output", "surface", "chat.completions", "stream", true, "retry_attempts", attempts, "success_source", "none", "error_code", streamRuntime.finalErrorCode)
 		return
internal/httpapi/openai/chat/empty_retry_runtime_test.go (new file, 85 lines)
@@ -0,0 +1,85 @@
||||
package chat
|
||||
|
||||
import (
|
||||
"context"
|
||||
"net/http"
|
||||
"net/http/httptest"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"ds2api/internal/chathistory"
|
||||
"ds2api/internal/stream"
|
||||
)
|
||||
|
||||
func TestConsumeChatStreamAttemptMarksContextCancelledState(t *testing.T) {
|
||||
historyStore := newTestChatHistoryStore(t)
|
||||
entry, err := historyStore.Start(chathistory.StartParams{
|
||||
CallerID: "caller:test",
|
||||
Model: "deepseek-v4-flash",
|
||||
Stream: true,
|
||||
UserInput: "hello",
|
||||
})
|
||||
if err != nil {
|
||||
t.Fatalf("start history failed: %v", err)
|
||||
}
|
||||
session := &chatHistorySession{
|
||||
store: historyStore,
|
||||
entryID: entry.ID,
|
||||
startedAt: time.Now(),
|
||||
lastPersist: time.Now(),
|
||||
finalPrompt: "prompt",
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithCancel(context.Background())
|
||||
cancel()
|
||||
|
||||
req := httptest.NewRequest(http.MethodPost, "/v1/chat/completions", nil).WithContext(ctx)
|
||||
rec := httptest.NewRecorder()
|
||||
streamRuntime := newChatStreamRuntime(
|
||||
rec,
|
||||
http.NewResponseController(rec),
|
||||
true,
|
||||
"cid-cancelled",
|
||||
time.Now().Unix(),
|
||||
"deepseek-v4-flash",
|
||||
"prompt",
|
||||
false,
|
||||
false,
|
||||
true,
|
||||
nil,
|
||||
nil,
|
||||
false,
|
||||
false,
|
||||
)
|
||||
resp := makeOpenAISSEHTTPResponse(
|
||||
`data: {"p":"response/content","v":"hello"}`,
|
||||
`data: [DONE]`,
|
||||
)
|
||||
|
||||
h := &Handler{}
|
||||
terminalWritten, retryable := h.consumeChatStreamAttempt(req, resp, streamRuntime, "text", false, session, true)
|
||||
if !terminalWritten || retryable {
|
||||
t.Fatalf("expected cancelled attempt to terminate without retry, got terminalWritten=%v retryable=%v", terminalWritten, retryable)
|
||||
}
|
||||
if got, want := streamRuntime.finalErrorCode, string(stream.StopReasonContextCancelled); got != want {
|
||||
t.Fatalf("expected cancelled final error code %q, got %q", want, got)
|
||||
}
|
||||
if streamRuntime.finalErrorMessage == "" {
|
||||
t.Fatalf("expected cancelled final error message to be preserved")
|
||||
}
|
||||
|
||||
snapshot, err := historyStore.Snapshot()
|
||||
if err != nil {
|
||||
t.Fatalf("snapshot failed: %v", err)
|
||||
}
|
||||
if len(snapshot.Items) != 1 {
|
||||
t.Fatalf("expected one history item, got %d", len(snapshot.Items))
|
||||
}
|
||||
full, err := historyStore.Get(snapshot.Items[0].ID)
|
||||
if err != nil {
|
||||
t.Fatalf("get detail failed: %v", err)
|
||||
}
|
||||
if full.Status != "stopped" {
|
||||
t.Fatalf("expected stopped status, got %#v", full)
|
||||
}
|
||||
}
|
||||
@@ -130,8 +130,8 @@ func TestHandleVercelStreamPrepareAppliesCurrentInputFile(t *testing.T) {
 		t.Fatalf("expected payload object, got %#v", body["payload"])
 	}
 	promptText, _ := payload["prompt"].(string)
-	if !strings.Contains(promptText, "Answer the latest user request directly.") {
-		t.Fatalf("expected neutral prompt, got %s", promptText)
+	if !strings.Contains(promptText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
+		t.Fatalf("expected continuation prompt, got %s", promptText)
 	}
 	if strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
 		t.Fatalf("expected original turns hidden from prompt, got %s", promptText)
@@ -62,7 +62,7 @@ func (s Service) ApplyCurrentInputFile(ctx context.Context, a *auth.RequestAuth,
 	stdReq.RefFileIDs = prependUniqueRefFileID(stdReq.RefFileIDs, fileID)
 	stdReq.FinalPrompt, stdReq.ToolNames = promptcompat.BuildOpenAIPrompt(messages, stdReq.ToolsRaw, "", stdReq.ToolChoice, stdReq.Thinking)
 	// Token accounting must reflect the actual downstream context:
-	// the uploaded history.txt file content + the neutral live prompt.
+	// the uploaded DS2API_HISTORY.txt file content + the continuation live prompt.
 	stdReq.PromptTokenText = fileText + "\n" + stdReq.FinalPrompt
 	return stdReq, nil
 }

@@ -87,5 +87,5 @@ func latestUserInputForFile(messages []any) (int, string) {
 }

 func currentInputFilePrompt() string {
-	return "The current request and prior conversation context have already been provided. Answer the latest user request directly."
+	return "Continue from the latest state in the attached DS2API_HISTORY.txt context. Treat it as the current working state and answer the latest user request directly."
 }
@@ -61,26 +61,33 @@ func (streamStatusManagedAuthStub) DetermineCaller(_ *http.Request) (*auth.Reque

 func (streamStatusManagedAuthStub) Release(_ *auth.RequestAuth) {}

-func TestBuildOpenAICurrentInputContextTranscriptUsesInjectedFileWrapper(t *testing.T) {
+func TestBuildOpenAICurrentInputContextTranscriptUsesNumberedHistorySections(t *testing.T) {
 	_, historyMessages := splitOpenAIHistoryMessages(historySplitTestMessages(), 1)
 	transcript := buildOpenAICurrentInputContextTranscript(historyMessages)

 	if strings.Contains(transcript, "[file content end]") || strings.Contains(transcript, "[file content begin]") || strings.Contains(transcript, "[file name]:") {
-		t.Fatalf("expected plain transcript without file wrapper tags, got %q", transcript)
+		t.Fatalf("expected transcript without file wrapper tags, got %q", transcript)
 	}
-	if !strings.Contains(transcript, "<|begin▁of▁sentence|>") {
-		t.Fatalf("expected serialized conversation markers, got %q", transcript)
+	if !strings.Contains(transcript, "# DS2API_HISTORY.txt") {
+		t.Fatalf("expected history transcript header, got %q", transcript)
 	}
-	if !strings.Contains(transcript, "first user turn") || !strings.Contains(transcript, "tool result") {
-		t.Fatalf("expected historical turns preserved, got %q", transcript)
+	if !strings.Contains(transcript, "Prior conversation history and tool progress.") {
+		t.Fatalf("expected history transcript description, got %q", transcript)
 	}
-	if !strings.Contains(transcript, "[reasoning_content]") || !strings.Contains(transcript, "hidden reasoning") {
-		t.Fatalf("expected reasoning block preserved, got %q", transcript)
-	}
-	if !strings.Contains(transcript, "<|DSML|tool_calls>") {
-		t.Fatalf("expected tool calls preserved, got %q", transcript)
+	for _, want := range []string{
+		"=== 1. USER ===",
+		"=== 2. ASSISTANT ===",
+		"=== 3. TOOL ===",
+		"first user turn",
+		"tool result",
+		"[reasoning_content]",
+		"hidden reasoning",
+		"<|DSML|tool_calls>",
+	} {
+		if !strings.Contains(transcript, want) {
+			t.Fatalf("expected transcript to contain %q, got %q", want, transcript)
+		}
 	}
 }

 func TestSplitOpenAIHistoryMessagesUsesLatestUserTurn(t *testing.T) {
@@ -243,7 +250,7 @@ func TestApplyCurrentInputFileDisabledPassThrough(t *testing.T) {
 	}
 }

-func TestApplyCurrentInputFileUploadsFirstTurnWithInjectedWrapper(t *testing.T) {
+func TestApplyCurrentInputFileUploadsFirstTurnWithNumberedHistoryTranscript(t *testing.T) {
 	ds := &inlineUploadDSStub{}
 	h := &openAITestSurface{
 		Store: mockOpenAIConfig{

@@ -273,15 +280,21 @@ func TestApplyCurrentInputFileUploadsFirstTurnWithInjectedWrapper(t *testing.T)
 		t.Fatalf("expected 1 current input upload, got %d", len(ds.uploadCalls))
 	}
 	upload := ds.uploadCalls[0]
-	if upload.Filename != "history.txt" {
+	if upload.Filename != "DS2API_HISTORY.txt" {
 		t.Fatalf("unexpected upload filename: %q", upload.Filename)
 	}
 	uploadedText := string(upload.Data)
 	if strings.Contains(uploadedText, "[file content end]") || strings.Contains(uploadedText, "[file content begin]") || strings.Contains(uploadedText, "[file name]:") {
 		t.Fatalf("expected uploaded transcript without file wrapper tags, got %q", uploadedText)
 	}
-	if !strings.Contains(uploadedText, "<|begin▁of▁sentence|><|User|>first turn content that is long enough") {
-		t.Fatalf("expected serialized current user turn markers, got %q", uploadedText)
+	for _, want := range []string{
+		"# DS2API_HISTORY.txt",
+		"=== 1. USER ===",
+		"first turn content that is long enough",
+	} {
+		if !strings.Contains(uploadedText, want) {
+			t.Fatalf("expected uploaded transcript to contain %q, got %q", want, uploadedText)
+		}
 	}
 	if !strings.Contains(uploadedText, promptcompat.ThinkingInjectionMarker) {
 		t.Fatalf("expected thinking injection in current input file, got %q", uploadedText)

@@ -290,11 +303,11 @@ func TestApplyCurrentInputFileUploadsFirstTurnWithInjectedWrapper(t *testing.T)
 	if strings.Contains(out.FinalPrompt, "first turn content that is long enough") {
 		t.Fatalf("expected current input text to be replaced in live prompt, got %s", out.FinalPrompt)
 	}
-	if strings.Contains(out.FinalPrompt, "CURRENT_USER_INPUT.txt") || strings.Contains(out.FinalPrompt, "history.txt") || strings.Contains(out.FinalPrompt, "Read that file") {
+	if strings.Contains(out.FinalPrompt, "CURRENT_USER_INPUT.txt") || strings.Contains(out.FinalPrompt, "Read that file") {
 		t.Fatalf("expected live prompt not to instruct file reads, got %s", out.FinalPrompt)
 	}
-	if !strings.Contains(out.FinalPrompt, "Answer the latest user request directly.") {
-		t.Fatalf("expected neutral continuation instruction in live prompt, got %s", out.FinalPrompt)
+	if !strings.Contains(out.FinalPrompt, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
+		t.Fatalf("expected continuation-oriented prompt in live prompt, got %s", out.FinalPrompt)
 	}
 	if len(out.RefFileIDs) != 1 || out.RefFileIDs[0] != "file-inline-1" {
 		t.Fatalf("expected current input file id in ref_file_ids, got %#v", out.RefFileIDs)

@@ -302,6 +315,9 @@ func TestApplyCurrentInputFileUploadsFirstTurnWithInjectedWrapper(t *testing.T)
 	if !strings.Contains(out.PromptTokenText, "first turn content that is long enough") {
 		t.Fatalf("expected prompt token text to preserve original full context, got %q", out.PromptTokenText)
 	}
+	if !strings.Contains(out.PromptTokenText, "# DS2API_HISTORY.txt") || !strings.Contains(out.PromptTokenText, "=== 1. USER ===") {
+		t.Fatalf("expected prompt token text to include numbered history transcript, got %q", out.PromptTokenText)
+	}
 }

 func TestApplyCurrentInputFilePreservesFullContextPromptForTokenCounting(t *testing.T) {
@@ -337,10 +353,13 @@ func TestApplyCurrentInputFilePreservesFullContextPromptForTokenCounting(t *test
 		t.Fatalf("expected prompt token text to contain file context with full conversation, got %q", out.PromptTokenText)
 	}
 	if strings.Contains(out.PromptTokenText, "[file content end]") || strings.Contains(out.PromptTokenText, "[file name]:") {
-		t.Fatalf("expected prompt token text to use raw transcript without wrapper tags, got %q", out.PromptTokenText)
+		t.Fatalf("expected prompt token text to omit file wrapper tags, got %q", out.PromptTokenText)
 	}
-	if !strings.Contains(out.PromptTokenText, "Answer the latest user request directly.") {
-		t.Fatalf("expected prompt token text to also include neutral live prompt, got %q", out.PromptTokenText)
+	if !strings.Contains(out.PromptTokenText, "# DS2API_HISTORY.txt") || !strings.Contains(out.PromptTokenText, "=== 1. SYSTEM ===") {
+		t.Fatalf("expected prompt token text to include numbered history transcript, got %q", out.PromptTokenText)
+	}
+	if !strings.Contains(out.PromptTokenText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
+		t.Fatalf("expected prompt token text to also include continuation prompt, got %q", out.PromptTokenText)
 	}
 	if strings.Contains(out.FinalPrompt, "first user turn") || strings.Contains(out.FinalPrompt, "latest user turn") {
 		t.Fatalf("expected live prompt to hide original turns, got %q", out.FinalPrompt)
@@ -378,20 +397,20 @@ func TestApplyCurrentInputFileUploadsFullContextFile(t *testing.T) {
 		t.Fatalf("expected one current input upload, got %d", len(ds.uploadCalls))
 	}
 	upload := ds.uploadCalls[0]
-	if upload.Filename != "history.txt" {
-		t.Fatalf("expected history.txt upload, got %q", upload.Filename)
+	if upload.Filename != "DS2API_HISTORY.txt" {
+		t.Fatalf("expected DS2API_HISTORY.txt upload, got %q", upload.Filename)
 	}
 	uploadedText := string(upload.Data)
-	for _, want := range []string{"system instructions", "first user turn", "hidden reasoning", "tool result", "latest user turn", promptcompat.ThinkingInjectionMarker} {
+	for _, want := range []string{"# DS2API_HISTORY.txt", "=== 1. SYSTEM ===", "=== 2. USER ===", "=== 3. ASSISTANT ===", "=== 4. TOOL ===", "=== 5. USER ===", "system instructions", "first user turn", "hidden reasoning", "tool result", "latest user turn", promptcompat.ThinkingInjectionMarker} {
 		if !strings.Contains(uploadedText, want) {
 			t.Fatalf("expected full context file to contain %q, got %q", want, uploadedText)
 		}
 	}
-	if strings.Contains(out.FinalPrompt, "first user turn") || strings.Contains(out.FinalPrompt, "latest user turn") || strings.Contains(out.FinalPrompt, "CURRENT_USER_INPUT.txt") || strings.Contains(out.FinalPrompt, "history.txt") || strings.Contains(out.FinalPrompt, "Read that file") {
-		t.Fatalf("expected live prompt to use only a neutral continuation instruction, got %s", out.FinalPrompt)
+	if strings.Contains(out.FinalPrompt, "first user turn") || strings.Contains(out.FinalPrompt, "latest user turn") || strings.Contains(out.FinalPrompt, "CURRENT_USER_INPUT.txt") || strings.Contains(out.FinalPrompt, "Read that file") {
+		t.Fatalf("expected live prompt to use only a continuation instruction, got %s", out.FinalPrompt)
 	}
-	if !strings.Contains(out.FinalPrompt, "Answer the latest user request directly.") {
-		t.Fatalf("expected neutral continuation instruction in live prompt, got %s", out.FinalPrompt)
+	if !strings.Contains(out.FinalPrompt, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
+		t.Fatalf("expected continuation-oriented prompt in live prompt, got %s", out.FinalPrompt)
 	}
 }
@@ -423,6 +442,9 @@ func TestApplyCurrentInputFileCarriesHistoryText(t *testing.T) {
 	if out.HistoryText != string(ds.uploadCalls[0].Data) {
 		t.Fatalf("expected current input file flow to preserve uploaded text in history, got %q", out.HistoryText)
 	}
+	if !strings.Contains(out.HistoryText, "# DS2API_HISTORY.txt") || !strings.Contains(out.HistoryText, "=== 1. SYSTEM ===") {
+		t.Fatalf("expected history text to use numbered transcript format, got %q", out.HistoryText)
+	}
 }

 func TestChatCompletionsCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *testing.T) {
@@ -454,7 +476,7 @@ func TestChatCompletionsCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *t
 		t.Fatalf("expected 1 upload call, got %d", len(ds.uploadCalls))
 	}
 	upload := ds.uploadCalls[0]
-	if upload.Filename != "history.txt" {
+	if upload.Filename != "DS2API_HISTORY.txt" {
 		t.Fatalf("unexpected upload filename: %q", upload.Filename)
 	}
 	if upload.Purpose != "assistants" {

@@ -462,7 +484,10 @@ func TestChatCompletionsCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *t
 	}
 	historyText := string(upload.Data)
 	if strings.Contains(historyText, "[file content end]") || strings.Contains(historyText, "[file content begin]") || strings.Contains(historyText, "[file name]:") {
-		t.Fatalf("expected plain history transcript without wrapper tags, got %s", historyText)
+		t.Fatalf("expected history transcript without file wrapper tags, got %s", historyText)
 	}
+	if !strings.Contains(historyText, "# DS2API_HISTORY.txt") || !strings.Contains(historyText, "=== 1. SYSTEM ===") {
+		t.Fatalf("expected history transcript to use numbered sections, got %s", historyText)
+	}
 	if !strings.Contains(historyText, "latest user turn") {
 		t.Fatalf("expected full context to include latest turn, got %s", historyText)

@@ -471,8 +496,8 @@ func TestChatCompletionsCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *t
 		t.Fatal("expected completion payload to be captured")
 	}
 	promptText, _ := ds.completionReq["prompt"].(string)
-	if !strings.Contains(promptText, "Answer the latest user request directly.") {
-		t.Fatalf("expected neutral completion prompt, got %s", promptText)
+	if !strings.Contains(promptText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
+		t.Fatalf("expected continuation-oriented prompt, got %s", promptText)
 	}
 	if strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
 		t.Fatalf("expected prompt to hide original turns, got %s", promptText)
@@ -523,12 +548,16 @@ func TestResponsesCurrentInputFileUploadsContextAndKeepsNeutralPrompt(t *testing
 	if len(ds.uploadCalls) != 1 {
 		t.Fatalf("expected 1 upload call, got %d", len(ds.uploadCalls))
 	}
 	historyText := string(ds.uploadCalls[0].Data)
+	if !strings.Contains(historyText, "# DS2API_HISTORY.txt") || !strings.Contains(historyText, "=== 1. SYSTEM ===") {
+		t.Fatalf("expected uploaded history text to use numbered transcript format, got %s", historyText)
+	}
 	if ds.completionReq == nil {
 		t.Fatal("expected completion payload to be captured")
 	}
 	promptText, _ := ds.completionReq["prompt"].(string)
-	if !strings.Contains(promptText, "Answer the latest user request directly.") {
-		t.Fatalf("expected neutral completion prompt, got %s", promptText)
+	if !strings.Contains(promptText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") {
+		t.Fatalf("expected continuation-oriented prompt, got %s", promptText)
 	}
 	if strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
 		t.Fatalf("expected prompt to hide original turns, got %s", promptText)
@@ -669,11 +698,15 @@ func TestCurrentInputFileWorksAcrossAutoDeleteModes(t *testing.T) {
 		if len(ds.uploadCalls) != 1 {
 			t.Fatalf("expected current input upload for mode=%s, got %d", mode, len(ds.uploadCalls))
 		}
 		historyText := string(ds.uploadCalls[0].Data)
+		if !strings.Contains(historyText, "# DS2API_HISTORY.txt") || !strings.Contains(historyText, "=== 1. SYSTEM ===") {
+			t.Fatalf("expected uploaded history text to use numbered transcript format, got %s", historyText)
+		}
 		if ds.completionReq == nil {
 			t.Fatalf("expected completion payload for mode=%s", mode)
 		}
 		promptText, _ := ds.completionReq["prompt"].(string)
-		if !strings.Contains(promptText, "Answer the latest user request directly.") || strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
+		if !strings.Contains(promptText, "Continue from the latest state in the attached DS2API_HISTORY.txt context.") || strings.Contains(promptText, "first user turn") || strings.Contains(promptText, "latest user turn") {
 			t.Fatalf("unexpected prompt for mode=%s: %s", mode, promptText)
 		}
 	})
@@ -222,7 +222,13 @@ func (h *Handler) consumeResponsesStreamAttempt(r *http.Request, resp *http.Resp
 				finalReason = "content_filter"
 			}
 		},
+		OnContextDone: func() {
+			streamRuntime.markContextCancelled()
+		},
 	})
+	if streamRuntime.finalErrorCode == string(streamengine.StopReasonContextCancelled) {
+		return true, false
+	}
 	terminalWritten := streamRuntime.finalize(finalReason, allowDeferEmpty && finalReason != "content_filter")
 	if terminalWritten {
 		return true, false
@@ -235,6 +241,10 @@ func logResponsesStreamTerminal(streamRuntime *responsesStreamRuntime, attempts
 	if attempts > 0 {
 		source = "synthetic_retry"
 	}
+	if streamRuntime.finalErrorCode == string(streamengine.StopReasonContextCancelled) {
+		config.Logger.Info("[openai_empty_retry] terminal cancelled", "surface", "responses", "stream", true, "retry_attempts", attempts, "error_code", streamRuntime.finalErrorCode)
+		return
+	}
 	if streamRuntime.failed {
 		config.Logger.Info("[openai_empty_retry] terminal empty output", "surface", "responses", "stream", true, "retry_attempts", attempts, "success_source", "none", "error_code", streamRuntime.finalErrorCode)
 		return

@@ -0,0 +1,70 @@
+package responses
+
+import (
+	"context"
+	"io"
+	"net/http"
+	"net/http/httptest"
+	"strings"
+	"testing"
+
+	"ds2api/internal/promptcompat"
+	"ds2api/internal/stream"
+)
+
+func makeResponsesOpenAISSEHTTPResponse(lines ...string) *http.Response {
+	body := strings.Join(lines, "\n")
+	if !strings.HasSuffix(body, "\n") {
+		body += "\n"
+	}
+	return &http.Response{
+		StatusCode: http.StatusOK,
+		Header:     make(http.Header),
+		Body:       io.NopCloser(strings.NewReader(body)),
+	}
+}
+
+func TestConsumeResponsesStreamAttemptMarksContextCancelledState(t *testing.T) {
+	ctx, cancel := context.WithCancel(context.Background())
+	cancel()
+
+	req := httptest.NewRequest(http.MethodPost, "/v1/responses", nil).WithContext(ctx)
+	rec := httptest.NewRecorder()
+	streamRuntime := newResponsesStreamRuntime(
+		rec,
+		http.NewResponseController(rec),
+		true,
+		"resp-cancelled",
+		"deepseek-v4-flash",
+		"prompt",
+		false,
+		false,
+		true,
+		nil,
+		nil,
+		false,
+		false,
+		promptcompat.DefaultToolChoicePolicy(),
+		"",
+		nil,
+	)
+	resp := makeResponsesOpenAISSEHTTPResponse(
+		`data: {"p":"response/content","v":"hello"}`,
+		`data: [DONE]`,
+	)
+
+	h := &Handler{}
+	terminalWritten, retryable := h.consumeResponsesStreamAttempt(req, resp, streamRuntime, "text", false, true)
+	if !terminalWritten || retryable {
+		t.Fatalf("expected cancelled attempt to terminate without retry, got terminalWritten=%v retryable=%v", terminalWritten, retryable)
+	}
+	if !streamRuntime.failed {
+		t.Fatalf("expected cancelled response stream to be marked failed")
+	}
+	if got, want := streamRuntime.finalErrorCode, string(stream.StopReasonContextCancelled); got != want {
+		t.Fatalf("expected cancelled final error code %q, got %q", want, got)
+	}
+	if streamRuntime.finalErrorMessage == "" {
+		t.Fatalf("expected cancelled final error message to be preserved")
+	}
+}

@@ -139,6 +139,13 @@ func (s *responsesStreamRuntime) failResponse(status int, message, code string)
 	s.sendDone()
 }
 
+func (s *responsesStreamRuntime) markContextCancelled() {
+	s.failed = true
+	s.finalErrorStatus = 499
+	s.finalErrorMessage = "request context cancelled"
+	s.finalErrorCode = string(streamengine.StopReasonContextCancelled)
+}
+
 func (s *responsesStreamRuntime) finalize(finishReason string, deferEmptyOutput bool) bool {
 	s.failed = false
 	s.finalErrorStatus = 0

@@ -1,35 +1,108 @@
 package promptcompat
 
 import (
+	"fmt"
 	"strings"
 
 	"ds2api/internal/prompt"
 )
 
-const CurrentInputContextFilename = "history.txt"
+const CurrentInputContextFilename = "DS2API_HISTORY.txt"
 
+const historyTranscriptTitle = "# DS2API_HISTORY.txt"
+const historyTranscriptSummary = "Prior conversation history and tool progress."
+
 func BuildOpenAIHistoryTranscript(messages []any) string {
-	return buildOpenAIInjectedFileTranscript(messages)
+	return buildOpenAIHistoryTranscript(messages)
 }
 
 func BuildOpenAICurrentUserInputTranscript(text string) string {
 	if strings.TrimSpace(text) == "" {
 		return ""
 	}
-	return BuildOpenAICurrentInputContextTranscript([]any{
+	return buildOpenAIHistoryTranscript([]any{
 		map[string]any{"role": "user", "content": text},
 	})
 }
 
 func BuildOpenAICurrentInputContextTranscript(messages []any) string {
-	return buildOpenAIInjectedFileTranscript(messages)
+	return buildOpenAIHistoryTranscript(messages)
 }
 
-func buildOpenAIInjectedFileTranscript(messages []any) string {
-	normalized := NormalizeOpenAIMessagesForPrompt(messages, "")
-	transcript := strings.TrimSpace(prompt.MessagesPrepare(normalized))
+func buildOpenAIHistoryTranscript(messages []any) string {
+	if len(messages) == 0 {
+		return ""
+	}
+	var b strings.Builder
+	b.WriteString(historyTranscriptTitle)
+	b.WriteString("\n")
+	b.WriteString(historyTranscriptSummary)
+	b.WriteString("\n\n")
+
+	entry := 0
+	for _, raw := range messages {
+		msg, ok := raw.(map[string]any)
+		if !ok {
+			continue
+		}
+		role := normalizeOpenAIRoleForPrompt(strings.ToLower(strings.TrimSpace(asString(msg["role"]))))
+		content := strings.TrimSpace(buildOpenAIHistoryEntry(role, msg))
+		if content == "" {
+			continue
+		}
+		entry++
+		fmt.Fprintf(&b, "=== %d. %s ===\n%s\n\n", entry, strings.ToUpper(roleLabelForHistory(role)), content)
+	}
+
+	transcript := strings.TrimSpace(b.String())
 	if transcript == "" {
 		return ""
 	}
-	return transcript
+	return transcript + "\n"
+}
+
+func buildOpenAIHistoryEntry(role string, msg map[string]any) string {
+	switch role {
+	case "assistant":
+		return strings.TrimSpace(buildAssistantContentForPrompt(msg))
+	case "tool", "function":
+		return strings.TrimSpace(buildToolHistoryContent(msg))
+	case "system", "user":
+		return strings.TrimSpace(NormalizeOpenAIContentForPrompt(msg["content"]))
+	default:
+		return strings.TrimSpace(NormalizeOpenAIContentForPrompt(msg["content"]))
+	}
+}
+
+func buildToolHistoryContent(msg map[string]any) string {
+	content := strings.TrimSpace(NormalizeOpenAIContentForPrompt(msg["content"]))
+	parts := make([]string, 0, 2)
+	if name := strings.TrimSpace(asString(msg["name"])); name != "" {
+		parts = append(parts, "name="+name)
+	}
+	if callID := strings.TrimSpace(asString(msg["tool_call_id"])); callID != "" {
+		parts = append(parts, "tool_call_id="+callID)
+	}
+	header := ""
+	if len(parts) > 0 {
+		header = "[" + strings.Join(parts, " ") + "]"
+	}
+	switch {
+	case header != "" && content != "":
+		return header + "\n" + content
+	case header != "":
+		return header
+	default:
+		return content
+	}
+}
+
+func roleLabelForHistory(role string) string {
+	role = strings.ToLower(strings.TrimSpace(role))
+	switch role {
+	case "function":
+		return "tool"
+	case "":
+		return "unknown"
+	default:
+		return role
+	}
+}

@@ -1,4 +1,4 @@
-{"code":0,"msg":"","data":{"biz_code":0,"biz_msg":"","biz_data":{"id":"file-b10a2aca-39e9-4a38-be9d-9f22e398cb62","status":"PENDING","file_name":"history.txt","from_share":false,"file_size":732,"model_kind":"NORMAL","token_usage":null,"error_code":null,"inserted_at":1777485015.255,"updated_at":1777485015.255,"is_image":false,"audit_result":null}}}
+{"code":0,"msg":"","data":{"biz_code":0,"biz_msg":"","biz_data":{"id":"file-b10a2aca-39e9-4a38-be9d-9f22e398cb62","status":"PENDING","file_name":"DS2API_HISTORY.txt","from_share":false,"file_size":732,"model_kind":"NORMAL","token_usage":null,"error_code":null,"inserted_at":1777485015.255,"updated_at":1777485015.255,"is_image":false,"audit_result":null}}}
 event: ready
 data: {"request_message_id":1,"response_message_id":2,"model_type":"default"}
 

@@ -1,4 +1,4 @@
-{"code":0,"msg":"","data":{"biz_code":0,"biz_msg":"","biz_data":{"id":"file-9c8ae986-75f7-4611-9956-5e1b502f3ec2","status":"SUCCESS","file_name":"history.txt","from_share":false,"file_size":732,"model_kind":"NORMAL","token_usage":145,"error_code":null,"inserted_at":1777485076.42,"updated_at":1777485076.42,"signed_path":"/file?file_id=9c8ae986-75f7-4611-9956-5e1b502f3ec2&state=a1REa2AdO8JmDuxMFiUTPJfpiyY4ie2weyUpYxfvEOrk5lxUCZifpRw9toZAEzn3DAjkgbR6blgZf41KLkHBKwwrcYTIjfxTRKijDqjEfguis03yddpuVrii6keG4%2BXIlcLAsyZG3qcGhfTGVZhsr%2BRl17J%2BcnT9roslhxBcEy4rthFJVMWUI%2BSHjuo2gLEUDfvMfULQ1gSLVGtr%2Fpq%2FcNPCPSxZapIQv04ZVmJLcdbzRkz%2Bb%2BxM5RWUIPujp%2B3ke1WDa3%2B6S4pP0Pv%2BAJ0MFUjQsloUwO4AsJ8YhGBFWg8Ehe1b2yt1N%2Fi%2BIjLRPt5xiNmALcJJXIY%3D","is_image":false,"audit_result":null}}}
+{"code":0,"msg":"","data":{"biz_code":0,"biz_msg":"","biz_data":{"id":"file-9c8ae986-75f7-4611-9956-5e1b502f3ec2","status":"SUCCESS","file_name":"DS2API_HISTORY.txt","from_share":false,"file_size":732,"model_kind":"NORMAL","token_usage":145,"error_code":null,"inserted_at":1777485076.42,"updated_at":1777485076.42,"signed_path":"/file?file_id=9c8ae986-75f7-4611-9956-5e1b502f3ec2&state=a1REa2AdO8JmDuxMFiUTPJfpiyY4ie2weyUpYxfvEOrk5lxUCZifpRw9toZAEzn3DAjkgbR6blgZf41KLkHBKwwrcYTIjfxTRKijDqjEfguis03yddpuVrii6keG4%2BXIlcLAsyZG3qcGhfTGVZhsr%2BRl17J%2BcnT9roslhxBcEy4rthFJVMWUI%2BSHjuo2gLEUDfvMfULQ1gSLVGtr%2Fpq%2FcNPCPSxZapIQv04ZVmJLcdbzRkz%2Bb%2BxM5RWUIPujp%2B3ke1WDa3%2B6S4pP0Pv%2BAJ0MFUjQsloUwO4AsJ8YhGBFWg8Ehe1b2yt1N%2Fi%2BIjLRPt5xiNmALcJJXIY%3D","is_image":false,"audit_result":null}}}
 event: ready
 data: {"request_message_id":1,"response_message_id":2,"model_type":"expert"}
 

vercel.json (16 lines changed)
@@ -81,6 +81,22 @@
       "source": "/admin/version",
       "destination": "/api/index"
     },
+    {
+      "source": "/admin/chat-history(.*)",
+      "destination": "/api/index"
+    },
+    {
+      "source": "/admin/proxies(.*)",
+      "destination": "/api/index"
+    },
+    {
+      "source": "/admin/dev/raw-samples/(.*)",
+      "destination": "/api/index"
+    },
+    {
+      "source": "/admin/dev/captures(.*)",
+      "destination": "/api/index"
+    },
     {
       "source": "/admin",
       "destination": "/admin/index.html"

@@ -16,7 +16,15 @@ const TOOL_MARKER = '<|Tool|>'
 const END_INSTRUCTIONS_MARKER = '<|end▁of▁instructions|>'
 const END_SENTENCE_MARKER = '<|end▁of▁sentence|>'
 const END_TOOL_RESULTS_MARKER = '<|end▁of▁toolresults|>'
-const CURRENT_INPUT_FILE_PROMPT = 'The current request and prior conversation context have already been provided. Answer the latest user request directly.'
+const CURRENT_INPUT_FILE_PROMPT = 'Continue from the latest state in the attached DS2API_HISTORY.txt context. Treat it as the current working state and answer the latest user request directly.'
+const LEGACY_CURRENT_INPUT_FILE_PROMPTS = new Set([
+  'The current request and prior conversation context have already been provided. Answer the latest user request directly.',
+])
+
+function isCurrentInputFilePrompt(value) {
+  const text = String(value || '').trim()
+  return text === CURRENT_INPUT_FILE_PROMPT || LEGACY_CURRENT_INPUT_FILE_PROMPTS.has(text)
+}
 
 function formatDateTime(value, lang) {
   if (!value) return '-'
@@ -312,7 +320,7 @@ function buildListModeMessages(item, t) {
 
   const placeholderOnly = liveMessages.length === 1
     && String(liveMessages[0]?.role || '').trim().toLowerCase() === 'user'
-    && String(liveMessages[0]?.content || '').trim() === CURRENT_INPUT_FILE_PROMPT
+    && isCurrentInputFilePrompt(liveMessages[0]?.content)
 
   if (placeholderOnly) {
     return { messages: historyMessages, historyMerged: true }

@@ -394,7 +394,7 @@
     "thinkingInjectionPromptHelp": "Leave empty to use the built-in default prompt shown as the input placeholder.",
     "currentInputFileTitle": "Independent Split",
     "currentInputFileEnabled": "Independent split (by size)",
-    "currentInputFileDesc": "Enabled by default. Once the character threshold is reached, upload the full context as a history.txt context file.",
+    "currentInputFileDesc": "Enabled by default. Once the character threshold is reached, upload the full context as a DS2API_HISTORY.txt context file.",
     "currentInputFileMinChars": "Current input threshold (characters)",
     "currentInputFileHelp": "Default is 0, which uses independent split for any non-empty input.",
     "compatibilityTitle": "Compatibility",
@@ -485,4 +485,4 @@
       "four": "Trigger a redeploy to apply the updated environment variables."
     }
   }
-}
+}

@@ -394,7 +394,7 @@
     "thinkingInjectionPromptHelp": "留空时使用内置默认提示词;默认内容会显示在输入框占位文本中。",
     "currentInputFileTitle": "独立拆分",
     "currentInputFileEnabled": "独立拆分(按量)",
-    "currentInputFileDesc": "默认开启。达到字符阈值后,将完整上下文上传为 history.txt 上下文文件。",
+    "currentInputFileDesc": "默认开启。达到字符阈值后,将完整上下文上传为 DS2API_HISTORY.txt 上下文文件。",
     "currentInputFileMinChars": "当前输入阈值(字符数)",
     "currentInputFileHelp": "默认 0,表示只要有输入就会使用独立拆分。",
     "compatibilityTitle": "兼容性设置",
@@ -485,4 +485,4 @@
       "four": "触发重新部署以应用新的环境变量。"
     }
   }
-}
+}
