mirror of
https://github.com/CJackHwang/ds2api.git
synced 2026-05-05 00:45:29 +08:00
feat: implement support for thinking blocks in Gemini API and enable thinking by default for supported models
@@ -555,7 +555,7 @@ data: {"type":"message_stop"}
 
 **Notes**:
 
-- Models whose names contain `opus` / `reasoner` / `slow` stream `thinking_delta`
+- Models that support thinking emit `thinking` blocks / `thinking_delta` by default; explicit thinking disablement or `-nothinking` models suppress them
 - `signature_delta` is not emitted (DeepSeek does not provide verifiable thinking signatures)
 - In `tools` mode, the stream avoids leaking raw tool JSON and does not force `input_json_delta`
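For orientation, a thinking block on the Claude surface streams roughly as follows. This is a hypothetical fragment following Anthropic's public SSE framing for `thinking` blocks; the exact payloads emitted by ds2api may differ:

```
event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"thinking","thinking":""}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"thinking_delta","thinking":"reasoning text"}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}
```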
@@ -601,6 +601,7 @@ Request body accepts Gemini-style `contents` / `tools`. Model names can use aliases.
 
 Response uses Gemini-compatible fields, including:
 
 - `candidates[].content.parts[].text`
+- `candidates[].content.parts[].thought=true` for thinking output
 - `candidates[].content.parts[].functionCall` (when a tool call is produced)
 - `usageMetadata` (`promptTokenCount` / `candidatesTokenCount` / `totalTokenCount`)
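As a sketch, a non-stream response carrying both a thought part and a visible answer could look like the following. All values are illustrative; the thought-first part ordering matches the tests later in this commit:

```json
{
  "candidates": [
    {
      "index": 0,
      "content": {
        "role": "model",
        "parts": [
          {"text": "internal reasoning", "thought": true},
          {"text": "final answer"}
        ]
      },
      "finishReason": "STOP"
    }
  ],
  "usageMetadata": {"promptTokenCount": 10, "candidatesTokenCount": 20, "totalTokenCount": 30}
}
```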
@@ -609,6 +610,7 @@ Response uses Gemini-compatible fields, including:
 
 Returns SSE (`text/event-stream`), each chunk as `data: <json>`:
 
 - regular text: incremental text chunks
+- thinking: incremental chunks with `parts[].thought=true`
 - `tools` mode: buffered and emitted as `functionCall` at the finalize phase
 - final chunk: includes `finishReason: "STOP"` and `usageMetadata`
 - Token counting prefers pass-through from upstream DeepSeek SSE (`accumulated_token_usage` / `token_usage`), and only falls back to local estimation when upstream usage is absent
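Putting those rules together, a stream with thinking enabled might emit chunks like these (a hypothetical trace; the model name, token counts, and exact shape of the final chunk are illustrative):

```
data: {"candidates":[{"index":0,"content":{"role":"model","parts":[{"text":"think","thought":true}]}}],"modelVersion":"gemini-2.5-pro"}

data: {"candidates":[{"index":0,"content":{"role":"model","parts":[{"text":"answer"}]}}],"modelVersion":"gemini-2.5-pro"}

data: {"candidates":[{"index":0,"content":{"role":"model","parts":[{"text":""}]},"finishReason":"STOP"}],"usageMetadata":{"promptTokenCount":3,"candidatesTokenCount":5,"totalTokenCount":8}}
```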
API.md (4 changes)
@@ -561,7 +561,7 @@ data: {"type":"message_stop"}
 
 **Notes**:
 
-- By default, models emit thinking / reasoning deltas according to each surface's existing rules
+- Models that support thinking by default emit `thinking` blocks / `thinking_delta`; nothing is emitted when the request explicitly disables thinking or a `-nothinking` model is used
 - Models with the `-nothinking` suffix force thinking off; even if the request explicitly passes `thinking` / `reasoning` / `reasoning_effort`, no `thinking_delta` is emitted
 - `signature_delta` is not emitted (upstream DeepSeek provides no verifiable signatures)
 - In `tools` scenarios the stream avoids leaking raw tool JSON and does not force `input_json_delta`
@@ -608,6 +608,7 @@ data: {"type":"message_stop"}
 
 The response is a Gemini-compatible structure; core fields include:
 
 - `candidates[].content.parts[].text`
+- `candidates[].content.parts[].thought=true` (thinking output)
 - `candidates[].content.parts[].functionCall` (on tool calls)
 - `usageMetadata` (`promptTokenCount` / `candidatesTokenCount` / `totalTokenCount`)
@@ -616,6 +617,7 @@ data: {"type":"message_stop"}
 
 Returns SSE (`text/event-stream`); each chunk is one `data: <json>` line:
 
 - regular text: incremental text chunks are streamed continuously
+- thinking: incremental chunks with `parts[].thought=true`
 - `tools` scenario: output is buffered and emitted as a `functionCall` structure at the end
 - final chunk: includes `finishReason: "STOP"` and `usageMetadata`
 - token counting prefers pass-through of upstream DeepSeek SSE usage (e.g. `accumulated_token_usage` / `token_usage`); it falls back to local estimation only when upstream usage is absent
@@ -109,7 +109,7 @@ DS2API's current core approach is not to take the client-supplied `messages` / `tools` …
 - But the DeepSeek remote itself supports continued multi-turn conversation on the same `chat_session_id`. On 2026-04-27 a two-turn live test was run with the project's existing DeepSeek client, without touching business code: under the same `chat_session_id`, turn 1 returned `request_message_id=1` / `response_message_id=2` / text `SESSION_TEST_ONE`; turn 2 re-acquired a PoW, sent `parent_message_id=2`, and successfully returned `request_message_id=3` / `response_message_id=4` / text `SESSION_TEST_TWO`. This shows that "continued chat on the same remote session" works, that each turn must carry the correct parent/message linkage, and that a PoW valid for that turn must be re-acquired.
 - OpenAI Chat / Responses natively go through unified OpenAI normalization and DeepSeek payload assembly; Claude / Gemini reuse the OpenAI prompt/tool semantics wherever possible, with Gemini directly reusing `promptcompat.BuildOpenAIPromptForAdapter`. The Go main service adds a `completionruntime` startup layer that uniformly performs the DeepSeek session/PoW/call; on the output side a new `assistantturn` semantic layer is added: non-stream OpenAI Chat / Responses / Claude / Gemini first normalize the collected DeepSeek SSE results into a single assistant turn and then render it into each protocol's native shape, while streaming OpenAI Chat / Responses / Claude / Gemini keep each protocol's real-time SSE framing but leave the final tool fallback, schema normalization, usage, and empty-output / content-filter error semantics to the same `assistantturn` decisions. The regular Go main path for Claude / Gemini no longer relies on internal `httptest` forwarding to the OpenAI handler; `translatorcliproxy` is retained for the Vercel bridge, compatibility tooling, and regression tests.
 - The Vercel Node streaming path is not migrated in this round and keeps the existing Node bridge / stream-tool-sieve implementation; any later change to Node streaming semantics must be aligned with the Go canonical output semantics of `assistantturn`.
-- Client-supplied thinking / reasoning switches are normalized into the downstream `thinking_enabled`. Gemini's `generationConfig.thinkingConfig.thinkingBudget` is translated into the same thinking switch; when thinking is off, the compatibility layer does not surface upstream `response/thinking_content` as visible body text. If the finally resolved model name carries the `-nothinking` suffix, thinking is unconditionally forced off, taking precedence over `thinking` / `reasoning` / `reasoning_effort` in the request body. For streaming requests without an explicit `thinking` declaration, the Claude surface still defaults to off per Anthropic semantics; in the non-stream proxy scenario, however, the compatibility layer internally enables downstream thinking once to catch the case where the body is empty and the tool call landed inside thinking, then strips the user-invisible thinking blocks before replying.
+- Client-supplied thinking / reasoning switches are normalized into the downstream `thinking_enabled`. Gemini's `generationConfig.thinkingConfig.thinkingBudget` is translated into the same thinking switch; when thinking is off, the compatibility layer does not surface upstream `response/thinking_content` as visible body text. If the finally resolved model name carries the `-nothinking` suffix, thinking is unconditionally forced off, taking precedence over `thinking` / `reasoning` / `reasoning_effort` in the request body. When not explicitly disabled, each surface enables thinking according to the resolved DeepSeek model's default capability and exposes it in that protocol's native form: `reasoning_content` for OpenAI Chat, `response.reasoning.delta` / `reasoning` content for OpenAI Responses, `thinking` blocks / `thinking_delta` for Claude, and `thought: true` parts for Gemini.
 - For the non-stream finalization of OpenAI Chat / Responses, if the final visible body is empty, the compatibility layer first tries to parse standalone DSML / XML tool blocks inside the chain of thought as real tool calls. The streaming path performs the same fallback detection at finalize but never intercepts or rewrites the stream mid-flight because of chain-of-thought content; real tool detection is always based on the raw upstream text rather than on a version already sanitized for visible output, so even though the final visible layer strips complete leaked DSML / XML `tool_calls` wrappers and suppresses all-empty-argument or invalid wrapper blocks, converting real tool calls into structured `tool_calls` / `function_call` is unaffected. Recovered results are returned as this turn's structured `tool_calls` / `function_call` output rather than stuffed into `content` text; if the client did not enable thinking / reasoning, the chain of thought is used only for detection and is never exposed as `reasoning_content` or visible body. Only when the body is empty and the chain of thought contains no executable tool call does handling continue as an empty-reply error.
 - Before raising an empty-reply error, OpenAI Chat / Responses does one internal compensating retry by default (sketched below): after the first upstream run completes, if the final visible body is empty, no tool call was parsed, no tool call has already been streamed to the client, and the finish reason is not `content_filter`, the compatibility layer reuses the same `chat_session_id`, account, token, and tool policy, appends the fixed suffix `Previous reply had no visible output. Please regenerate the visible final answer or tool call now.` to the original completion `prompt`, and resubmits once. The retry follows the DeepSeek multi-turn protocol: it extracts `response_message_id` from the first upstream SSE stream and sets `parent_message_id` to that value in the retry payload, making the retry a follow-up turn of the same session rather than a detached root message; it also re-acquires a PoW, falling back to the original PoW if acquisition fails. The retry does not re-normalize messages, create a new session, switch accounts, or insert retry markers into the client stream; the second round's thinking / reasoning is appended directly after the first as normal increments and deduplicated with overlap trim. If the second attempt is still empty, the terminal error code remains the existing `upstream_empty_output`; if either attempt hits an empty `content_filter`, no compensating retry is performed and the `content_filter` error is kept. The JS Vercel runtime also sets `parent_message_id` but reuses the original PoW because it cannot call the PoW API directly.
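A minimal Go sketch of the compensating-retry payload described in the last bullet. `buildRetryPayload` is a hypothetical helper, not the project's actual code; the payload is modeled as a generic map and the newline before the fixed suffix is an assumption. The suffix text and the `parent_message_id` linkage come straight from the notes above:

```go
package retrysketch

// buildRetryPayload illustrates the compensating retry: the same session,
// account, token, and tool policy are reused; only the prompt gains the fixed
// suffix, and parent_message_id points at the first attempt's
// response_message_id so the retry becomes the next turn of the same session.
func buildRetryPayload(orig map[string]any, firstResponseMessageID int64) map[string]any {
	retry := make(map[string]any, len(orig)+1)
	for k, v := range orig {
		retry[k] = v // chat_session_id, tools, etc. carry over unchanged
	}
	prompt, _ := orig["prompt"].(string)
	retry["prompt"] = prompt + "\n" +
		"Previous reply had no visible output. Please regenerate the visible final answer or tool call now."
	retry["parent_message_id"] = firstResponseMessageID
	return retry
}
```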
@@ -27,11 +27,32 @@ func TestNormalizeClaudeRequestUsesGlobalAliasMapping(t *testing.T) {
 	if out.Standard.ResolvedModel != "deepseek-v4-pro-search" {
 		t.Fatalf("resolved model mismatch: got=%q", out.Standard.ResolvedModel)
 	}
-	if out.Standard.Thinking || !out.Standard.Search {
+	if !out.Standard.Thinking || !out.Standard.Search {
 		t.Fatalf("unexpected flags: thinking=%v search=%v", out.Standard.Thinking, out.Standard.Search)
 	}
 }
 
+func TestNormalizeClaudeRequestDisablesThinkingWhenRequested(t *testing.T) {
+	req := map[string]any{
+		"model": "claude-opus-4-6",
+		"messages": []any{
+			map[string]any{"role": "user", "content": "hello"},
+		},
+		"thinking": map[string]any{"type": "disabled"},
+	}
+	out, err := normalizeClaudeRequest(mockClaudeConfig{
+		aliases: map[string]string{
+			"claude-opus-4-6": "deepseek-v4-pro",
+		},
+	}, req)
+	if err != nil {
+		t.Fatalf("normalizeClaudeRequest error: %v", err)
+	}
+	if out.Standard.Thinking {
+		t.Fatalf("expected explicit Claude thinking disable to win")
+	}
+}
+
 func TestNormalizeClaudeRequestEnablesThinkingWhenRequested(t *testing.T) {
 	req := map[string]any{
 		"model": "claude-opus-4-6",
@@ -67,17 +67,12 @@ func (h *Handler) handleClaudeDirect(w http.ResponseWriter, r *http.Request) bool {
 		writeClaudeError(w, http.StatusBadRequest, "invalid json")
 		return true
 	}
-	exposeThinking := false
-	if enabled, ok := util.ResolveThinkingOverride(req); ok && enabled {
-		exposeThinking = true
-	} else if _, ok := util.ResolveThinkingOverride(req); !ok && !util.ToBool(req["stream"]) {
-		req["thinking"] = map[string]any{"type": "enabled"}
-	}
 	norm, err := normalizeClaudeRequest(h.Store, req)
 	if err != nil {
 		writeClaudeError(w, http.StatusBadRequest, err.Error())
 		return true
 	}
+	exposeThinking := norm.Standard.Thinking
 	a, err := h.Auth.Determine(r)
 	if err != nil {
 		writeClaudeError(w, http.StatusUnauthorized, err.Error())
@@ -140,7 +135,7 @@ func (h *Handler) proxyViaOpenAI(w http.ResponseWriter, r *http.Request, store C
 		}
 	}
 	translatedReq := translatorcliproxy.ToOpenAI(sdktranslator.FormatClaude, translateModel, raw, stream)
-	translatedReq, exposeThinking := applyClaudeThinkingPolicyToOpenAIRequest(translatedReq, req, stream)
+	translatedReq, exposeThinking := applyClaudeThinkingPolicyToOpenAIRequest(translatedReq, req)
 
 	isVercelPrepare := strings.TrimSpace(r.URL.Query().Get("__stream_prepare")) == "1"
 	isVercelRelease := strings.TrimSpace(r.URL.Query().Get("__stream_release")) == "1"
@@ -215,7 +210,7 @@ func (h *Handler) proxyViaOpenAI(w http.ResponseWriter, r *http.Request, store C
 	return true
 }
 
-func applyClaudeThinkingPolicyToOpenAIRequest(translated []byte, original map[string]any, stream bool) ([]byte, bool) {
+func applyClaudeThinkingPolicyToOpenAIRequest(translated []byte, original map[string]any) ([]byte, bool) {
 	req := map[string]any{}
 	if err := json.Unmarshal(translated, &req); err != nil {
 		return translated, false
@@ -225,7 +220,7 @@ func applyClaudeThinkingPolicyToOpenAIRequest(translated []byte, original map[string]any, stream bool) ([]byte, bool) {
 		if _, translatedHasOverride := util.ResolveThinkingOverride(req); translatedHasOverride {
 			return translated, false
 		}
-		enabled = !stream
+		enabled = true
 	}
 	typ := "disabled"
 	if enabled {
@@ -234,9 +229,9 @@ func applyClaudeThinkingPolicyToOpenAIRequest(translated []byte, original map[string]any, stream bool) ([]byte, bool) {
 	req["thinking"] = map[string]any{"type": typ}
 	out, err := json.Marshal(req)
 	if err != nil {
-		return translated, ok && enabled
+		return translated, enabled
 	}
-	return out, ok && enabled
+	return out, enabled
 }
 
 func stripClaudeThinkingBlocks(raw []byte) []byte {
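The net effect of the two hunks above, summarized as a sketch (request bodies are hypothetical; `exposeThinking` is the function's second return value):

```go
// Old: no explicit thinking override -> enabled = !stream (streaming defaulted to thinking off)
// New: no explicit thinking override -> enabled = true    (thinking on by default, stream or not)
// An explicit {"thinking":{"type":"disabled"}} in the original Claude request still wins,
// and disabled thinking continues to strip thinking blocks from the converted response.
```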
@@ -166,7 +166,7 @@ func TestClaudeProxyViaOpenAIEnablesThinkingWhenRequested(t *testing.T) {
 	}
 }
 
-func TestClaudeProxyViaOpenAIKeepsStreamDefaultThinkingDisabled(t *testing.T) {
+func TestClaudeProxyViaOpenAIEnablesStreamThinkingByDefault(t *testing.T) {
 	openAI := &openAIProxyCaptureStub{}
 	h := &Handler{
 		Store: claudeProxyStoreStub{aliases: map[string]string{"claude-sonnet-4-6": "deepseek-v4-flash"}},
@@ -178,12 +178,12 @@ func TestClaudeProxyViaOpenAIKeepsStreamDefaultThinkingDisabled(t *testing.T) {
 	h.Messages(rec, req)
 
 	thinking, _ := openAI.seenReq["thinking"].(map[string]any)
-	if thinking["type"] != "disabled" {
-		t.Fatalf("expected Claude stream default to keep downstream thinking disabled, got %#v", openAI.seenReq)
+	if thinking["type"] != "enabled" {
+		t.Fatalf("expected Claude stream default to enable downstream thinking, got %#v", openAI.seenReq)
 	}
 }
 
-func TestClaudeProxyViaOpenAIStripsThinkingBlocksFromNonStreamResponse(t *testing.T) {
+func TestClaudeProxyViaOpenAIExposesThinkingBlocksByDefault(t *testing.T) {
 	body := `{"id":"chatcmpl_1","object":"chat.completion","created":1,"model":"claude-sonnet-4-5","choices":[{"index":0,"message":{"role":"assistant","content":null,"reasoning_content":"internal reasoning","tool_calls":[{"id":"call_1","type":"function","function":{"name":"search","arguments":"{\"q\":\"x\"}"}}]},"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":1,"completion_tokens":1,"total_tokens":2}}`
 	h := &Handler{OpenAI: openAIProxyStub{status: 200, body: body}}
 	req := httptest.NewRequest(http.MethodPost, "/anthropic/v1/messages", strings.NewReader(`{"model":"claude-sonnet-4-5","messages":[{"role":"user","content":"hi"}],"stream":false}`))
@@ -195,14 +195,31 @@ func TestClaudeProxyViaOpenAIStripsThinkingBlocksFromNonStreamResponse(t *testin
 		t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
 	}
 	got := rec.Body.String()
-	if strings.Contains(got, `"type":"thinking"`) {
-		t.Fatalf("expected converted Claude response to strip thinking block, got %s", got)
+	if !strings.Contains(got, `"type":"thinking"`) {
+		t.Fatalf("expected converted Claude response to expose thinking block, got %s", got)
 	}
 	if !strings.Contains(got, `"tool_use"`) {
 		t.Fatalf("expected converted Claude response to preserve tool_use, got %s", got)
 	}
 }
 
+func TestClaudeProxyViaOpenAIStripsThinkingBlocksWhenDisabled(t *testing.T) {
+	body := `{"id":"chatcmpl_1","object":"chat.completion","created":1,"model":"claude-sonnet-4-5","choices":[{"index":0,"message":{"role":"assistant","content":"ok","reasoning_content":"internal reasoning"},"finish_reason":"stop"}],"usage":{"prompt_tokens":1,"completion_tokens":1,"total_tokens":2}}`
+	h := &Handler{OpenAI: openAIProxyStub{status: 200, body: body}}
+	req := httptest.NewRequest(http.MethodPost, "/anthropic/v1/messages", strings.NewReader(`{"model":"claude-sonnet-4-5","messages":[{"role":"user","content":"hi"}],"thinking":{"type":"disabled"},"stream":false}`))
+	rec := httptest.NewRecorder()
+
+	h.Messages(rec, req)
+
+	if rec.Code != http.StatusOK {
+		t.Fatalf("unexpected status: %d body=%s", rec.Code, rec.Body.String())
+	}
+	got := rec.Body.String()
+	if strings.Contains(got, `"type":"thinking"`) {
+		t.Fatalf("expected disabled thinking to strip thinking block, got %s", got)
+	}
+}
+
 func TestClaudeProxyTranslatesInlineImageToOpenAIDataURL(t *testing.T) {
 	openAI := &openAIProxyCaptureStub{}
 	h := &Handler{OpenAI: openAI}
@@ -32,11 +32,11 @@ func normalizeClaudeRequest(store ConfigReader, req map[string]any) (claudeNorma
 
 	dsPayload := convertClaudeToDeepSeek(payload, store)
 	dsModel, _ := dsPayload["model"].(string)
-	_, searchEnabled, ok := config.GetModelConfig(dsModel)
+	defaultThinkingEnabled, searchEnabled, ok := config.GetModelConfig(dsModel)
 	if !ok {
 		searchEnabled = false
 	}
-	thinkingEnabled := util.ResolveThinkingEnabled(req, false)
+	thinkingEnabled := util.ResolveThinkingEnabled(req, defaultThinkingEnabled)
 	if config.IsNoThinkingModel(dsModel) {
 		thinkingEnabled = false
 	}
@@ -343,8 +343,17 @@ func buildGeminiGenerateContentResponseFromTurn(turn assistantturn.Turn) map[str
 }
 
 func buildGeminiPartsFromTurn(turn assistantturn.Turn) []map[string]any {
+	thinkingPart := func() []map[string]any {
+		if turn.Thinking == "" {
+			return nil
+		}
+		return []map[string]any{{"text": turn.Thinking, "thought": true}}
+	}
 	if len(turn.ToolCalls) > 0 {
-		parts := make([]map[string]any, 0, len(turn.ToolCalls))
+		parts := thinkingPart()
+		if parts == nil {
+			parts = make([]map[string]any, 0, len(turn.ToolCalls))
+		}
 		for _, tc := range turn.ToolCalls {
 			parts = append(parts, map[string]any{
 				"functionCall": map[string]any{
@@ -355,11 +364,14 @@ func buildGeminiPartsFromTurn(turn assistantturn.Turn) []map[string]any {
 		}
 		return parts
 	}
-	text := turn.Text
-	if text == "" {
-		text = turn.Thinking
+	parts := thinkingPart()
+	if turn.Text != "" {
+		parts = append(parts, map[string]any{"text": turn.Text})
 	}
-	return []map[string]any{{"text": text}}
+	if len(parts) == 0 {
+		parts = append(parts, map[string]any{"text": ""})
+	}
+	return parts
 }
 
 //nolint:unused // retained for native Gemini non-stream handling path.
@@ -380,8 +392,17 @@ func buildGeminiPartsFromFinal(finalText, finalThinking string, toolNames []stri
 	if len(detected) == 0 && finalThinking != "" {
 		detected = toolcall.ParseToolCalls(finalThinking, toolNames)
 	}
+	thinkingPart := func() []map[string]any {
+		if finalThinking == "" {
+			return nil
+		}
+		return []map[string]any{{"text": finalThinking, "thought": true}}
+	}
 	if len(detected) > 0 {
-		parts := make([]map[string]any, 0, len(detected))
+		parts := thinkingPart()
+		if parts == nil {
+			parts = make([]map[string]any, 0, len(detected))
+		}
 		for _, tc := range detected {
 			parts = append(parts, map[string]any{
 				"functionCall": map[string]any{
@@ -393,9 +414,12 @@ func buildGeminiPartsFromFinal(finalText, finalThinking string, toolNames []stri
 		return parts
 	}
 
-	text := finalText
-	if text == "" {
-		text = finalThinking
+	parts := thinkingPart()
+	if finalText != "" {
+		parts = append(parts, map[string]any{"text": finalText})
 	}
-	return []map[string]any{{"text": text}}
+	if len(parts) == 0 {
+		parts = append(parts, map[string]any{"text": ""})
+	}
+	return parts
 }
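For a concrete picture of the new ordering in both helpers, mirroring `TestBuildGeminiPartsFromFinalIncludesThoughtPart` further down (values illustrative):

```go
// buildGeminiPartsFromFinal("answer", "think", nil) now yields, in order:
//
//	[]map[string]any{
//		{"text": "think", "thought": true}, // thought part first
//		{"text": "answer"},                 // then the visible answer
//	}
//
// With no thinking, no text, and no tool calls, both helpers fall back to a
// single {"text": ""} part.
```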
@@ -134,6 +134,21 @@ func (s *geminiStreamRuntime) onParsed(parsed sse.LineResult) streamengine.Parse
 	accumulated := s.accumulator.Apply(parsed)
 	for _, p := range accumulated.Parts {
+		if p.Type == "thinking" {
+			if p.VisibleText == "" || s.bufferContent {
+				continue
+			}
+			s.sendChunk(map[string]any{
+				"candidates": []map[string]any{
+					{
+						"index": 0,
+						"content": map[string]any{
+							"role":  "model",
+							"parts": []map[string]any{{"text": p.VisibleText, "thought": true}},
+						},
+					},
+				},
+				"modelVersion": s.model,
+			})
+			continue
+		}
 		if p.RawText == "" || p.CitationOnly || p.VisibleText == "" {
@@ -257,6 +257,56 @@ func TestStreamGenerateContentEmitsSSE(t *testing.T) {
 	}
 }
 
+func TestNativeStreamGenerateContentEmitsThoughtParts(t *testing.T) {
+	h := &Handler{}
+	resp := makeGeminiUpstreamResponse(
+		`data: {"p":"response/thinking_content","v":"think"}`,
+		`data: {"p":"response/content","v":"answer"}`,
+		`data: [DONE]`,
+	)
+	rec := httptest.NewRecorder()
+	req := httptest.NewRequest(http.MethodPost, "/v1beta/models/gemini-2.5-pro:streamGenerateContent", nil)
+
+	h.handleStreamGenerateContent(rec, req, resp, "gemini-2.5-pro", "prompt", true, false, nil, nil)
+
+	frames := extractGeminiSSEFrames(t, rec.Body.String())
+	if len(frames) < 2 {
+		t.Fatalf("expected thought and text stream frames, body=%s", rec.Body.String())
+	}
+	var gotThought, gotText string
+	for _, frame := range frames {
+		for _, part := range geminiPartsFromFrame(frame) {
+			if part["thought"] == true {
+				gotThought += asString(part["text"])
+			} else {
+				gotText += asString(part["text"])
+			}
+		}
+	}
+	if gotThought != "think" {
+		t.Fatalf("expected thought part, got %q body=%s", gotThought, rec.Body.String())
+	}
+	if !strings.Contains(gotText, "answer") {
+		t.Fatalf("expected text part answer, got %q body=%s", gotText, rec.Body.String())
+	}
+}
+
+func TestBuildGeminiPartsFromFinalIncludesThoughtPart(t *testing.T) {
+	parts := buildGeminiPartsFromFinal("answer", "think", nil)
+	if len(parts) != 2 {
+		t.Fatalf("expected thought + answer parts, got %#v", parts)
+	}
+	if parts[0]["thought"] != true || parts[0]["text"] != "think" {
+		t.Fatalf("expected first part to be thought, got %#v", parts[0])
+	}
+	if _, ok := parts[1]["thought"]; ok {
+		t.Fatalf("expected second part to be visible text, got %#v", parts[1])
+	}
+	if parts[1]["text"] != "answer" {
+		t.Fatalf("expected answer text, got %#v", parts[1])
+	}
+}
+
 func TestGeminiProxyTranslatesInlineImageToOpenAIDataURL(t *testing.T) {
 	openAI := &geminiOpenAISuccessStub{}
 	h := &Handler{Store: testGeminiConfig{}, OpenAI: openAI}
@@ -396,3 +446,21 @@ func extractGeminiSSEFrames(t *testing.T, body string) []map[string]any {
 	}
 	return out
 }
+
+func geminiPartsFromFrame(frame map[string]any) []map[string]any {
+	candidates, _ := frame["candidates"].([]any)
+	if len(candidates) == 0 {
+		return nil
+	}
+	c0, _ := candidates[0].(map[string]any)
+	content, _ := c0["content"].(map[string]any)
+	rawParts, _ := content["parts"].([]any)
+	parts := make([]map[string]any, 0, len(rawParts))
+	for _, raw := range rawParts {
+		part, _ := raw.(map[string]any)
+		if part != nil {
+			parts = append(parts, part)
+		}
+	}
+	return parts
+}