
agentic llm client: allow multi-question calls and response #529

Open
academo wants to merge 11 commits into main from academo/validator-llm-hackathon-go-langchain

Conversation

@academo
Collaborator

@academo academo commented Mar 4, 2026

The agent is still not used anywhere, but this PR improves on the previous version:

  • You can send an array of prompts, and each answer is matched to its question, instead of sending a single prompt that bundles (possibly) many questions.
  • Add support for OpenAI.
  • Fix the Anthropic tool-specific flow.

@github-project-automation github-project-automation bot moved this from 📬 Triage to 🔬 In review in Grafana Catalog Team Mar 4, 2026
@tolzhabayev
Contributor

Code review

Found 1 issue:

  1. docs/anthropic-choices-behavior.md is stale: this document describes the go-langchain architecture (ContentChoice, handleAIMessage(), Parts[0] serialization bug) that was removed in this same PR. The new native-SDK implementation in agentic_client.go no longer uses any of these constructs. The document should be removed or rewritten to reflect the current architecture.

# Anthropic Choices and Message Serialization in go-langchain
## Overview
Anthropic's response structure and go-langchain's serialization behavior require special handling when building multi-turn conversations with tool use.
## Response Structure (Anthropic → go-langchain)
Anthropic API returns responses as an array of **content blocks**:
```
[text_block, tool_use_block, tool_use_block, ...]
```
go-langchain converts each content block into a **separate ContentChoice**:
- `type: "text"` → `ContentChoice{Content: "...", ToolCalls: []}`
- `type: "tool_use"` → `ContentChoice{Content: "", ToolCalls: [{...}]}`
- `type: "thinking"` → `ContentChoice{Content: "", GenerationInfo: {...}}`
**Key insight:** One Anthropic response can produce multiple Choices. For example:
- Response with text + 2 tool calls → 3 Choices
- Response with just text → 1 Choice
## Serialization Constraint (go-langchain → Anthropic)
The critical limitation is in `handleAIMessage()`:
```go
if toolCall, ok := msg.Parts[0].(llms.ToolCall); ok {
	// Only Parts[0] is serialized!
}
```
**This means:**
- Only `Parts[0]` of a MessageContent is serialized back to Anthropic
- If you create `MessageContent{Parts: [toolCall1, toolCall2]}`, only `toolCall1` is sent
- Multiple ToolCalls in one message **will lose data**
## Required Pattern: Interleaved Messages
To work around this limitation, tool calls must be **interleaved** as separate messages:
```
AI message: Parts[toolCall1]
Tool message: Parts[toolResult1]
AI message: Parts[toolCall2]
Tool message: Parts[toolResult2]
```
Not:
```
AI message: Parts[toolCall1, toolCall2] // toolCall2 would be lost!
Tool message: Parts[toolResult1, toolResult2]
```
## Why Merging Choices is Necessary
When processing Anthropic's response:
1. Anthropic returns separate content blocks (potentially text + multiple tools)
2. go-langchain creates one Choice per block
3. We must merge these Choices to get the complete response
4. Then we must split them back into individual AI messages for serialization
The merge preserves all information for processing, but the split ensures proper serialization.
## Implementation Details in agentic_client.go
The choice-merging code performs this merge:
- Collects all content parts from separate Choices
- Collects all ToolCalls from separate Choices
- Creates one merged view for processing
Then later in the tool call processing, it **reverses** this by creating one AI message per ToolCall to avoid the serialization bug.

🤖 Generated with Claude Code


@tolzhabayev
Contributor

the above was a test :)

