Codecov Report
❌ Patch coverage is

Additional details and impacted files

```
@@            Coverage Diff             @@
##           master    #1443      +/-   ##
==========================================
+ Coverage   68.95%   69.06%   +0.10%
==========================================
  Files         362      362
  Lines       28072    28191     +119
==========================================
+ Hits        19358    19470     +112
- Misses       7857     7861       +4
- Partials      857      860       +3
```
Pull request overview
This PR introduces tool calling to the framework AI module, allowing models to request function/tool invocations during a conversation and enabling providers (OpenAI) to send tool definitions and parse tool-call responses.
Changes:
- Add `Tool`/`ToolCall` contracts and extend message/response types to carry tool-call data.
- Implement a tool-call → execute → re-prompt loop in `Conversation.Prompt()` with an iteration guard.
- Extend the OpenAI provider to serialize tool definitions and deserialize tool-call responses; add tests covering tool request/response handling.
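The full loop is not shown in this conversation; a minimal, self-contained sketch of the control flow might look like the following. The `Tool`, `ToolCall`, and `Response` types here are simplified stand-ins for the `contracts/ai` interfaces, and `promptLoop`/`send` are hypothetical names for illustration only.

```go
package main

import (
	"errors"
	"fmt"
)

// Simplified stand-ins for the contracts/ai types; the real interfaces
// carry more methods (Description, Parameters, context, etc.).
type ToolCall struct {
	Name string
	Args map[string]any
}

type Tool interface {
	Name() string
	Execute(args map[string]any) (string, error)
}

type Response struct {
	Text      string
	ToolCalls []ToolCall
}

const MaxToolCallIterations = 10

// promptLoop sketches the tool-call → execute → re-prompt cycle with an
// iteration guard. send stands in for one provider round trip; in the real
// code each tool result is appended to the conversation history before
// re-prompting.
func promptLoop(send func() (Response, error), tools map[string]Tool) (string, error) {
	for i := 0; i < MaxToolCallIterations; i++ {
		resp, err := send()
		if err != nil {
			return "", err
		}
		if len(resp.ToolCalls) == 0 {
			return resp.Text, nil // plain text: the loop ends
		}
		for _, call := range resp.ToolCalls {
			tool, ok := tools[call.Name]
			if !ok {
				return "", fmt.Errorf("ai: tool %q not found", call.Name)
			}
			if _, err := tool.Execute(call.Args); err != nil {
				return "", fmt.Errorf("ai: tool %q execution failed: %w", call.Name, err)
			}
		}
	}
	return "", errors.New("ai: tool call loop exceeded iterations")
}

// echoTool is a trivial example tool used by main below.
type echoTool struct{}

func (echoTool) Name() string                                { return "echo" }
func (echoTool) Execute(args map[string]any) (string, error) { return "ok", nil }

func main() {
	calls := 0
	// Fake provider: the first response requests a tool call, the second
	// returns the final text.
	send := func() (Response, error) {
		calls++
		if calls == 1 {
			return Response{ToolCalls: []ToolCall{{Name: "echo"}}}, nil
		}
		return Response{Text: "done"}, nil
	}
	out, err := promptLoop(send, map[string]Tool{"echo": echoTool{}})
	fmt.Println(out, err, calls) // done <nil> 2
}
```

This is only a shape sketch of the behavior the review describes, not the actual `Conversation.Prompt()` implementation.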
Reviewed changes
Copilot reviewed 15 out of 15 changed files in this pull request and generated 7 comments.
Show a summary per file
| File | Description |
|---|---|
| mocks/ai/Tool.go | Adds mock for the new contracts/ai.Tool interface. |
| mocks/ai/Response.go | Extends response mock to support ToolCalls(). |
| mocks/ai/Agent.go | Extends agent mock to support Tools(). |
| contracts/ai/tool.go | Introduces Tool interface and ToolCall type. |
| contracts/ai/response.go | Extends Response with ToolCalls(). |
| contracts/ai/provider.go | Adds Tools to AgentPrompt for provider requests. |
| contracts/ai/message.go | Adds ToolCallID/ToolCalls fields to conversation history messages. |
| contracts/ai/agent.go | Extends Agent with Tools(). |
| ai/streamable_response_test.go | Updates test response stub to satisfy new Response interface. |
| ai/openai/response.go | Adds tool-call storage and accessor to OpenAI response type. |
| ai/openai/provider_tool_test.go | Adds OpenAI provider tests for tool definitions, tool-call parsing, and history mapping. |
| ai/openai/provider.go | Adds tool serialization, tool-call parsing, and tool-aware message building. |
| ai/conversation_test.go | Adds tests for tool-call execution loop and updates existing conversation tests. |
| ai/conversation.go | Implements tool-call/re-prompt loop and tool execution into message history. |
| ai/application_test.go | Updates application tests for the new Agent.Tools() and Response.ToolCalls() expectations. |
- buildMessages: only append prompt.Input when non-empty to avoid sending an empty user message during tool-call re-prompts
- parseToolCalls: initialize args to an empty map instead of nil so tool implementations receive consistent inputs on JSON decode failure
- conversation.Prompt: delay committing the user message to r.messages until the entire prompt/tool loop succeeds, eliminating the duplicate user message that appeared in both Agent.Messages() and AgentPrompt.Input on the first provider call
- conversation.Prompt: add promptMu to serialize concurrent Prompt calls, preventing a rollback in one call from corrupting history written by a concurrent call
- conversation.Stream: remove Tools from AgentPrompt since the streaming path does not support tool calling yet
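The parseToolCalls fix (non-nil args on decode failure) can be illustrated with a small sketch; `parseArgs` here is a hypothetical helper, not the actual `parseToolCalls` from `ai/openai/provider.go`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// parseArgs decodes a tool-call arguments payload. The map is allocated up
// front so callers receive an empty, non-nil map even when the JSON is
// malformed, mirroring the parseToolCalls fix described above.
func parseArgs(raw string) map[string]any {
	args := map[string]any{}
	// Ignore the error: on failure, args stays an empty map rather than nil.
	_ = json.Unmarshal([]byte(raw), &args)
	return args
}

func main() {
	fmt.Println(parseArgs(`{"city":"Oslo"}`)) // map[city:Oslo]
	fmt.Println(parseArgs(`{broken`) == nil)  // false: empty map, not nil
}
```

A nil map still supports reads in Go, but returning a consistently non-nil map means tool implementations never need to distinguish "no args" from "undecodable args".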
```go
if i == MaxToolCallIterations-1 {
	clearPending()
	return nil, fmt.Errorf("ai: tool call loop exceeded %d iterations", MaxToolCallIterations)
```
The error should be moved to the errors/list.go file.
```go
	err error
)

for i := range MaxToolCallIterations {
```
If there are many different tools and the model needs to call all of them, it's very easy to exceed the max limit, right?
```go
result, err := tool.Execute(r.ctx, call.Args)
if err != nil {
	return nil, fmt.Errorf("ai: tool %q execution failed: %w", call.Name, err)
```

```go
for _, call := range calls {
	tool, ok := index[call.Name]
	if !ok {
		return nil, fmt.Errorf("ai: tool %q not found", call.Name)
```
Pull request overview
This PR introduces first-class tool calling in the AI module, enabling models to request function/tool invocations mid-conversation and letting the framework execute those tools and continue the interaction until a final text response is produced.
Changes:
- Added
ToolandToolCallcontracts, plus tool-call metadata onMessageandResponse. - Implemented a tool-call/re-prompt loop in
Conversation.Prompt()with an iteration guard and tool dispatching. - Updated the OpenAI provider to send tool definitions and parse tool-call responses; added mocks and tests for the new behavior.
Reviewed changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| contracts/ai/tool.go | Adds Tool interface and ToolCall type for tool calling. |
| contracts/ai/agent.go | Extends Agent contract with Tools(). |
| contracts/ai/provider.go | Extends AgentPrompt with Tools. |
| contracts/ai/response.go | Extends Response contract with ToolCalls(). |
| contracts/ai/message.go | Adds tool-call metadata (ToolCalls, ToolCallID) to message history. |
| ai/conversation.go | Implements tool-call execution loop, guard (MaxToolCallIterations), and concurrency serialization for Prompt(). |
| ai/conversation_test.go | Adds/updates tests for tool-call loop behavior and history handling. |
| ai/openai/provider.go | Serializes tool definitions to requests, parses tool-call responses, and replays tool-call history. |
| ai/openai/response.go | Stores tool calls on OpenAI responses to satisfy the new Response contract. |
| ai/openai/provider_tool_test.go | New tests covering OpenAI tool request serialization, tool-call parsing, and message history mapping. |
| ai/streamable_response_test.go | Updates test response stub to satisfy new Response.ToolCalls() method. |
| ai/application_test.go | Updates application tests to account for agent tools and response tool calls. |
| errors/list.go | Adds new sentinel errors for tool-call loop guard and tool execution failures. |
| mocks/ai/Tool.go | Adds mock for the new contracts/ai.Tool interface. |
| mocks/ai/Agent.go | Updates agent mock to include Tools(). |
| mocks/ai/Response.go | Updates response mock to include ToolCalls(). |
| AGENTS.md | Documents repository guidance for centralizing errors in errors/list.go. |
| .github/copilot-instructions.md | Updates error-handling guidance to require centralized sentinel errors. |
```go
	}
}
if prompt.Input != "" {
	messages = append(messages, goopenai.UserMessage(prompt.Input))
```
```go
AIToolCallLoopExceeded = New("ai: tool call loop exceeded %d iterations")
AIToolExecutionFailed  = New("ai: tool %q execution failed: %v")
AIToolNotFound         = New("ai: tool %q not found")
```
```go
agent.EXPECT().Tools().Return(nil).Once()

var response *mocksai.Response
if tt.expectResponse {
	response = mocksai.NewResponse(t)
	response.EXPECT().ToolCalls().Return(nil).Once()
	response.EXPECT().Text().Return(tt.responseText).Once()
}

provider.EXPECT().
	Prompt(ctx, expectedPrompt).
	Return(response, tt.promptErr).
	Once()
```
Summary
- Add `Tool` interface and `ToolCall` type to `contracts/ai` so agents can expose callable capabilities to the model
- `Conversation.Prompt()` now executes a tool-call/re-prompt loop (up to `MaxToolCallIterations = 10`) when the model requests tool invocations
- The OpenAI provider serializes tool definitions into requests and parses tool-call responses into `ToolCall` values

Closes goravel/goravel#917
Why
Goravel's AI module previously had no mechanism for models to call external functions during a conversation. Without tool calling, agents cannot retrieve live data, perform side-effects, or compose multi-step reasoning across external systems.
The new `Tool` interface (`Name`, `Description`, `Parameters`, `Execute`) lets any agent expose callable capabilities. `Conversation.Prompt()` drives a loop: after each model response it checks for `ToolCalls`, dispatches them, appends the results to conversation history, and re-prompts, stopping only when the model returns plain text or the iteration guard fires.