@IanSteno (Contributor) commented Feb 7, 2026

Summary

When max_tool_steps is reached during a tool-calling loop, the agent currently silently stops — the _pipeline_reply_task returns without generating a final LLM response. This leaves users (both voice and text) with no feedback after potentially multiple tool executions.

This PR changes the behavior so that when the limit is hit, the last recursive call to _pipeline_reply_task uses tool_choice="none", forcing the LLM to produce a text response using whatever tool results have been gathered so far, instead of silently returning.

Problem

In agent_activity.py, the tool execution loop checks:

if speech_handle.num_steps >= self._session.options.max_tool_steps + 1:
    logger.warning("maximum number of function calls steps reached")
    return  # <-- Silent stop. No response to user.

This means:

  • The user asks a question requiring multiple tool calls
  • The agent executes tools up to the limit
  • When the limit is hit, it just stops — no "here's what I found" response
  • The user sees silence (voice) or no message (text)

Solution

Instead of a hard return, the fix:

  1. Removes the early return
  2. Captures max_steps_reached as a boolean flag
  3. Adds max_steps_reached to the existing tool_choice condition
  4. When max steps is reached, tool_choice="none" forces the LLM to respond with text only (no more tool calls)

The tool results from the final step are still added to the chat context, so the LLM can reference them in its response.

Changes

File: livekit-agents/livekit/agents/voice/agent_activity.py

  • Replaced the early return with a max_steps_reached flag
  • Added max_steps_reached to the tool_choice condition alongside the existing draining and model_settings.tool_choice == "none" checks
  • Updated the warning log message to indicate a final response will be generated
  • No new dependencies, no API changes, fully backward compatible

Before (current behavior)

User: "Search for restaurants near me and check their ratings"
Agent: [calls search tool] -> [calls ratings tool] -> [calls details tool] -> (max_tool_steps reached) -> SILENCE

After (with this fix)

User: "Search for restaurants near me and check their ratings"
Agent: [calls search tool] -> [calls ratings tool] -> [calls details tool] -> (max_tool_steps reached) -> "I found several restaurants near you. Here are the top rated ones: ..."

Testing

  • The change is minimal and contained to the tool step limit logic
  • The tool_choice="none" parameter is already used in the same code path for draining scenarios
  • No new code paths are introduced — the existing _pipeline_reply_task recursive call is reused
  • Backward compatible: agents with max_tool_steps set high enough to never hit the limit see no change
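The end-to-end behavior can be exercised with a toy recursive pipeline: each step simulates one tool call, and once the limit is exceeded the step switches to `tool_choice="none"` and emits a text reply. All names here are illustrative stand-ins, not the real `_pipeline_reply_task` API.

```python
def pipeline_reply(step: int, max_tool_steps: int, tool_results: list[str]) -> str:
    # Mirrors the fixed condition: step counts past max_tool_steps + 1
    # force a text-only final generation instead of a silent return.
    max_steps_reached = step >= max_tool_steps + 1
    tool_choice = "none" if max_steps_reached else "auto"
    if tool_choice == "none":
        # Toy stand-in for the final text-only LLM generation; the real code
        # still has the gathered tool results in the chat context.
        return f"Final answer based on {len(tool_results)} tool result(s)."
    tool_results.append(f"result-{step}")
    return pipeline_reply(step + 1, max_tool_steps, tool_results)

print(pipeline_reply(step=1, max_tool_steps=3, tool_results=[]))
# -> Final answer based on 3 tool result(s).
```

With the pre-fix behavior, the `tool_choice == "none"` branch would instead be a bare `return`, and the caller would receive nothing.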


When the maximum number of tool steps is reached, the agent now generates
a final LLM response with tool_choice="none" instead of silently stopping.

Previously, hitting max_tool_steps caused the agent to return without any
response, leaving users with no feedback. This change ensures the LLM
always produces a text response using the tool results gathered so far.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@devin-ai-integration (bot) left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.

View in Devin Review to see 3 additional findings.
