Allow google search tool to set different model #4136
Conversation
Code Review
This pull request effectively adds the capability to specify a custom model for the GoogleSearchTool, decoupling it from the parent agent's model. The implementation is straightforward and correct, and the new functionality is well-covered by unit tests. My review includes a couple of suggestions for improvement: one regarding a potentially unused parameter in the GoogleSearchTool's initializer, and another to refactor the new tests to reduce code duplication and improve maintainability.
def __init__(
    self,
    *,
    bypass_multi_tools_limit: bool = False,
The bypass_multi_tools_limit parameter appears to be unused within the GoogleSearchTool class. The logic in process_llm_request for Gemini 1.x models unconditionally raises a ValueError if other tools are present, and this check does not consult bypass_multi_tools_limit. For Gemini 2.x+ models, multiple tools are supported by default, making the flag seem redundant there as well.
If this parameter is obsolete, consider removing it and the corresponding instance attribute self.bypass_multi_tools_limit to improve code clarity. If it has a purpose that is not immediately apparent, adding a more detailed explanation in the docstring would be helpful.
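If the flag is in fact meant to gate the single-tool restriction, the wiring could look like the following minimal sketch. The class and method names here are hypothetical (the real check lives inside `process_llm_request` in the ADK source); only the gating pattern is illustrated:

```python
class GoogleSearchToolSketch:
  """Hypothetical sketch; the real GoogleSearchTool logic differs."""

  def __init__(self, *, bypass_multi_tools_limit: bool = False):
    self.bypass_multi_tools_limit = bypass_multi_tools_limit

  def check_multi_tools(self, model: str, other_tool_count: int) -> None:
    # Gemini 1.x models support only a single tool; the flag, if it is
    # meant to do anything, would suppress this error.
    if (
        model.startswith('gemini-1')
        and other_tool_count > 0
        and not self.bypass_multi_tools_limit
    ):
      raise ValueError('Gemini 1.x models only support one tool at a time.')
```

Either consulting the flag like this or deleting it outright would resolve the ambiguity; keeping an unused constructor parameter does neither.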
@pytest.mark.asyncio
async def test_process_llm_request_with_custom_model_parameter(self):
  """Test that custom model parameter overrides the llm_request model."""
  # Create tool with custom model
  tool = GoogleSearchTool(model='gemini-2.0-flash')
  tool_context = await _create_tool_context()

  # LLM request with a different model (Gemini 1.x)
  llm_request = LlmRequest(
      model='gemini-1.5-flash', config=types.GenerateContentConfig()
  )

  await tool.process_llm_request(
      tool_context=tool_context, llm_request=llm_request
  )

  # Verify that the model was overridden to gemini-2.0-flash
  assert llm_request.model == 'gemini-2.0-flash'
  # Verify that google_search is used (Gemini 2.x behavior)
  assert llm_request.config.tools is not None
  assert len(llm_request.config.tools) == 1
  assert llm_request.config.tools[0].google_search is not None
  assert llm_request.config.tools[0].google_search_retrieval is None

@pytest.mark.asyncio
async def test_process_llm_request_without_custom_model_parameter(self):
  """Test that without custom model parameter, original model is used."""
  # Create tool without custom model
  tool = GoogleSearchTool()
  tool_context = await _create_tool_context()

  # LLM request with Gemini 1.x model
  llm_request = LlmRequest(
      model='gemini-1.5-flash', config=types.GenerateContentConfig()
  )

  await tool.process_llm_request(
      tool_context=tool_context, llm_request=llm_request
  )

  # Verify that the model was not changed
  assert llm_request.model == 'gemini-1.5-flash'
  # Verify that google_search_retrieval is used (Gemini 1.x behavior)
  assert llm_request.config.tools is not None
  assert len(llm_request.config.tools) == 1
  assert llm_request.config.tools[0].google_search_retrieval is not None
  assert llm_request.config.tools[0].google_search is None
The two new tests, test_process_llm_request_with_custom_model_parameter and test_process_llm_request_without_custom_model_parameter, share a lot of boilerplate code. To improve maintainability and reduce code duplication, they can be refactored into a single parameterized test using pytest.mark.parametrize. This makes the test logic more concise and easier to extend with additional scenarios in the future.
@pytest.mark.asyncio
@pytest.mark.parametrize(
(
'tool_model',
'request_model',
'expected_model',
'expect_retrieval',
),
[
(
'gemini-2.0-flash',
'gemini-1.5-flash',
'gemini-2.0-flash',
False,
),
(
None,
'gemini-1.5-flash',
'gemini-1.5-flash',
True,
),
],
ids=['with_custom_model', 'without_custom_model'],
)
async def test_process_llm_request_custom_model_behavior(
self,
tool_model,
request_model,
expected_model,
expect_retrieval,
):
"""Tests custom model parameter behavior in process_llm_request."""
tool = GoogleSearchTool(model=tool_model)
tool_context = await _create_tool_context()
llm_request = LlmRequest(
model=request_model, config=types.GenerateContentConfig()
)
await tool.process_llm_request(
tool_context=tool_context, llm_request=llm_request
)
assert llm_request.model == expected_model
assert llm_request.config.tools is not None
assert len(llm_request.config.tools) == 1
if expect_retrieval:
assert llm_request.config.tools[0].google_search_retrieval is not None
assert llm_request.config.tools[0].google_search is None
else:
assert llm_request.config.tools[0].google_search is not None
assert llm_request.config.tools[0].google_search_retrieval is None
Please ensure you have read the contribution guide before creating a pull request.
Link to Issue or Description of Change
1. Link to an existing issue (if applicable):
Allow google search tool to set different model #4135
2. Or, if no issue exists, describe the change:
Problem:
Currently, the Google Search tool inherits and uses the same LLM model set from the parent agent for processing and summarizing search results. This creates a limitation for users who wish to decouple the agent's reasoning model from the model used for search summarization (e.g., for cost optimization or using a lightweight model for simpler summarization tasks).
Solution:
I have updated the Google Search tool to accept an optional LLM model parameter.
Custom Model: Users can now explicitly specify which model should be used for processing search results.
Default Behavior: If no model is specified, the tool defaults to the parent agent's model, ensuring backward compatibility.
Testing Plan
Added a new test case test_process_llm_request_with_custom_model in test_google_search_tool.py that verifies:
When a custom model parameter is provided to GoogleSearchTool, it overrides the model from the incoming llm_request during process_llm_request
The tool correctly uses the custom model for LLM calls while maintaining other request parameters
Unit Tests:
(base) wanglu2:adk-python/ (feature/allow-google-search-tool-set-different-llm✗) $ uv run pytest ./tests/unittests/tools/test_google_search_tool.py [22:07:32]
======================================================================== test session starts ========================================================================
platform darwin -- Python 3.13.1, pytest-9.0.2, pluggy-1.6.0
rootdir: /Users/wanglu2/Documents/Git/adk-python
configfile: pyproject.toml
plugins: mock-3.15.1, anyio-4.12.0, xdist-3.8.0, asyncio-1.3.0, langsmith-0.6.0
asyncio: mode=Mode.AUTO, debug=False, asyncio_default_fixture_loop_scope=function, asyncio_default_test_loop_scope=function
collected 21 items
tests/unittests/tools/test_google_search_tool.py ..................... [100%]
======================================================================== 21 passed in 7.91s =========================================================================
Checklist
Additional context