
integrate anthropic as LLM provider#501

Open
DanielDCM212 wants to merge 3 commits into AsyncFuncAI:main from DanielDCM212:feature/implemented-anthropic-as-provider

Conversation

@DanielDCM212


Add Anthropic Claude as an LLM Provider

Motivation

DeepWiki already supports multiple LLM providers (Google Gemini, OpenAI, OpenRouter, Ollama, AWS Bedrock, Azure OpenAI, DashScope). Anthropic's Claude models are among the most
capable and widely used LLMs available today, and a significant portion of users prefer them for wiki generation and code analysis tasks. This PR adds first-class Anthropic
support so users can select Claude models from the provider dropdown without any workarounds.

What Changed

New: api/anthropic_client.py
An AnthropicClient that implements the AdalFlow ModelClient interface:

  • Parses AdalFlow's <START_OF_SYSTEM_PROMPT> / <START_OF_USER_PROMPT> tags and maps them to Anthropic's dedicated system parameter
  • Wraps both sync (call) and async (acall) paths using the official anthropic SDK
  • Produces a GeneratorOutput with token usage (prompt, completion, total)
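The tag-splitting and kwargs-conversion steps described above can be sketched roughly as follows. This is an illustrative outline, not the actual code in api/anthropic_client.py; the end-tag names and the exact kwargs schema are assumptions.

```python
import re

# Hypothetical tag names based on the PR description; AdalFlow's real
# constants may differ.
SYS_START, SYS_END = "<START_OF_SYSTEM_PROMPT>", "<END_OF_SYSTEM_PROMPT>"
USR_START, USR_END = "<START_OF_USER_PROMPT>", "<END_OF_USER_PROMPT>"

def convert_prompt_to_kwargs(prompt: str, model: str = "claude-sonnet-4-6",
                             max_tokens: int = 4096) -> dict:
    """Split AdalFlow's tagged prompt into Anthropic-style system/messages kwargs."""
    pattern = (rf"{re.escape(SYS_START)}\s*(.*?)\s*{re.escape(SYS_END)}\s*"
               rf"{re.escape(USR_START)}\s*(.*?)\s*{re.escape(USR_END)}")
    match = re.search(pattern, prompt, re.DOTALL)
    kwargs = {"model": model, "max_tokens": max_tokens}
    if match:
        # System text goes to Anthropic's dedicated `system` parameter
        kwargs["system"] = match.group(1).strip()
        user_content = match.group(2).strip()
    else:
        # No tags: send the whole input as the user turn
        user_content = prompt
    kwargs["messages"] = [{"role": "user", "content": user_content}]
    return kwargs
```

The key design point is that Anthropic's Messages API takes the system prompt as a top-level `system` parameter rather than a `system`-role message, which is why the tags have to be parsed out rather than passed through.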

Updated: api/config.py and api/config/generator.json

  • AnthropicClient registered in CLIENT_CLASSES
  • "anthropic" provider block added to generator.json with three models: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5-20251001
  • Provider resolution in load_generator_config extended to include "anthropic"
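For reference, a provider block of this shape might look like the fragment below. The field names and per-model settings here are illustrative guesses; the actual generator.json schema in this repo may differ.

```json
{
  "anthropic": {
    "client_class": "AnthropicClient",
    "default_model": "claude-sonnet-4-6",
    "models": {
      "claude-opus-4-6": { "max_tokens": 8192 },
      "claude-sonnet-4-6": { "max_tokens": 8192 },
      "claude-haiku-4-5-20251001": { "max_tokens": 8192 }
    }
  }
}
```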

Updated: api/websocket_wiki.py

  • Anthropic provider branch added to the wiki-generation WebSocket handler for both model setup and async response streaming
  • Fixed top_p/top_k access to use .get() with defaults (defensive fix uncovered during this work)
  • Fixed app.add_api_websocket_route registration (was add_api_route)

Dependency: api/pyproject.toml

  • Added anthropic >= 0.40.0

New: tests/unit/test_anthropic_provider.py
20 unit tests covering:

  • Client initialisation (env key, explicit key, missing key error)
  • convert_inputs_to_api_kwargs — prompt tag splitting, max_tokens defaulting, model type guard
  • parse_chat_completion — happy path and error path
  • call — invocation, stream key stripping, model type guard
  • Config registration — CLIENT_CLASSES, generator.json provider, get_model_config
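The "no live API key" property comes from mocking the SDK client object. A minimal sketch of that pattern (the client construction and method names here mirror the official anthropic SDK, but the test body is illustrative, not the actual test file):

```python
from unittest.mock import MagicMock

def test_call_invokes_sdk_without_network():
    """The SDK client is replaced by a mock, so no request ever leaves the test."""
    fake_response = MagicMock()
    fake_response.content = [MagicMock(text="hello")]
    fake_client = MagicMock()
    fake_client.messages.create.return_value = fake_response

    # In the real tests this fake would be injected into AnthropicClient;
    # here we exercise the call surface directly.
    api_kwargs = {"model": "claude-sonnet-4-6", "max_tokens": 64,
                  "messages": [{"role": "user", "content": "Hi"}]}
    response = fake_client.messages.create(**api_kwargs)

    assert response.content[0].text == "hello"
    fake_client.messages.create.assert_called_once_with(**api_kwargs)
```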

How to Use

Set your API key and select the provider in the UI:

export ANTHROPIC_API_KEY=sk-ant-...

Then choose Anthropic from the provider dropdown and pick a Claude model (claude-sonnet-4-6 is the default).

Testing

python -m pytest tests/unit/test_anthropic_provider.py -v

All 20 tests pass without requiring a live API key (network calls are mocked).

DanielDCM212 changed the title from "integrate claude as a LLM provider" to "integrate anthropic as a LLM provider" on Mar 31, 2026
DanielDCM212 changed the title from "integrate anthropic as a LLM provider" to "integrate anthropic as LLM provider" on Mar 31, 2026
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request integrates Anthropic Claude into the AdalFlow library, introducing the AnthropicClient, updating provider configurations, and extending the WebSocket chat handler. It also includes minor updates to the FastAPI route definitions and Ollama model configurations. Feedback highlights that the acall method should lazily initialize the async client for better performance, the WebSocket handler is missing the intended streaming support for Anthropic, and the regex used for prompt parsing is fragile.

Comment on lines +725 to +729
response = await model.acall(api_kwargs=api_kwargs, model_type=ModelType.LLM)
text = response.content[0].text if response.content else ""
if text:
    await websocket.send_text(text)
await websocket.close()

Severity: high

The Anthropic implementation in this WebSocket handler is currently non-streaming. It awaits the entire response from model.acall and sends it as a single block. This contradicts the PR description which mentions 'async response streaming' and results in a poor user experience for long wiki generations compared to the OpenAI or Google providers which are implemented with streaming in this same file.

To fix this, you should check if stream is enabled in api_kwargs and iterate over the response stream if it is. Note that Anthropic's stream events require specific handling (e.g., checking for content_block_delta or using the text_stream helper).
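The streaming fix the reviewer describes amounts to forwarding text deltas as they arrive instead of awaiting the full response. A minimal sketch of that relay pattern, using a stand-in event list in place of the real SDK stream (the real code would `async for` over `client.messages.stream(...)` and the event shapes here are simplified assumptions):

```python
import asyncio
from dataclasses import dataclass

@dataclass
class FakeEvent:
    """Stand-in for the typed event objects the anthropic SDK yields."""
    type: str
    text: str = ""

async def relay_stream(events, send):
    """Forward text deltas to the client as they arrive, not as one block."""
    for event in events:  # real handler: `async for event in stream`
        if event.type == "content_block_delta" and event.text:
            await send(event.text)

async def demo():
    chunks = []
    events = [FakeEvent("message_start"),
              FakeEvent("content_block_delta", "Hel"),
              FakeEvent("content_block_delta", "lo"),
              FakeEvent("message_stop")]
    async def send(text): chunks.append(text)
    await relay_stream(events, send)
    return "".join(chunks)
```

The point is that only `content_block_delta` events carry incremental text; other event types (message start/stop, block boundaries) are control signals and should be skipped or used to finalize the stream.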

Comment on lines +50 to +57
pattern = (
    rf"{system_tag_start}\s*(.*?)\s*{system_tag_end}\s*"
    rf"{user_tag_start}\s*(.*?)\s*{user_tag_end}"
)
match = re.search(pattern, input, re.DOTALL)
if match:
    system_prompt = match.group(1).strip()
    user_content = match.group(2).strip()

Severity: medium

The regex for splitting system and user prompts is quite fragile. It expects a very specific sequence of tags and whitespace. If the input string contains the system tag but doesn't match this exact pattern (e.g., if there is text before <START_OF_SYSTEM_PROMPT> or if the user tags are missing), the match will fail. Consequently, system_prompt will remain None and the entire input (including the tags) will be sent as the user message. Consider using a more flexible parsing approach to extract the system and user parts independently.
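One possible shape for the more flexible parsing the reviewer suggests: extract each tagged section independently, so a missing or reordered user block no longer discards the system prompt. The tag names are assumptions based on the PR description, and this is a sketch, not a proposed patch.

```python
import re

def extract_section(text: str, start: str, end: str):
    """Return the text between one start/end tag pair, or None if absent."""
    match = re.search(rf"{re.escape(start)}\s*(.*?)\s*{re.escape(end)}",
                      text, re.DOTALL)
    return match.group(1).strip() if match else None

def split_prompt(text: str):
    """Pull out system and user parts independently instead of one rigid pattern."""
    system = extract_section(text, "<START_OF_SYSTEM_PROMPT>", "<END_OF_SYSTEM_PROMPT>")
    user = extract_section(text, "<START_OF_USER_PROMPT>", "<END_OF_USER_PROMPT>")
    if user is None:
        # No user tags: treat everything outside the system block as the user turn,
        # stripping the system tags so they are never sent to the model.
        user = re.sub(r"<START_OF_SYSTEM_PROMPT>.*?<END_OF_SYSTEM_PROMPT>",
                      "", text, flags=re.DOTALL).strip()
    return system, user
```

With this shape, text before the system tag or a missing user block degrades gracefully instead of falling back to sending the raw tagged string as the user message.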
