
feat: add MiniMax as alternative LLM provider#27

Open
octo-patch wants to merge 1 commit into ImprintLab:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax M2.7 and M2.7-highspeed (204K context) as first-class LLM providers across the Medical-Graph-RAG pipeline:

  • Project-level code (utils.py, summerize.py): provider auto-detection via the MINIMAX_API_KEY env var, LLM_PROVIDER / LLM_MODEL overrides, and temperature clamping to MiniMax's (0.0, 1.0] range
  • nano_graphrag (_llm.py): Async minimax_complete_if_cache(), minimax_m27_complete(), and minimax_m27_highspeed_complete() functions with caching and temperature clamping
  • CAMEL framework: MiniMaxModel backend class, MiniMaxConfig, ModelType.MINIMAX_M27/M27_HIGHSPEED enums, ModelPlatformType.MINIMAX, ModelFactory registration
  • README: MiniMax usage docs with examples for all three integration layers
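The auto-detection and clamping behavior described above can be sketched roughly as follows. Function names, the epsilon value, and the OpenAI default are illustrative, not necessarily the exact code in utils.py:

```python
import os

# MiniMax accepts temperature in (0.0, 1.0]: the lower bound is exclusive,
# so 0.0 itself must be nudged up; values above 1.0 are capped.
MINIMAX_TEMP_MAX = 1.0
MINIMAX_TEMP_EPSILON = 0.01  # illustrative smallest accepted value

def detect_provider() -> str:
    """Pick the LLM provider: an explicit override wins, then key detection."""
    explicit = os.environ.get("LLM_PROVIDER")
    if explicit:
        return explicit
    if os.environ.get("MINIMAX_API_KEY"):
        return "minimax"
    return "openai"  # assumed default when no MiniMax key is present

def clamp_temperature(temp: float) -> float:
    """Clamp a requested temperature into MiniMax's (0.0, 1.0] range."""
    if temp <= 0.0:
        return MINIMAX_TEMP_EPSILON
    return min(temp, MINIMAX_TEMP_MAX)
```

Clamping rather than erroring keeps callers that pass temperature=0.0 (common with OpenAI) working without changes.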

Usage

```shell
export MINIMAX_API_KEY=your_key
# Auto-detected from the API key, or set explicitly:
export LLM_PROVIDER=minimax
python run.py -simple True
```

Note: MiniMax does not have a public embedding API, so embeddings fall back to OpenAI text-embedding-3-small when using MiniMax as the LLM provider.
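That embedding fallback can be sketched as a small routing table (names here are illustrative, not the exact code):

```python
# Providers with no public embedding API fall back to OpenAI embeddings;
# note this means an OpenAI key is still needed even with LLM_PROVIDER=minimax.
EMBEDDING_FALLBACK = {"minimax": ("openai", "text-embedding-3-small")}

def embedding_backend(llm_provider: str) -> tuple[str, str]:
    """Return the (provider, model) pair used for embeddings (sketch)."""
    return EMBEDDING_FALLBACK.get(
        llm_provider, (llm_provider, "text-embedding-3-small")
    )
```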

Changes

| File | Change |
| --- | --- |
| utils.py | Provider presets, auto-detection, temperature clamping |
| summerize.py | Provider-aware call_openai_api() |
| nano_graphrag/_llm.py | MiniMax async completion functions |
| camel/types/enums.py | ModelType.MINIMAX_M27/M27_HIGHSPEED, ModelPlatformType.MINIMAX |
| camel/configs/minimax_config.py | MiniMaxConfig + MINIMAX_API_PARAMS |
| camel/models/minimax_model.py | MiniMaxModel backend with temperature clamping |
| camel/models/model_factory.py | MiniMax branch in ModelFactory.create() |
| camel/models/__init__.py | Export MiniMaxModel |
| camel/configs/__init__.py | Export MiniMaxConfig, MINIMAX_API_PARAMS |
| README.md | MiniMax setup docs + code examples |
| tests/test_minimax_unit.py | 37 unit tests |
| tests/test_minimax_integration.py | 4 integration tests |

12 files changed, ~842 additions
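The cache-then-call shape of minimax_complete_if_cache() in nano_graphrag/_llm.py can be sketched like this. The real function talks to the MiniMax HTTP API and uses nano_graphrag's hashing storage; here call_api is a stand-in and the in-memory dict is illustrative:

```python
import asyncio
import hashlib
import json

_cache: dict[str, str] = {}  # stand-in for nano_graphrag's cache storage

async def minimax_complete_if_cache_sketch(model, prompt, call_api):
    """Return a cached completion if one exists; otherwise call the API
    (via the injected `call_api` coroutine) and cache the result."""
    key = hashlib.md5(json.dumps([model, prompt]).encode()).hexdigest()
    if key in _cache:
        return _cache[key]
    result = await call_api(model, prompt)
    _cache[key] = result
    return result
```

Repeated calls with the same model and prompt then hit the cache instead of the live API, which is what makes re-running the GraphRAG pipeline cheap.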

Test plan

  • 37 unit tests pass (provider detection, temp clamping, enums, config, model backend, factory, nano_graphrag functions)
  • 4 integration tests pass against live MiniMax API (utils call_llm, nano_graphrag M2.7/M2.7-highspeed, CAMEL ModelFactory)
  • Existing OpenAI-based pipeline verified to still work unchanged (backward compatible)
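The CAMEL ModelFactory path exercised by the integration tests dispatches on the new platform enum. A simplified stand-in for that dispatch (not the real camel API, which carries configs, token counters, and more):

```python
from enum import Enum

class ModelPlatformType(Enum):
    OPENAI = "openai"
    MINIMAX = "minimax"

class ModelType(Enum):
    MINIMAX_M27 = "MiniMax-M2.7"
    MINIMAX_M27_HIGHSPEED = "MiniMax-M2.7-highspeed"

class MiniMaxModel:
    """Stand-in for the MiniMaxModel backend class."""
    def __init__(self, model_type, config=None):
        self.model_type = model_type
        self.config = config or {}

def create_model(platform, model_type, config=None):
    """Simplified sketch of the new MiniMax branch in ModelFactory.create()."""
    if platform is ModelPlatformType.MINIMAX:
        return MiniMaxModel(model_type, config)
    raise NotImplementedError(f"unhandled platform: {platform}")
```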

