feat: add MiniMax as direct LLM provider #704

octo-patch wants to merge 1 commit into potpie-ai:main
Conversation
Add direct MiniMax API support alongside the existing OpenRouter proxy, enabling users to call MiniMax models without the OpenRouter middleman.

Changes:
- Add MiniMax M2.7, M2.7-highspeed, M2.5, M2.5-highspeed to MODEL_CONFIG_MAP with base_url pointing to https://api.minimax.io/v1
- Register direct MiniMax models in AVAILABLE_MODELS
- Add minimax to openai_like_providers for pydantic-ai model routing
- Add minimax to supports_pydantic fallback for unknown minimax/* models
- Document MINIMAX_API_KEY in .env.template and README

Tests: 50 unit tests + 3 integration tests (real API calls)
Walkthrough

This PR adds support for the MiniMax LLM provider by introducing configuration entries for four MiniMax models (M2.7, M2.7-highspeed, M2.5, M2.5-highspeed), registering them in model availability lists, treating MiniMax as an OpenAI-compatible provider, and adding integration and unit test coverage.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/modules/intelligence/provider/llm_config.py (1)
388-409: ⚠️ Potential issue | 🟠 Major

Unknown `minimax/*` models fall back to the OpenAI endpoint due to missing `base_url` configuration.

For unknown MiniMax model strings, `get_config_for_model()` returns `base_url=None`. Since MiniMax is routed through `OpenAIProvider` at line 1513, the provider is instantiated without an explicit `base_url` parameter, causing it to default to OpenAI's endpoint instead of `https://api.minimax.io/v1`. In contrast, Ollama has special fallback logic (lines 1490-1496) to set its endpoint even when `config.base_url` is `None`; MiniMax lacks this. Known `minimax/*` models work because they have explicit `base_url` values in `MODEL_CONFIG_MAP`, but unknown variants will fail in the pydantic path.

Proposed fix
```diff
 def get_config_for_model(model_string: str) -> Dict[str, Any]:
     """Get configuration for a specific model, with fallback to defaults."""
     if model_string in MODEL_CONFIG_MAP:
         return MODEL_CONFIG_MAP[model_string]

     # If model not found, use default configuration based on provider
     provider, _ = parse_model_string(model_string)
     env_base_url = os.environ.get("LLM_API_BASE")
+    fallback_base_url = env_base_url
+    if provider == "minimax" and not fallback_base_url:
+        fallback_base_url = "https://api.minimax.io/v1"

     supports_pydantic = provider in {
         "openai",
         "anthropic",
         "openrouter",
         "azure",
         "ollama",
         "minimax",
     }

     return {
         "provider": provider,
         "context_window": DEFAULT_CONTEXT_WINDOW,
         "default_params": {"temperature": 0.3},
         "capabilities": {
             "supports_pydantic": supports_pydantic or bool(env_base_url),
             "supports_streaming": True,
             "supports_vision": provider in {"openai", "anthropic"},
             "supports_tool_parallelism": provider in {"openai", "anthropic"},
         },
-        "base_url": None,
+        "base_url": fallback_base_url,
         "api_version": None,
         "auth_provider": provider,
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/modules/intelligence/provider/llm_config.py` around lines 388 - 409, get_config_for_model() currently returns base_url=None for unknown minimax models, causing OpenAIProvider (instantiated at OpenAIProvider) to default to OpenAI; mirror Ollama's fallback by ensuring get_config_for_model() sets base_url to "https://api.minimax.io/v1" when provider == "minimax" (or model string begins with "minimax/") so the returned dict (provider, base_url, api_version, etc.) always contains the MiniMax endpoint; update the logic around the provider variable in llm_config.py to assign base_url for "minimax" before returning the config so OpenAIProvider instantiation uses the correct endpoint.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/unit/intelligence/provider/test_minimax_provider.py`:
- Around line 36-41: DIRECT_MODELS and DIRECT_IDS are defined as mutable lists
at class level which triggers mutable-class-default warnings; replace their list
literals with immutable tuples (e.g., ("minimax/MiniMax-M2.7", ...)) in the
test_minimax_provider.py class so they become immutable class-level constants
and keep the same values and usage; locate the DIRECT_MODELS and DIRECT_IDS
symbols in the file and change their definitions from [ ... ] to ( ... ).
---
Outside diff comments:
In `@app/modules/intelligence/provider/llm_config.py`:
- Around line 388-409: get_config_for_model() currently returns base_url=None
for unknown minimax models, causing OpenAIProvider (instantiated at
OpenAIProvider) to default to OpenAI; mirror Ollama's fallback by ensuring
get_config_for_model() sets base_url to "https://api.minimax.io/v1" when
provider == "minimax" (or model string begins with "minimax/") so the returned
dict (provider, base_url, api_version, etc.) always contains the MiniMax
endpoint; update the logic around the provider variable in llm_config.py to
assign base_url for "minimax" before returning the config so OpenAIProvider
instantiation uses the correct endpoint.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: be1ae767-5c49-4aba-80ff-61a4f1937365
📒 Files selected for processing (10)
- .env.template
- README.md
- app/modules/intelligence/provider/llm_config.py
- app/modules/intelligence/provider/provider_service.py
- tests/integration-tests/intelligence/__init__.py
- tests/integration-tests/intelligence/provider/__init__.py
- tests/integration-tests/intelligence/provider/test_minimax_integration.py
- tests/unit/intelligence/__init__.py
- tests/unit/intelligence/provider/__init__.py
- tests/unit/intelligence/provider/test_minimax_provider.py
```python
DIRECT_MODELS = [
    "minimax/MiniMax-M2.7",
    "minimax/MiniMax-M2.7-highspeed",
    "minimax/MiniMax-M2.5",
    "minimax/MiniMax-M2.5-highspeed",
]
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Check if the file exists and examine the code at the specified lines
if [ -f "tests/unit/intelligence/provider/test_minimax_provider.py" ]; then
  echo "=== File exists ==="
  echo "=== Lines 30-50 ==="
  sed -n '30,50p' "tests/unit/intelligence/provider/test_minimax_provider.py"
  echo ""
  echo "=== Lines 145-160 ==="
  sed -n '145,160p' "tests/unit/intelligence/provider/test_minimax_provider.py"
else
  echo "File not found"
fi
```

Repository: potpie-ai/potpie
Length of output: 1337
🌐 Web query:
RUF012 Ruff linter rule definition and when it applies to class attributes
💡 Result:
RUF012 is the Ruff linter rule named "mutable-class-default". It checks for mutable default values in class attributes, such as lists, sets, or dictionaries defined directly in the class body. Because such attributes share state across all instances, modifying them leads to unexpected bugs. The rule triggers on regular classes and dataclasses (extended from RUF008 via PR #4390, added in Ruff v0.0.273). It flags cases like:

```python
class A:
    variable_1: list[int] = []
    variable_2: set[int] = set()
    variable_3: dict[str, int] = {}
```

Recommended fixes:
- Initialize in `__init__` (for instance attributes).
- Annotate with `typing.ClassVar` (for shared class variables).
- Use `None` and initialize later.
- Use immutable defaults (e.g., tuple instead of list).

Exceptions include Pydantic models and some framework classes (e.g., Django). There are discussions about false positives in untyped code or specific frameworks, but the rule remains active.
Citations:
- 1: https://docs.astral.sh/ruff/rules/mutable-class-default/
- 2: Expand RUF008 to all classes, but to a new code (RUF012) astral-sh/ruff#4390
- 3: Avoid mutable-class-default (RUF012) for fully untyped classes astral-sh/ruff#5275
- 4: RUF012 triggers many false positives (are they really? they are correct) in some projects astral-sh/ruff#5243
Use immutable tuples for class-level constants to avoid RUF012 (mutable-class-default) warnings.
DIRECT_MODELS and DIRECT_IDS are class attributes defined as mutable lists without ClassVar annotation. Using tuples instead makes them immutable and aligns with best practices.
♻️ Proposed fix

```diff
-    DIRECT_MODELS = [
+    DIRECT_MODELS = (
         "minimax/MiniMax-M2.7",
         "minimax/MiniMax-M2.7-highspeed",
         "minimax/MiniMax-M2.5",
         "minimax/MiniMax-M2.5-highspeed",
-    ]
+    )
@@
-    DIRECT_IDS = [
+    DIRECT_IDS = (
         "minimax/MiniMax-M2.7",
         "minimax/MiniMax-M2.7-highspeed",
         "minimax/MiniMax-M2.5",
         "minimax/MiniMax-M2.5-highspeed",
-    ]
+    )
```

🧰 Tools
🪛 Ruff (0.15.6)
[warning] 36-41: Mutable default value for class attribute
(RUF012)
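Why the tuple swap matters: a mutable class attribute is one object shared by every instance, so a mutation made through one instance is visible through all of them. A minimal illustration, using hypothetical classes rather than the PR's actual test code:

```python
class WithList:
    # Shared mutable class attribute: exactly what RUF012 warns about.
    MODELS = ["minimax/MiniMax-M2.7"]

class WithTuple:
    # Still shared, but immutable, so it cannot be corrupted in place.
    MODELS = ("minimax/MiniMax-M2.7",)

a, b = WithList(), WithList()
a.MODELS.append("minimax/MiniMax-M2.5")  # mutates the single shared list
print(b.MODELS)  # the "other" instance observes the change

c = WithTuple()
# c.MODELS.append(...) would raise AttributeError: tuples have no append
```

For read-only constants like `DIRECT_MODELS`, a tuple keeps the same iteration and indexing behavior while silencing the lint and removing the shared-state hazard.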
Summary
- Add direct MiniMax API access at https://api.minimax.io/v1, complementing the existing OpenRouter proxy path
- Register `minimax` as an OpenAI-compatible provider for pydantic-ai model routing
- Document `MINIMAX_API_KEY` in `.env.template` and README with quickstart instructions

Motivation
Potpie already supports MiniMax via OpenRouter (`openrouter/minimax/minimax-m2.5`), but this adds direct API access without the OpenRouter middleman: users can now set `CHAT_MODEL=minimax/MiniMax-M2.7` and `MINIMAX_API_KEY=...` to use MiniMax directly.

Changes
| File | Change |
| --- | --- |
| `llm_config.py` | Add MiniMax models to `MODEL_CONFIG_MAP`; add `minimax` to `supports_pydantic` set |
| `provider_service.py` | Register models in `AVAILABLE_MODELS`; add `minimax` to `openai_like_providers` |
| `.env.template` | Document `MINIMAX_API_KEY`; add `minimax` to `LLM_PROVIDER` comment |
| `README.md` | Add `minimax` to provider list |
- No changes to the existing OpenRouter path (`openrouter/minimax/minimax-m2.5` entry)
- Run unit tests
- Run integration tests (requires `MINIMAX_API_KEY`)

Summary by CodeRabbit