
feat: add MiniMax as direct LLM provider#704

Open
octo-patch wants to merge 1 commit into potpie-ai:main from octo-patch:feature/add-minimax-direct-provider

Conversation


@octo-patch octo-patch commented Mar 21, 2026

Summary

  • Add direct MiniMax API support (M2.7, M2.7-highspeed, M2.5, M2.5-highspeed) via https://api.minimax.io/v1, complementing the existing OpenRouter proxy path
  • Register minimax as an OpenAI-compatible provider for pydantic-ai model routing
  • Document MINIMAX_API_KEY in .env.template and README with quickstart instructions

Motivation

Potpie already supports MiniMax via OpenRouter (openrouter/minimax/minimax-m2.5); this PR adds direct API access. Benefits:

  • Lower latency — no OpenRouter middleman
  • Lower cost — no OpenRouter markup
  • Latest models — MiniMax M2.7 with 1M-token context window
  • Full control — direct API key management via MINIMAX_API_KEY

Users can now set CHAT_MODEL=minimax/MiniMax-M2.7 and MINIMAX_API_KEY=... to use MiniMax directly.
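Concretely, the two additions to a local .env would look like the following (the key value is a placeholder, not a real credential):

```shell
# Route chat completions directly to MiniMax (model name from this PR)
CHAT_MODEL=minimax/MiniMax-M2.7
# Placeholder value; substitute your actual MiniMax API key
MINIMAX_API_KEY=your-minimax-api-key
```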

Changes

  • llm_config.py: Add 4 direct MiniMax models to MODEL_CONFIG_MAP; add minimax to the supports_pydantic set
  • provider_service.py: Add 4 direct models to AVAILABLE_MODELS; add minimax to openai_like_providers
  • .env.template: Add MINIMAX_API_KEY; add minimax to the LLM_PROVIDER comment
  • README.md: Add a MiniMax quickstart tip; add minimax to the provider list
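As an illustration, a direct-model entry in MODEL_CONFIG_MAP plausibly looks like this. The key names are inferred from the config fields this PR discusses (provider, context_window, default_params, capabilities, base_url, api_version, auth_provider); this is a sketch, not the file's actual contents.

```python
# Sketch of one direct MiniMax entry; key names and capability values are
# assumptions based on the fields described in this PR.
MODEL_CONFIG_MAP = {
    "minimax/MiniMax-M2.7": {
        "provider": "minimax",
        "context_window": 1_000_000,  # 1M-token window per the PR description
        "default_params": {"temperature": 0.3},
        "capabilities": {
            "supports_pydantic": True,
            "supports_streaming": True,
        },
        "base_url": "https://api.minimax.io/v1",
        "api_version": None,
        "auth_provider": "minimax",
    },
}

print(MODEL_CONFIG_MAP["minimax/MiniMax-M2.7"]["base_url"])  # https://api.minimax.io/v1
```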

Test plan

  • 50 unit tests covering MODEL_CONFIG_MAP, AVAILABLE_MODELS, parse_model_string, LLMProviderConfig, pydantic model routing, API key resolution, temperature defaults
  • 3 integration tests (real API calls) covering M2.7 completion, M2.7-highspeed completion, streaming
  • Verify existing OpenRouter MiniMax path is unaffected (preserved openrouter/minimax/minimax-m2.5 entry)

Run unit tests:

pytest tests/unit/intelligence/provider/test_minimax_provider.py -v

Run integration tests (requires MINIMAX_API_KEY):

pytest tests/integration-tests/intelligence/provider/test_minimax_integration.py -v
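For a sense of what the unit tests assert, here is a standalone sketch of one such check. The parse_model_string helper below is a hypothetical stand-in; the real tests import it from app/modules/intelligence/provider/llm_config.py.

```python
# Hypothetical standalone version of one unit-test assertion; the splitting
# rule (provider prefix before the first "/") is an assumption.
def parse_model_string(model_string: str):
    provider, _, model = model_string.partition("/")
    return provider, model

def test_minimax_model_string_parses_to_minimax_provider():
    provider, model = parse_model_string("minimax/MiniMax-M2.7")
    assert provider == "minimax"
    assert model == "MiniMax-M2.7"

test_minimax_model_string_parses_to_minimax_provider()
```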

Summary by CodeRabbit

  • New Features
    • Added MiniMax as a supported LLM provider alongside existing options.
    • Introduced multiple MiniMax model variants (M2.7, M2.7-highspeed, M2.5, M2.5-highspeed) with streaming and tool support.
    • Added configuration documentation and environment variable setup instructions for MiniMax integration.

Add direct MiniMax API support alongside the existing OpenRouter proxy,
enabling users to call MiniMax models without the OpenRouter middleman.

Changes:
- Add MiniMax M2.7, M2.7-highspeed, M2.5, M2.5-highspeed to MODEL_CONFIG_MAP
  with base_url pointing to https://api.minimax.io/v1
- Register direct MiniMax models in AVAILABLE_MODELS
- Add minimax to openai_like_providers for pydantic-ai model routing
- Add minimax to supports_pydantic fallback for unknown minimax/* models
- Document MINIMAX_API_KEY in .env.template and README

Tests: 50 unit tests + 3 integration tests (real API calls)

coderabbitai Bot commented Mar 21, 2026

Walkthrough

This PR adds support for the MiniMax LLM provider by introducing configuration entries for four MiniMax models (M2.7, M2.7-highspeed, M2.5, M2.5-highspeed), registering them in model availability lists, treating MiniMax as an OpenAI-compatible provider, and adding integration and unit test coverage.

Changes

  • Configuration & Documentation (.env.template, README.md): Updated LLM provider configuration to include minimax as a supported option and added the MINIMAX_API_KEY environment variable with usage hints.
  • Core Provider Configuration (app/modules/intelligence/provider/llm_config.py): Added four MiniMax model entries to MODEL_CONFIG_MAP with base URL, context window, default parameters, and capabilities; updated get_config_for_model() to recognize the minimax provider.
  • Provider Service (app/modules/intelligence/provider/provider_service.py): Expanded AVAILABLE_MODELS with four MiniMax model options and one OpenRouter-proxied MiniMax entry; added minimax to openai_like_providers to enable PydanticAI OpenAI-style integration.
  • Test Coverage (tests/unit/intelligence/provider/test_minimax_provider.py, tests/integration-tests/intelligence/provider/test_minimax_integration.py): Added comprehensive unit tests validating model configuration, capabilities, and provider routing; added integration tests for non-streaming and streaming chat completions with MiniMax models.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Possibly related PRs

  • #615: Both PRs add minimax entries to MODEL_CONFIG_MAP and AVAILABLE_MODELS, with this PR adding direct minimax provider support while the other added OpenRouter-proxied minimax.
  • #465: Both PRs modify provider registration structures and model configuration mappings in llm_config.py and provider_service.py.
  • #466: Both PRs modify AVAILABLE_MODELS in provider_service.py to expand model offerings.

Suggested reviewers

  • dhirenmathur
  • nndn

Poem

🐰 A minimax model hops into the fray,
With M2.7 and M2.5 here to stay,
OpenAI-like routing makes the path quite clear,
Configuration maps the tokens near and far,
Tests validate each leap and bound—
A million tokens of context profound!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): Docstring coverage is 27.27%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): The title 'feat: add MiniMax as direct LLM provider' accurately summarizes the main change: adding direct MiniMax API support as a new LLM provider option.


@sonarqubecloud

Quality Gate failed

Failed conditions
Reliability Rating on New Code: C (required ≥ A)

See analysis details on SonarQube Cloud



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
app/modules/intelligence/provider/llm_config.py (1)

388-409: ⚠️ Potential issue | 🟠 Major

Unknown minimax/* models fall back to OpenAI endpoint due to missing base_url configuration.

For unknown MiniMax model strings, get_config_for_model() returns base_url=None. Since MiniMax is routed through OpenAIProvider at line 1513, the provider is instantiated without an explicit base_url parameter, causing it to default to OpenAI's endpoint instead of https://api.minimax.io/v1. In contrast, Ollama has special fallback logic (lines 1490–1496) to set its endpoint even when config.base_url is None; MiniMax lacks this. Known minimax/* models work because they have explicit base_url values in MODEL_CONFIG_MAP, but unknown variants will fail in the Pydantic path.
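The failure mode can be distilled into a few lines. The function and constant names here are hypothetical; the real logic lives inside get_config_for_model() and the OpenAIProvider instantiation in provider_service.py.

```python
import os
from typing import Optional

# Endpoint taken from the PR description; everything else is a sketch.
MINIMAX_BASE_URL = "https://api.minimax.io/v1"

def resolve_fallback_base_url(provider: str) -> Optional[str]:
    """Mirror of the Ollama-style fallback the review proposes: unknown
    minimax/* models get the MiniMax endpoint instead of None, which would
    otherwise let the OpenAI-compatible client default to OpenAI's URL."""
    env_base_url = os.environ.get("LLM_API_BASE")
    if provider == "minimax" and not env_base_url:
        return MINIMAX_BASE_URL
    return env_base_url
```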

Proposed fix
 def get_config_for_model(model_string: str) -> Dict[str, Any]:
     """Get configuration for a specific model, with fallback to defaults."""
     if model_string in MODEL_CONFIG_MAP:
         return MODEL_CONFIG_MAP[model_string]
     # If model not found, use default configuration based on provider
     provider, _ = parse_model_string(model_string)
     env_base_url = os.environ.get("LLM_API_BASE")
+    fallback_base_url = env_base_url
+    if provider == "minimax" and not fallback_base_url:
+        fallback_base_url = "https://api.minimax.io/v1"
     supports_pydantic = provider in {
         "openai",
         "anthropic",
         "openrouter",
         "azure",
         "ollama",
         "minimax",
     }
     return {
         "provider": provider,
         "context_window": DEFAULT_CONTEXT_WINDOW,
         "default_params": {"temperature": 0.3},
         "capabilities": {
             "supports_pydantic": supports_pydantic or bool(env_base_url),
             "supports_streaming": True,
             "supports_vision": provider in {"openai", "anthropic"},
             "supports_tool_parallelism": provider in {"openai", "anthropic"},
         },
-        "base_url": None,
+        "base_url": fallback_base_url,
         "api_version": None,
         "auth_provider": provider,
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@app/modules/intelligence/provider/llm_config.py` around lines 388 - 409,
get_config_for_model() currently returns base_url=None for unknown minimax
models, causing OpenAIProvider (instantiated at OpenAIProvider) to default to
OpenAI; mirror Ollama's fallback by ensuring get_config_for_model() sets
base_url to "https://api.minimax.io/v1" when provider == "minimax" (or model
string begins with "minimax/") so the returned dict (provider, base_url,
api_version, etc.) always contains the MiniMax endpoint; update the logic around
the provider variable in llm_config.py to assign base_url for "minimax" before
returning the config so OpenAIProvider instantiation uses the correct endpoint.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/unit/intelligence/provider/test_minimax_provider.py`:
- Around line 36-41: DIRECT_MODELS and DIRECT_IDS are defined as mutable lists
at class level which triggers mutable-class-default warnings; replace their list
literals with immutable tuples (e.g., ("minimax/MiniMax-M2.7", ...)) in the
test_minimax_provider.py class so they become immutable class-level constants
and keep the same values and usage; locate the DIRECT_MODELS and DIRECT_IDS
symbols in the file and change their definitions from [ ... ] to ( ... ).

---

Outside diff comments:
In `@app/modules/intelligence/provider/llm_config.py`:
- Around line 388-409: get_config_for_model() currently returns base_url=None
for unknown minimax models, causing OpenAIProvider (instantiated at
OpenAIProvider) to default to OpenAI; mirror Ollama's fallback by ensuring
get_config_for_model() sets base_url to "https://api.minimax.io/v1" when
provider == "minimax" (or model string begins with "minimax/") so the returned
dict (provider, base_url, api_version, etc.) always contains the MiniMax
endpoint; update the logic around the provider variable in llm_config.py to
assign base_url for "minimax" before returning the config so OpenAIProvider
instantiation uses the correct endpoint.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: be1ae767-5c49-4aba-80ff-61a4f1937365

📥 Commits

Reviewing files that changed from the base of the PR and between 71b61bd and 2091605.

📒 Files selected for processing (10)
  • .env.template
  • README.md
  • app/modules/intelligence/provider/llm_config.py
  • app/modules/intelligence/provider/provider_service.py
  • tests/integration-tests/intelligence/__init__.py
  • tests/integration-tests/intelligence/provider/__init__.py
  • tests/integration-tests/intelligence/provider/test_minimax_integration.py
  • tests/unit/intelligence/__init__.py
  • tests/unit/intelligence/provider/__init__.py
  • tests/unit/intelligence/provider/test_minimax_provider.py

Comment on lines +36 to +41
DIRECT_MODELS = [
"minimax/MiniMax-M2.7",
"minimax/MiniMax-M2.7-highspeed",
"minimax/MiniMax-M2.5",
"minimax/MiniMax-M2.5-highspeed",
]

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# Check if the file exists and examine the code at the specified lines
if [ -f "tests/unit/intelligence/provider/test_minimax_provider.py" ]; then
  echo "=== File exists ==="
  echo "=== Lines 30-50 ==="
  sed -n '30,50p' "tests/unit/intelligence/provider/test_minimax_provider.py"
  echo ""
  echo "=== Lines 145-160 ==="
  sed -n '145,160p' "tests/unit/intelligence/provider/test_minimax_provider.py"
else
  echo "File not found"
fi

Repository: potpie-ai/potpie

Length of output: 1337


🌐 Web query:

RUF012 Ruff linter rule definition and when it applies to class attributes

💡 Result:

RUF012 is the Ruff rule "mutable-class-default". It flags mutable default values (lists, sets, dicts) assigned to class attributes directly in the class body, because such values are shared across all instances of the class and can cause unexpected bugs when mutated. The rule applies to regular classes and dataclasses (extended from RUF008 via PR #4390; added in Ruff v0.0.273). It flags cases like:

    class A:
        variable_1: list[int] = []
        variable_2: set[int] = set()
        variable_3: dict[str, int] = {}

Recommended fixes: initialize the attribute in __init__ (for instance attributes), annotate it with typing.ClassVar (for intentional shared class variables), default to None and initialize later, or use an immutable default (e.g., a tuple instead of a list). Pydantic models and some framework classes (e.g., Django) are excepted. There are ongoing discussions about false positives in untyped code and specific frameworks, but the rule remains active.


Use immutable tuples for class-level constants to avoid RUF012 (mutable-class-default) warnings.

DIRECT_MODELS and DIRECT_IDS are class attributes defined as mutable lists without ClassVar annotation. Using tuples instead makes them immutable and aligns with best practices.
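An equivalent way to satisfy RUF012 while keeping the lists is a ClassVar annotation; a minimal sketch (the class name here is hypothetical, not the actual test class):

```python
from typing import ClassVar

class MiniMaxModelConstants:  # hypothetical class name for illustration
    # ClassVar tells Ruff (and type checkers) that this mutable value is an
    # intentional shared class-level constant, which resolves RUF012 without
    # converting the list to a tuple.
    DIRECT_MODELS: ClassVar[list[str]] = [
        "minimax/MiniMax-M2.7",
        "minimax/MiniMax-M2.7-highspeed",
        "minimax/MiniMax-M2.5",
        "minimax/MiniMax-M2.5-highspeed",
    ]

print(len(MiniMaxModelConstants.DIRECT_MODELS))  # 4
```

The tuple form has the edge for pure constants, since it also prevents accidental mutation at runtime; ClassVar is preferable when test parametrization requires a list.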

♻️ Proposed fix
-    DIRECT_MODELS = [
+    DIRECT_MODELS = (
         "minimax/MiniMax-M2.7",
         "minimax/MiniMax-M2.7-highspeed",
         "minimax/MiniMax-M2.5",
         "minimax/MiniMax-M2.5-highspeed",
-    ]
+    )
@@
-    DIRECT_IDS = [
+    DIRECT_IDS = (
         "minimax/MiniMax-M2.7",
         "minimax/MiniMax-M2.7-highspeed",
         "minimax/MiniMax-M2.5",
         "minimax/MiniMax-M2.5-highspeed",
-    ]
+    )
🧰 Tools
🪛 Ruff (0.15.6)

[warning] 36-41: Mutable default value for class attribute

(RUF012)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/intelligence/provider/test_minimax_provider.py` around lines 36 -
41, DIRECT_MODELS and DIRECT_IDS are defined as mutable lists at class level
which triggers mutable-class-default warnings; replace their list literals with
immutable tuples (e.g., ("minimax/MiniMax-M2.7", ...)) in the
test_minimax_provider.py class so they become immutable class-level constants
and keep the same values and usage; locate the DIRECT_MODELS and DIRECT_IDS
symbols in the file and change their definitions from [ ... ] to ( ... ).
