
feat: add MiniMax as a supported LLM provider (M2.7 default)#438

Open
octo-patch wants to merge 2 commits into algorithmicsuperintelligence:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch octo-patch commented Mar 15, 2026

Summary

Add MiniMax as a supported LLM provider for OpenEvolve, with the latest M2.7 model as default.

Changes

  • Add MiniMax configuration file (configs/minimax_config.yaml) with M2.7 and M2.7-highspeed models
  • Add MiniMax section in README with setup instructions and cost estimates
  • Add MINIMAX_API_KEY to test environment for config validation
  • Set MiniMax-M2.7 as the default model (latest flagship with enhanced reasoning and coding)
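A minimal sketch of what configs/minimax_config.yaml could look like, assuming a typical OpenEvolve-style LLM config layout (the field names `api_base`, `name`, and `weight`, and the base URL, are assumptions for illustration, not copied from this PR's diff):

```yaml
# Sketch only -- field names and endpoint are assumed, not taken from the PR diff.
llm:
  api_base: "https://api.minimax.io/v1"   # placeholder; use MiniMax's documented endpoint
  models:
    - name: "MiniMax-M2.7"                # default flagship model
      weight: 0.6
    - name: "MiniMax-M2.7-highspeed"      # low-latency variant
      weight: 0.4
```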

Models

  • MiniMax-M2.7 — Latest flagship model with enhanced reasoning and coding (default, weight 0.6)
  • MiniMax-M2.7-highspeed — High-speed version for low-latency scenarios (weight 0.4)
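The 0.6/0.4 weights above imply per-call sampling between the two models. A small illustration of how such weighted selection could work, using only the standard library (this is a sketch, not OpenEvolve's actual ensemble code):

```python
import random

# Hypothetical ensemble mirroring the weights described in this PR.
MODELS = [("MiniMax-M2.7", 0.6), ("MiniMax-M2.7-highspeed", 0.4)]

def pick_model(rng: random.Random) -> str:
    """Sample one model name, weighted by its configured share of calls."""
    names, weights = zip(*MODELS)
    return rng.choices(names, weights=weights, k=1)[0]
```

Over many calls, roughly 60% of requests would go to the flagship model and 40% to the high-speed variant.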

Why

MiniMax provides an OpenAI-compatible API with 204K context window, making it a cost-effective alternative for code evolution tasks. M2.7 is the latest flagship model.
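Because the API is OpenAI-compatible, a request body is just the standard chat-completions JSON with a MiniMax model name. A stdlib-only sketch of building such a body (the endpoint conventions are the usual OpenAI-compatible ones, assumed here rather than taken from the PR):

```python
import json

def chat_payload(model: str, prompt: str, max_tokens: int = 1024) -> str:
    """Serialize an OpenAI-compatible /chat/completions request body."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    })
```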

Testing

  • All 368 unit tests passing
  • Config validation tests pass with MiniMax config

CLAassistant commented Mar 15, 2026

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
0 out of 2 committers have signed the CLA.

❌ PR Bot
❌ octo-patch


PR Bot does not appear to be a GitHub user. You need a GitHub account to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
Have you already signed the CLA but the status is still pending? Let us recheck it.

MiniMax offers an OpenAI-compatible API with models like MiniMax-M2.5 and MiniMax-M2.5-highspeed (204K context window). This commit adds:

- MiniMax provider section in README (setup guide, cost estimation)
- Example config file (configs/minimax_config.yaml)
- Updated test env to support MiniMax API key validation

Co-Authored-By: octo-patch <octo-patch@users.noreply.github.com>
octo-patch force-pushed the feature/add-minimax-provider branch from 5545f60 to 64dc033 on March 15, 2026 at 06:04
octo-patch (Author) commented

I've updated the commit author to be properly linked to my GitHub account. The CLA check should pass now. @CLAassistant check

- Set MiniMax-M2.7 and MiniMax-M2.7-highspeed as the default models
- Update config, README, and docs references
- Keep all previous models as alternatives
octo-patch changed the title from "Add MiniMax as a supported LLM provider" to "feat: add MiniMax as a supported LLM provider (M2.7 default)" on Mar 18, 2026

2 participants