What is MiniMax M2.1?
MiniMax M2.1 is an efficient open-source large language model optimized for multilingual coding, agentic workflows, office automation, and complex real-world tasks, combining a low active-parameter count with high performance.
When was MiniMax M2.1 released?
MiniMax M2.1 was officially released on December 23, 2025, with open-source weights and API access following shortly after.
Is MiniMax M2.1 free to use?
The model weights are open-source and free for local use; the API is pay-as-you-go, starting at $0.30 per million input tokens, and the MiniMax Agent chat interface is free for a limited time.
What are the key strengths of MiniMax M2.1?
It excels at multilingual programming (Rust, Java, and more), agentic tool use, mobile and web development, 3D simulations, and office automation, with strong benchmark results such as 88.6% on VIBE.
How does MiniMax M2.1 pricing compare?
The API costs $0.30 per million input tokens and $1.20 per million output tokens, significantly cheaper than many competitors such as the Claude models, while offering comparable or better performance on coding and agentic tasks.
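The per-token rates above are easy to turn into a per-request estimate. A minimal sketch (the token counts in the example are illustrative, not from the source):

```python
# Published MiniMax M2.1 API rates, in USD per million tokens.
INPUT_RATE = 0.30
OUTPUT_RATE = 1.20

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of a single API call in USD."""
    return (input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE) / 1_000_000

# Example: a 50k-token prompt producing a 5k-token answer.
print(f"${request_cost(50_000, 5_000):.4f}")  # -> $0.0210
```

Note that output tokens cost 4x more than input tokens, so long generations dominate the bill even for large prompts.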
Where can I access MiniMax M2.1?
Open-source weights are on Hugging Face, the API is available via platform.minimax.io, and a chat interface is at agent.minimax.io (free for a limited period).
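For API access, a request would look roughly like the sketch below. The endpoint path, model identifier, and exact payload schema are assumptions (an OpenAI-style chat format is common, but check the platform.minimax.io documentation for the real contract):

```python
import json

# Assumed endpoint and model id -- verify against platform.minimax.io docs.
API_URL = "https://api.minimax.io/v1/chat/completions"

payload = {
    "model": "MiniMax-M2.1",  # assumed model identifier
    "messages": [
        {"role": "user",
         "content": "Write a Rust function that reverses a string."}
    ],
}

# POST this JSON to API_URL with your API key in the Authorization header.
print(json.dumps(payload, indent=2))
```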
What context window does MiniMax M2.1 have?
It supports a 196,608-token context window, enough to handle large codebases, long documents, or extended conversations.
Is MiniMax M2.1 good for local deployment?
Yes. It is fully open-source with support for vLLM, Transformers, SGLang, and KTransformers, and its sparse Mixture-of-Experts (MoE) design lets it run efficiently on suitable GPUs.
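For the vLLM route, local serving would look roughly like this. The Hugging Face repo id is an assumption (check the official model card for the exact name and any required flags), and the parallelism setting depends on your hardware:

```shell
# Sketch: serving MiniMax M2.1 locally with vLLM.
# "MiniMaxAI/MiniMax-M2.1" is an assumed repo id -- verify on Hugging Face.
pip install vllm
vllm serve MiniMaxAI/MiniMax-M2.1 \
    --max-model-len 196608 \
    --tensor-parallel-size 4   # adjust to your GPU count
```

This exposes an OpenAI-compatible server on localhost, so the same chat-completions payloads used against the hosted API can be pointed at the local endpoint.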