Following in the footsteps of the recently released Gemma 4, MiniMax has now made its latest model, MiniMax M2.7, completely open-weight. In simple terms, developers can now download the model, run it on their own systems, and start building with it. Until now, M2.7 was a purely cloud-hosted AI service. Needless to say, this instantly makes M2.7 more interesting than a normal model update. It shifts the story from "oh, another new model" to "wait, I can now run this myself for free?"
To add to the excitement, M2.7 is not being pitched as just another chatbot. Even though it is now open-weight and can be run locally, its capabilities have not been cut down in any way. MiniMax says the model has been built specifically for complex, tool-using, agentic work. With the kind of firepower it carries, it can handle tasks ranging from software engineering and debugging to Excel, PowerPoint, and Word. And if the MiniMax team's claims hold up, M2.7 will also stick to its skills across long, complex workflows.
Of course, there are more features that the new open-weight MiniMax M2.7 brings with it. Here, we shall explore all of those and check out how the new M2.7 fares across everyday, real-world tasks. But first, here is more about the AI model itself.
What is MiniMax M2.7?
Before going open-weight, MiniMax M2.7 was already the company’s latest high-end model, built and tested for serious agentic work. In other words, the capability was already there. What has changed now is access. With the weights opened up, M2.7 moves from being primarily a model inside MiniMax’s own ecosystem to one that developers can download, run, and experiment with on their own systems. That makes this less of a fresh model launch and more of a major expansion in who gets to use it.
And that matters because M2.7 is not positioned as a casual chatbot in the first place. MiniMax presents it as a model for complex workflows spanning software engineering, debugging, terminal-style work, office deliverables, and long, skill-driven agentic processes. So the open-weight release is about more than convenience; it is about real-world impact.
What the Open-Weight Release Actually Means
In practice, this update means developers can now get access to the M2.7 model weights and run the model themselves. This makes it far more hands-on than a purely hosted AI service. MiniMax has published the model on Hugging Face and also shared deployment guides, so this is clearly meant to be used, tested, and built around by developers directly.
That changes a few things immediately. You can now:
- Download the model weights
- Deploy it locally on your own setup
- Experiment with custom workflows instead of only using MiniMax’s cloud interface
- Plug it into your own agent systems and tools
- Test its software, office, and agentic capabilities more directly
- Fine-tune your usage around your own tasks and environment
In addition to the regular workflows, MiniMax highlights strengths such as high-fidelity Word, Excel, and PowerPoint editing. The model is claimed to deliver strong tool-use performance and 97% skill compliance across 40+ complex skills. With local deployment now possible, who wouldn't want to try their hand at that kind of firepower?
Not the Same as “Open-Source”
There is an important distinction here. Open-weight does not automatically mean open-source. Open-weight typically means that the model weights are now accessible. However, that does not necessarily mean the full training pipeline, datasets, and everything used to create the model are open as well.
On top of that, the Hugging Face license for M2.7 clearly specifies that commercial use is prohibited without prior written authorization from MiniMax. This is exactly why the release should be described carefully as open-weight rather than fully open-source.
So the simplest way to put it is this: M2.7 is now much easier to download, run, and build around, but it is still a controlled release, not a no-limits open-source one, like Gemma 4.
Key Features of MiniMax M2.7 Open-Weight
Well, at the risk of getting repetitive, here is the entire crux of the new model – it is a serious workhorse for developers and knowledge workers alike. It can code, use tools, stick to complex instructions, and handle office-style deliverables with far more depth than a regular chatbot.
Here are the key features of MiniMax M2.7:
- Open-weight availability: Developers can now download the model weights and run M2.7 themselves instead of relying only on MiniMax as a hosted service.
- Built for agentic workflows: MiniMax says M2.7 is designed for complex, tool-using, multi-step agentic work rather than basic one-shot prompting.
- Strong software engineering capabilities: The model is positioned for debugging, log analysis, code security, terminal work, machine learning tasks, and full project-style software workflows.
- Office-task execution: MiniMax highlights its ability to work across Word, Excel, and PowerPoint, including multi-round revisions and high-fidelity editing.
- High skill adherence: The company reports a 97% skill compliance rate across 40+ complex skills, suggesting it is built to stay on track during long workflows.
- Native support for Agent Teams: MiniMax says M2.7 can work with multi-agent setups, making it more suitable for orchestrated task systems.
- Self-evolution capability: One of its standout claims is that M2.7 can help improve the systems around it by analyzing failures, suggesting changes, and iterating through evaluation loops.
- Meant for real deliverables, not just chat: MiniMax presents it as a model capable of helping produce actual outputs like reports, models, presentations, and workflow-ready results.
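To make the "tool-using, agentic" framing above concrete, here is a minimal sketch of the kind of loop an agent framework typically runs around a model like M2.7. Everything here is a stand-in for illustration: the `fake_model` function stubs out what would normally be a call to the model's API, and the calculator is a toy tool, not part of MiniMax's actual tooling.

```python
# Minimal, hypothetical agent loop: the model either requests a tool call
# or returns a final answer. The "model" below is a stub, not M2.7 itself.

def calculator(expression: str) -> str:
    """A toy tool the agent can invoke (demo only; eval is unsafe in general)."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def fake_model(messages):
    """Stand-in for M2.7: requests a tool once, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": "6 * 7"}
    return {"answer": f"The result is {messages[-1]['content']}."}

def run_agent(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(5):  # cap iterations so the loop always terminates
        step = fake_model(messages)
        if "tool" in step:
            result = TOOLS[step["tool"]](step["args"])
            messages.append({"role": "tool", "content": result})
        else:
            return step["answer"]
    return "Loop limit reached."

print(run_agent("What is 6 * 7?"))  # The result is 42.
```

In a real deployment, `fake_model` would be replaced by an API call to M2.7, and the tool registry would hold genuine tools (shells, file editors, Office manipulators) that the agent framework executes on the model's behalf.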
Benchmark Performance
On benchmarks, MiniMax M2.7 seems to back up its positioning fairly well. The clearest signal is that it performs strongly across the three areas that matter most for a model like this. These are software engineering, office productivity, and agentic tool use. MiniMax’s reported scores of 56.22% on SWE-Pro, 55.6% on VIBE-Pro, and 57.0% on Terminal Bench 2 suggest that the model is not limited to basic code generation, but can handle broader engineering and repo-level tasks too.
The same trend shows up beyond coding. A 1495 ELO on GDPval-AA points to strong performance in document and office-style work, while 46.3% on Toolathon and a reported 97% skill compliance across 40 complex skills support MiniMax’s larger pitch that M2.7 is built for long, tool-using agentic workflows. In other words, the benchmark story here is not that M2.7 is good at one thing. It is that the model appears to be consistently capable across multiple kinds of real-world work.
How to Access MiniMax M2.7 Open-Weight
Now that the model has gone open-weight, accessing MiniMax M2.7 is fairly straightforward. MiniMax offers it through the official Hugging Face repository and GitHub documentation, which means developers can either download the weights directly or follow the company's own deployment guides to run it in their preferred setup.
Here are the main ways to access it:
1. Download the model from Hugging Face
MiniMax hosts M2.7 on its official Hugging Face page, where the model card, files, and usage details are available. You can check out the model here.
Note that the model contains 229 billion parameters. So, if you plan to download and run it locally, you will need a high configuration setup. In case you do not have that for now, you may wish to access the model through other methods listed below.
2. Run it locally with supported inference frameworks
MiniMax explicitly recommends serving M2.7 through a set of supported inference frameworks. You can find the full list, along with setup instructions, on the official Hugging Face page for the model.
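As a generic illustration only, and not MiniMax's official instructions, serving an open-weight model behind an OpenAI-compatible endpoint often looks like the following. The framework (vLLM), flags, and parallelism settings here are assumptions for the sketch; a 229B-parameter model will need hardware-specific tuning per MiniMax's deployment guides.

```shell
# Hypothetical deployment sketch: serve the weights with vLLM,
# exposing an OpenAI-compatible HTTP API on localhost:8000.
pip install vllm
vllm serve MiniMaxAI/MiniMax-M2.7 --tensor-parallel-size 8

# Any OpenAI-compatible client can then target http://localhost:8000/v1
```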
3. Get the weights from ModelScope
If needed, MiniMax also points users to ModelScope as another source for the model weights. You can find it here.
4. Use it through NVIDIA NIM
MiniMax notes that M2.7 is also available on NVIDIA NIM Endpoint, which can be useful for developers who prefer that serving route.
5. Access it through MiniMax’s own hosted routes
For people who do not want to download the model and deploy it locally, MiniMax also lists:
- MiniMax Agent
- MiniMax API
- Token Plan
Hands-on with MiniMax M2.7 Open-Weight
Since MiniMax M2.7 is a large model with 229 billion parameters and demands substantial compute, we did not download and run it locally. Instead, we accessed it through a Hugging Face inference endpoint.
Here is the process we followed:
1. Generate HF Token
First, ensure you have a valid HF_TOKEN set in your environment. You can generate a token from your Hugging Face account settings page. Note that inference requests may incur charges beyond the free tier.
The following Python snippet shows how to set the token before calling the model remotely through HF Inference Providers.
```python
import os
os.environ["HF_TOKEN"] = "YOUR_TOKEN_HERE"
```
2. Enter your Prompt
Once your token is set in the environment, you can prompt the model with the required task. Here is the full script, including the prompt we used for our test:
```python
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://router.huggingface.co/v1",
    api_key=os.environ["HF_TOKEN"],
)

completion = client.chat.completions.create(
    model="MiniMaxAI/MiniMax-M2.7",
    messages=[
        {
            "role": "user",
            "content": """
Write the code in Python that will take a string and make this conversion given a number of rows:

string convert(string s, int numRows);

Example 1:
Input: s = "PAYPALISHIRING", numRows = 3
Output: "PAHNAPLSIIGYIR"

Example 2:
Input: s = "PAYPALISHIRING", numRows = 4
Output: "PINALSIGYAHRPI"
Explanation:
P     I    N
A   L S  I G
Y A   H R
P     I

Example 3:
Input: s = "A", numRows = 1
Output: "A"

Constraints:
1 <= s.length <= 1000
s consists of English letters (lower-case and upper-case), ',' and '.'.
1 <= numRows <= 1000
""",
        }
    ],
)

# Print the model's generated solution
print(completion.choices[0].message.content)
```
3. Output
Running the model's generated solution against our test cases produced:

```
PASS: convert('PAYPALISHIRING',3) -> 'PAHNAPLSIIGYIR'
PASS: convert('PAYPALISHIRING',4) -> 'PINALSIGYAHRPI'
PASS: convert('A',1) -> 'A'
PASS: convert('ABC',2) -> 'ACB'
All tests passed.
```
As we can see, the model accurately understood the task and arrived at a correct solution. Specifically, it recognized the classic Zigzag Conversion problem and solved it cleanly in Python, demonstrating its reasoning, logic, and coding ability in one go.
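For readers who want to verify the result themselves, here is a reference solution to the zigzag-conversion problem. This is our own sketch of a standard approach, not the model's verbatim output: walk down the rows, bounce at the top and bottom, then concatenate the rows.

```python
def convert(s: str, num_rows: int) -> str:
    """Read s in a zigzag pattern over num_rows rows, then row by row."""
    if num_rows == 1 or num_rows >= len(s):
        return s  # no zigzag possible; the string is unchanged
    rows = [""] * num_rows
    row, step = 0, 1
    for ch in s:
        rows[row] += ch
        if row == 0:
            step = 1          # bounce off the top row
        elif row == num_rows - 1:
            step = -1         # bounce off the bottom row
        row += step
    return "".join(rows)

print(convert("PAYPALISHIRING", 3))  # PAHNAPLSIIGYIR
print(convert("PAYPALISHIRING", 4))  # PINALSIGYAHRPI
```

This matches all three expected outputs from the prompt, confirming that the test results above are the ones a correct solution should produce.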
Conclusion
MiniMax M2.7 has entered an important arena with the new open-weight option. What makes it even more exciting is that this is not some stripped-down open model with limited usefulness. M2.7 arrives with clear strength across coding, tool use, and office-style work. In other words, we now have a whole new way to access a model that is built for real, everyday tasks.
That is exactly why M2.7 stands out. It combines accessibility with serious capability. And in a market where the most powerful AI still lives behind closed doors, that alone makes it worth paying attention to.
