README.md CHANGED
@@ -1,224 +1,221 @@
1
- ---
2
- pipeline_tag: text-generation
3
- license: apache-2.0
4
- library_name: transformers
5
- tags:
6
- - vllm
7
- ---
8
-
9
- <div align="center">
10
-
11
- <svg width="60%" height="auto" viewBox="0 0 144 48" fill="none" xmlns="http://www.w3.org/2000/svg">
12
- <path d="M26.6782 7.96523C26.6782 7.02436 25.913 6.26087 24.9739 6.26087C24.0348 6.26087 23.2695 7.0261 23.2695 7.96523V36.2139C23.2695 38.4 21.4904 40.1791 19.3043 40.1791C17.1183 40.1791 15.3391 38.4 15.3391 36.2139V18.0904C15.3391 17.1496 14.5739 16.3861 13.6348 16.3861C12.6956 16.3861 11.9304 17.1513 11.9304 18.0904V25.7722C11.9304 27.9583 10.1513 29.7374 7.96518 29.7374C5.7791 29.7374 4 27.9583 4 25.7722V22.9878C4 22.3635 4.50609 21.8574 5.13043 21.8574C5.75478 21.8574 6.26087 22.3635 6.26087 22.9878V25.7722C6.26087 26.713 7.02605 27.4765 7.96518 27.4765C8.90431 27.4765 9.66954 26.7113 9.66954 25.7722V18.0904C9.66954 15.9044 11.4487 14.1252 13.6348 14.1252C15.8209 14.1252 17.6 15.9044 17.6 18.0904V36.2139C17.6 37.1548 18.3652 37.9183 19.3043 37.9183C20.2435 37.9183 21.0087 37.153 21.0087 36.2139V25.1322V7.96523C21.0087 5.77914 22.7878 4 24.9739 4C27.16 4 28.9391 5.77914 28.9391 7.96523V31.3565C28.9391 31.9809 28.433 32.487 27.8087 32.487C27.1843 32.487 26.6782 31.9809 26.6782 31.3565V7.96523ZM47.6539 14.1252C45.4678 14.1252 43.6887 15.9044 43.6887 18.0904V33.2296C43.6887 34.1704 42.9235 34.9339 41.9843 34.9339C41.0452 34.9339 40.28 34.1687 40.28 33.2296V7.96523C40.28 5.77914 38.5008 4 36.3148 4C34.1287 4 32.3496 5.77914 32.3496 7.96523V40.0348C32.3496 40.9756 31.5843 41.7391 30.6452 41.7391C29.7061 41.7391 28.9409 40.9739 28.9409 40.0348V36.0643C28.9409 35.44 28.4348 34.9339 27.8104 34.9339C27.1861 34.9339 26.68 35.44 26.68 36.0643V40.0348C26.68 42.2209 28.4591 44 30.6452 44C32.8313 44 34.6104 42.2209 34.6104 40.0348V7.96523C34.6104 7.02436 35.3756 6.26087 36.3148 6.26087C37.2539 6.26087 38.0191 7.0261 38.0191 7.96523V33.2296C38.0191 35.4156 39.7982 37.1948 41.9843 37.1948C44.1704 37.1948 45.9496 35.4156 45.9496 33.2296V18.0904C45.9496 17.1496 46.7148 16.3861 47.6539 16.3861C48.593 16.3861 49.3582 17.1513 49.3582 18.0904V31.3565C49.3582 31.9809 49.8643 32.487 50.4887 32.487C51.113 32.487 51.6191 31.9809 51.6191 31.3565V18.0904C51.6191 15.9044 49.84 14.1252 
47.6539 14.1252Z" fill="url(#paint0_linear_17_483)"/>
13
- <path d="M68.7671 16.5615H71.2541C71.3254 16.5615 71.3845 16.5859 71.435 16.6363C71.4836 16.6868 71.5097 16.7459 71.5097 16.8172V31.1824C71.5097 31.2537 71.4854 31.3128 71.435 31.3633C71.3845 31.4137 71.3254 31.4381 71.2541 31.4381H68.7671C68.6958 31.4381 68.6367 31.4137 68.5862 31.3633C68.5358 31.3146 68.5115 31.2537 68.5115 31.1824V21.812C68.5115 21.7563 68.4976 21.7268 68.4697 21.7268C68.4419 21.7268 68.4123 21.7476 68.3845 21.7911L66.1323 25.318C66.061 25.4311 65.9619 25.4885 65.8349 25.4885H64.581C64.4541 25.4885 64.3549 25.4328 64.2836 25.318L62.0315 21.7911C62.0036 21.7494 61.9741 21.7302 61.9462 21.7372C61.9184 21.7441 61.9045 21.7772 61.9045 21.8328V31.1824C61.9045 31.2537 61.8802 31.3128 61.8297 31.3633C61.7793 31.4137 61.7202 31.4381 61.6489 31.4381H59.1619C59.0906 31.4381 59.0315 31.4137 58.981 31.3633C58.9306 31.3146 58.9062 31.2537 58.9062 31.1824V16.8172C58.9062 16.7459 58.9306 16.6868 58.981 16.6363C59.0315 16.5859 59.0906 16.5615 59.1619 16.5615H61.6489C61.7758 16.5615 61.8749 16.6189 61.9462 16.732L65.1341 21.6833C65.1758 21.7685 65.2193 21.7685 65.261 21.6833L68.4697 16.732C68.541 16.6189 68.6402 16.5615 68.7671 16.5615Z" fill="currentColor"/>
14
- <path d="M74.1764 31.3633C74.1259 31.3146 74.1016 31.2537 74.1016 31.1824V16.8172C74.1016 16.7459 74.1259 16.6868 74.1764 16.6363C74.2268 16.5859 74.2859 16.5615 74.3572 16.5615H76.8442C76.9155 16.5615 76.9746 16.5859 77.0251 16.6363C77.0737 16.6868 77.0998 16.7459 77.0998 16.8172V31.1824C77.0998 31.2537 77.0755 31.3128 77.0251 31.3633C76.9746 31.4137 76.9155 31.4381 76.8442 31.4381H74.3572C74.2859 31.4381 74.2268 31.4137 74.1764 31.3633Z" fill="currentColor"/>
15
- <path d="M88.3066 16.6361C88.3553 16.5874 88.4162 16.5613 88.4875 16.5613H90.9744C91.0457 16.5613 91.1049 16.5857 91.1553 16.6361C91.204 16.6865 91.2301 16.7457 91.2301 16.817V31.1822C91.2301 31.2535 91.2057 31.3126 91.1553 31.363C91.1049 31.4135 91.0457 31.4378 90.9744 31.4378H88.5727C88.4301 31.4378 88.331 31.3822 88.2753 31.2674L82.771 22.1717C82.7431 22.13 82.7136 22.1109 82.6858 22.1178C82.6579 22.1248 82.644 22.1578 82.644 22.2135L82.6858 31.1805C82.6858 31.2518 82.6614 31.3109 82.611 31.3613C82.5606 31.4117 82.5014 31.4361 82.4301 31.4361H79.9431C79.8718 31.4361 79.8127 31.4117 79.7623 31.3613C79.7118 31.3126 79.6875 31.2518 79.6875 31.1805V16.8152C79.6875 16.7439 79.7118 16.6848 79.7623 16.6344C79.8127 16.5839 79.8718 16.5596 79.9431 16.5596H82.3449C82.4858 16.5596 82.5849 16.617 82.6423 16.73L88.124 25.7822C88.1518 25.8239 88.1797 25.8431 88.2092 25.8361C88.2371 25.8292 88.251 25.7978 88.251 25.7404L88.2301 16.8152C88.2301 16.7439 88.2545 16.6848 88.3049 16.6344L88.3066 16.6361Z" fill="currentColor"/>
16
- <path d="M93.8951 31.3633C93.8446 31.3146 93.8203 31.2537 93.8203 31.1824V16.8172C93.8203 16.7459 93.8446 16.6868 93.8951 16.6363C93.9455 16.5859 94.0047 16.5615 94.076 16.5615H96.5629C96.6342 16.5615 96.6934 16.5859 96.7438 16.6363C96.7925 16.6868 96.8186 16.7459 96.8186 16.8172V31.1824C96.8186 31.2537 96.7942 31.3128 96.7438 31.3633C96.6934 31.4137 96.6342 31.4381 96.5629 31.4381H94.076C94.0047 31.4381 93.9455 31.4137 93.8951 31.3633Z" fill="currentColor"/>
17
- <path d="M109.267 16.5615H111.754C111.825 16.5615 111.885 16.5859 111.935 16.6363C111.984 16.6868 112.01 16.7459 112.01 16.8172V31.1824C112.01 31.2537 111.985 31.3128 111.935 31.3633C111.885 31.4137 111.825 31.4381 111.754 31.4381H109.267C109.196 31.4381 109.137 31.4137 109.086 31.3633C109.036 31.3146 109.011 31.2537 109.011 31.1824V21.812C109.011 21.7563 108.998 21.7268 108.97 21.7268C108.942 21.7268 108.912 21.7476 108.885 21.7911L106.632 25.318C106.561 25.4311 106.462 25.4885 106.335 25.4885H105.081C104.954 25.4885 104.855 25.4328 104.784 25.318L102.531 21.7911C102.504 21.7494 102.474 21.7302 102.446 21.7372C102.418 21.7441 102.405 21.7772 102.405 21.8328V31.1824C102.405 31.2537 102.38 31.3128 102.33 31.3633C102.279 31.4137 102.22 31.4381 102.149 31.4381H99.6619C99.5906 31.4381 99.5315 31.4137 99.481 31.3633C99.4306 31.3146 99.4062 31.2537 99.4062 31.1824V16.8172C99.4062 16.7459 99.4306 16.6868 99.481 16.6363C99.5315 16.5859 99.5906 16.5615 99.6619 16.5615H102.149C102.276 16.5615 102.375 16.6189 102.446 16.732L105.634 21.6833C105.676 21.7685 105.719 21.7685 105.761 21.6833L108.97 16.732C109.041 16.6189 109.14 16.5615 109.267 16.5615Z" fill="currentColor"/>
18
- <path d="M123.782 31.2241L123.144 29.1424C123.116 29.0867 123.079 29.0572 123.038 29.0572H117.81C117.768 29.0572 117.732 29.085 117.704 29.1424L117.088 31.2241C117.046 31.3668 116.954 31.4363 116.812 31.4363H114.112C114.027 31.4363 113.963 31.412 113.921 31.3615C113.879 31.3128 113.871 31.2381 113.9 31.1389L118.49 16.7737C118.532 16.6328 118.624 16.5615 118.766 16.5615H122.102C122.243 16.5615 122.335 16.6328 122.379 16.7737L126.968 31.1389C126.982 31.1668 126.989 31.2033 126.989 31.245C126.989 31.372 126.911 31.4363 126.756 31.4363H124.057C123.916 31.4363 123.824 31.365 123.78 31.2241H123.782ZM118.554 26.7407H122.295C122.38 26.7407 122.408 26.6989 122.38 26.6137L120.467 20.3024C120.453 20.2467 120.432 20.2207 120.403 20.2276C120.375 20.2346 120.352 20.2589 120.339 20.3024L118.469 26.6137C118.455 26.6989 118.483 26.7407 118.554 26.7407Z" fill="currentColor"/>
19
- <path d="M128.222 31.353C128.18 31.2974 128.187 31.2261 128.243 31.1409L132.365 24.0643C132.393 24.0226 132.393 23.9791 132.365 23.9374L128.243 16.8609L128.201 16.7339C128.201 16.6209 128.28 16.5635 128.434 16.5635H131.133C131.274 16.5635 131.38 16.6209 131.452 16.7339L134.213 21.6C134.255 21.6852 134.299 21.6852 134.34 21.6L137.102 16.7339C137.173 16.6209 137.28 16.5635 137.42 16.5635H140.099C140.198 16.5635 140.269 16.5913 140.311 16.6487C140.353 16.7061 140.346 16.7756 140.29 16.8609L136.168 23.9374C136.154 23.9791 136.154 24.0226 136.168 24.0643L140.29 31.1409L140.332 31.2678C140.332 31.3809 140.253 31.4383 140.099 31.4383H137.42C137.278 31.4383 137.172 31.3826 137.102 31.2678L134.34 26.4226C134.299 26.3374 134.255 26.3374 134.213 26.4226L131.429 31.2678C131.358 31.3809 131.252 31.4383 131.111 31.4383H128.433C128.333 31.4383 128.262 31.4104 128.22 31.353H128.222Z" fill="currentColor"/>
20
- <defs>
21
- <linearGradient id="paint0_linear_17_483" x1="3.99826" y1="24" x2="51.6208" y2="24" gradientUnits="userSpaceOnUse">
22
- <stop stop-color="#E21680"/>
23
- <stop offset="1" stop-color="#FF633A"/>
24
- </linearGradient>
25
- </defs>
26
- </svg>
27
-
28
- </div>
29
- <hr>
30
-
31
- <div align="center" style="line-height: 1;">
32
- <a href="https://www.minimax.io" target="_blank" style="margin: 2px;">
33
- <img alt="Homepage" src="https://img.shields.io/badge/_Homepage-MiniMax-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
34
- </a>
35
- <a href="https://arxiv.org/abs/2506.13585" target="_blank" style="margin: 2px;">
36
- <img alt="Paper" src="https://img.shields.io/badge/📖_Paper-MiniMax--M1-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
37
- </a>
38
- <a href="https://chat.minimax.io/" target="_blank" style="margin: 2px;">
39
- <img alt="Chat" src="https://img.shields.io/badge/_MiniMax_Chat-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
40
- </a>
41
- <a href="https://www.minimax.io/platform" style="margin: 2px;">
42
- <img alt="API" src="https://img.shields.io/badge/⚡_API-Platform-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
43
- </a>
44
- <a href="https://github.com/MiniMax-AI/MiniMax-MCP" style="margin: 2px;">
45
- <img alt="MCP" src="https://img.shields.io/badge/🚀_MCP-MiniMax_MCP-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
46
- </a>
47
- </div>
48
- <div align="center" style="line-height: 1;">
49
- <a href="https://huggingface.co/MiniMaxAI" target="_blank" style="margin: 2px;">
50
- <img alt="Hugging Face" src="https://img.shields.io/badge/🤗_Hugging_Face-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
51
- </a>
52
- <a href="https://github.com/MiniMax-AI/MiniMax-M1" target="_blank" style="margin: 2px;">
53
- <img alt="GitHub" src="https://img.shields.io/badge/🐙_GitHub-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
54
- </a>
55
- <a href="https://www.modelscope.cn/organization/MiniMax" target="_blank" style="margin: 2px;">
56
- <img alt="ModelScope" src="https://img.shields.io/badge/🤖️_ModelScope-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
57
- </a>
58
- <a href="https://github.com/MiniMax-AI/MiniMax-M1/blob/main/LICENSE" style="margin: 2px;">
59
- <img alt="License" src="https://img.shields.io/badge/⚖️_License-Apache_2.0-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
60
- </a>
61
- <a href="https://github.com/MiniMax-AI/MiniMax-AI.github.io/blob/main/images/wechat-qrcode.jpeg" target="_blank" style="margin: 2px;">
62
- <img alt="WeChat" src="https://img.shields.io/badge/💬_WeChat-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
63
- </a>
64
- </div>
65
-
66
- # MiniMax-M1
67
-
68
- ## 1. Model Overview
69
-
70
- We introduce MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.
71
- MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning
72
- attention mechanism. The model is developed based on our previous [MiniMax-Text-01 model](https://huggingface.co/MiniMaxAI/MiniMax-Text-01),
73
- which contains a total of 456 billion parameters with 45.9 billion parameters activated
74
- per token. Consistent with MiniMax-Text-01, the M1 model natively supports a context length of 1
75
- million tokens, 8x the context size of DeepSeek R1. Furthermore, the lightning attention mechanism
76
- in MiniMax-M1 enables efficient scaling of test-time compute. For example, compared to DeepSeek
77
- R1, M1 consumes 25% of the FLOPs at a generation length of 100K tokens. These properties make M1
78
- particularly suitable for complex tasks that require processing long inputs and thinking extensively.
79
- MiniMax-M1 is trained using large-scale reinforcement learning (RL) on diverse problems ranging from
80
- traditional mathematical reasoning to sandbox-based, real-world software engineering environments.
81
- We develop an efficient RL scaling framework for M1 highlighting two perspectives: (1) We propose
82
- CISPO, a novel algorithm that clips importance sampling weights instead of token updates, which
83
- outperforms other competitive RL variants; (2) Our hybrid-attention design naturally enhances the
84
- efficiency of RL, where we address unique challenges when scaling RL with the hybrid architecture. We
85
- train two versions of MiniMax-M1 models with [40K](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k) and
86
- [80K](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k) thinking budgets respectively. Experiments
87
- on standard benchmarks show that our models outperform other strong open-weight models such as
88
- the original DeepSeek-R1 and Qwen3-235B, particularly on complex software engineering, tool use,
89
- and long context tasks. With efficient scaling of test-time compute, MiniMax-M1 serves as a strong
90
- foundation for next-generation language model agents to reason and tackle real-world challenges.
91
-
92
- <p align="center">
93
- <img width="100%" src="figures/TextBench.png">
94
- <br>
95
- <small><em>Benchmark performance comparison of leading commercial and open-weight models across competition-level mathematics, coding, software engineering, agentic tool use, and long-context understanding tasks. We use the MiniMax-M1-80k model here for MiniMax-M1.</em></small>
96
- </p>
97
-
98
-
99
- ## 2. Evaluation
100
-
101
- **Performance of MiniMax-M1 on core benchmarks.**
102
-
103
-
104
- | **Category** | **Task** | **MiniMax-M1-80K** | **MiniMax-M1-40K** | **Qwen3-235B-A22B** | **DeepSeek-R1-0528** | **DeepSeek-R1** | **Seed-Thinking-v1.5** | **Claude 4 Opus** | **Gemini 2.5 Pro (06-05)** | **OpenAI-o3** |
105
- |:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
106
- | | *Extended Thinking* | *80K* | *40K* | *32k* | *64k* | *32k* | *32k* | *64k* | *64k* | *100k* |
107
- | ***Mathematics*** | AIME 2024 | 86.0 | 83.3 | 85.7 | 91.4 | 79.8 | 86.7 | 76.0 | 92.0 | 91.6 |
108
- | | AIME 2025 | 76.9 | 74.6 | 81.5 | 87.5 | 70.0 | 74.0 | 75.5 | 88.0 | 88.9 |
109
- | | MATH-500 | 96.8 | 96.0 | 96.2 | 98.0 | 97.3 | 96.7 | 98.2 | 98.8 | 98.1 |
110
- | ***General Coding*** | LiveCodeBench *(24/8~25/5)* | 65.0 | 62.3 | 65.9 | 73.1 | 55.9 | 67.5 | 56.6 | 77.1 | 75.8 |
111
- | | FullStackBench | 68.3 | 67.6 | 62.9 | 69.4 | 70.1 | 69.9 | 70.3 | -- | 69.3 |
112
- | ***Reasoning & Knowledge***| GPQA Diamond | 70.0 | 69.2 | 71.1 | 81.0 | 71.5 | 77.3 | 79.6 | 86.4 | 83.3 |
113
- | | HLE *(no tools)* | 8.4\* | 7.2\* | 7.6\* | 17.7\* | 8.6\* | 8.2 | 10.7 | 21.6 | 20.3 |
114
- | | ZebraLogic | 86.8 | 80.1 | 80.3 | 95.1 | 78.7 | 84.4 | 95.1 | 91.6 | 95.8 |
115
- | | MMLU-Pro | 81.1 | 80.6 | 83.0 | 85.0 | 84.0 | 87.0 | 85.0 | 86.0 | 85.0 |
116
- | ***Software Engineering***| SWE-bench Verified| 56.0 | 55.6 | 34.4 | 57.6 | 49.2 | 47.0 | 72.5 | 67.2 | 69.1 |
117
- | ***Long Context*** | OpenAI-MRCR *(128k)* | 73.4 | 76.1 | 27.7 | 51.5 | 35.8 | 54.3 | 48.9 | 76.8 | 56.5 |
118
- | | OpenAI-MRCR *(1M)* | 56.2 | 58.6 | -- | -- | -- | -- | -- | 58.8 | -- |
119
- | | LongBench-v2 | 61.5 | 61.0 | 50.1 | 52.1 | 58.3 | 52.5 | 55.6 | 65.0 | 58.8 |
120
- | ***Agentic Tool Use***| TAU-bench *(airline)* | 62.0 | 60.0 | 34.7 | 53.5 | -- | 44.0 | 59.6 | 50.0 | 52.0 |
121
- | | TAU-bench *(retail)* | 63.5 | 67.8 | 58.6 | 63.9 | -- | 55.7 | 81.4 | 67.0 | 73.9 |
122
- | ***Factuality*** | SimpleQA | 18.5 | 17.9 | 11.0 | 27.8 | 30.1 | 12.9 | -- | 54.0 | 49.4 |
123
- | ***General Assistant***| MultiChallenge | 44.7 | 44.7 | 40.0 | 45.0 | 40.7 | 43.0 | 45.8 | 51.8 | 56.5 |
124
-
125
- \* conducted on the text-only HLE subset.
126
-
127
- Our models are evaluated with `temperature=1.0`, `top_p=0.95`.
128
-
129
- ### SWE-bench methodology
130
- We report results derived from the Agentless scaffold. Departing from the original pipeline, our methodology employs a two-stage localization process (without any embedding-based retrieval mechanisms): initial coarse-grained file localization followed by fine-grained localization to specific files and code elements. The values for our models are calculated on the subset of n=486 verified tasks that run on our infrastructure; the 14 excluded test cases, which were incompatible with our internal infrastructure, are:
131
- `"astropy__astropy-7606"`,
132
- `"astropy__astropy-8707"`,
133
- `"astropy__astropy-8872"`,
134
- `"django__django-10097"`,
135
- `"matplotlib__matplotlib-20488"`,
136
- `"psf__requests-2317"`,
137
- `"psf__requests-2931"`,
138
- `"psf__requests-5414"`,
139
- `"pylint-dev__pylint-6528"`,
140
- `"pylint-dev__pylint-7277"`,
141
- `"sphinx-doc__sphinx-10435"`,
142
- `"sphinx-doc__sphinx-7985"`,
143
- `"sphinx-doc__sphinx-8269"`,
144
- `"sphinx-doc__sphinx-8475"`
145
-
146
- ### TAU-bench methodology
147
- We evaluate TAU-Bench with GPT-4.1 as the user model and without any custom tools. The maximum number of interaction steps is 40.
148
- Our general system prompt is:
149
- ```
150
- - In each round, you need to carefully examine the tools provided to you to determine if any can be used.
151
- - You must adhere to all of the policies. Pay attention to the details in the terms. Solutions for most situations can be found within these policies.
152
- ```
153
-
154
- ## 3. Recommendations for MiniMax-M1 Model Usage
155
-
156
- To achieve the best results with the MiniMax-M1 model, we suggest focusing on two key points: inference parameters and the system prompt.
157
-
158
- ### 3.1. Inference Parameters
159
- - Temperature: **`1.0`**
160
- - Top_p: **`0.95`**
161
-
162
- These settings encourage creativity and diversity in the model's responses, allowing it to explore a wider range of linguistic possibilities and preventing rigid or repetitive outputs, while still maintaining strong logical coherence.
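As a minimal sketch, the recommended parameters can be attached to an OpenAI-compatible chat-completion payload. The model id `MiniMax-M1-80k` and the helper name here are placeholders, not part of any official client:

```python
import json

# Recommended MiniMax-M1 sampling settings from the guide above.
RECOMMENDED_PARAMS = {"temperature": 1.0, "top_p": 0.95}

def build_chat_request(system_prompt: str, user_message: str) -> dict:
    """Assemble a chat-completion payload with the recommended settings.

    The model id is a placeholder; use whatever name your serving
    framework registers for the downloaded checkpoint.
    """
    return {
        "model": "MiniMax-M1-80k",  # placeholder model id
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        **RECOMMENDED_PARAMS,
    }

payload = build_chat_request(
    "You are a helpful assistant.",
    "Summarize vLLM in one sentence.",
)
print(json.dumps(payload, indent=2))
```

The same dictionary can be posted to any OpenAI-compatible endpoint, so the sampling settings travel with every request rather than relying on server defaults.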
163
-
164
- ### 3.2. System Prompt
165
- Tailoring your system prompt to the specific task is crucial for guiding the model effectively. Below are suggested settings for different scenarios.
166
-
167
- #### A. General-Purpose Scenarios
168
- For common tasks like summarization, translation, Q&A, or creative writing:
169
- ```
170
- You are a helpful assistant.
171
- ```
172
- #### B. Web Development Scenarios
173
- For complex tasks like generating code for web pages:
174
- ```
175
- You are a web development engineer, writing web pages according to the instructions below. You are a powerful code editing assistant capable of writing code and creating artifacts in conversations with users, or modifying and updating existing artifacts as requested by users.
176
- All code is written in a single code block to form a complete code file for display, without separating HTML and JavaScript code. An artifact refers to a runnable complete code snippet, you prefer to integrate and output such complete runnable code rather than breaking it down into several code blocks. For certain types of code, they can render graphical interfaces in a UI window. After generation, please check the code execution again to ensure there are no errors in the output.
177
- Output only the HTML, without any additional descriptive text. Make the UI look modern and beautiful.
178
- ```
179
- #### C. Mathematical Scenarios
180
- When dealing with problems that require calculation or logical deduction:
181
- ```
182
- Please reason step by step, and put your final answer within \boxed{}.
183
- ```
184
-
185
- ## 4. Deployment Guide
186
-
187
- Download the model from HuggingFace repository:
188
- - [MiniMax-M1-40k](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k)
189
- - [MiniMax-M1-80k](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k)
190
-
191
- For production deployment, we recommend using [vLLM](https://docs.vllm.ai/en/latest/) to serve MiniMax-M1. vLLM provides excellent performance for serving large language models with the following features:
192
- 🔥 Outstanding serving throughput performance
193
- - ⚡ Efficient and intelligent memory management
194
- - 📦 Powerful batch request processing capability
195
- - ⚙️ Deeply optimized underlying performance
196
-
197
- For detailed vLLM deployment instructions, please refer to our [vLLM Deployment Guide](./docs/vllm_deployment_guide.md). Special note: vLLM versions below 0.9.2 may result in incompatibility or incorrect precision for this model.
198
- Alternatively, you can also deploy using Transformers directly. For detailed Transformers deployment instructions, you can see our [MiniMax-M1 Transformers Deployment Guide](./docs/transformers_deployment_guide.md).
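A minimal launch sketch for the vLLM path, assuming vLLM ≥ 0.9.2 on a multi-GPU node; the tensor-parallel size is an illustrative assumption, not a requirement:

```shell
# Serve a MiniMax-M1 checkpoint through vLLM's OpenAI-compatible server.
# --tensor-parallel-size 8 assumes an 8-GPU node; adjust to your hardware.
# The server listens on port 8000 by default.
vllm serve MiniMaxAI/MiniMax-M1-40k \
  --trust-remote-code \
  --tensor-parallel-size 8
```

Clients can then send chat-completion requests to `http://localhost:8000/v1` using the sampling settings recommended in Section 3.1.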
199
-
200
-
201
- ## 5. Function Calling
202
-
203
- The MiniMax-M1 model supports function calling, enabling it to identify when external functions should be invoked and to output the call parameters in a structured format. The [MiniMax-M1 Function Call Guide](./docs/function_call_guide.md) provides detailed instructions on using this feature.
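As an illustration of the request shape, a tool definition can be passed alongside the messages in OpenAI-style format; the `get_weather` tool below is a made-up example, and the exact schema the model expects is documented in the Function Call Guide:

```python
import json

# Hypothetical tool definition in OpenAI-style function-calling format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# The model id is a placeholder for whatever your server registers.
request = {
    "model": "MiniMax-M1-40k",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": tools,
}

print(json.dumps(request, indent=2))
```

When the model decides a tool is needed, the response carries the chosen function name and a JSON-encoded arguments string that the caller executes and feeds back as a tool message.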
204
-
205
-
206
- ## 6. Chatbot & API
207
- For general use and evaluation, we provide a [Chatbot](https://chat.minimax.io/) with online search capabilities and an [online API](https://www.minimax.io/platform/) for developers. We also provide the [MiniMax MCP Server](https://github.com/MiniMax-AI/MiniMax-MCP), which offers video generation, image generation, speech synthesis, and voice cloning for developers.
208
-
209
-
210
- ## 7. Citation
211
- ```
212
- @misc{minimax2025minimaxm1scalingtesttimecompute,
213
- title={MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention},
214
- author={MiniMax},
215
- year={2025},
216
- eprint={2506.13585},
217
- archivePrefix={arXiv},
218
- primaryClass={cs.CL},
219
- url={https://arxiv.org/abs/2506.13585},
220
- }
221
- ```
222
-
223
- ## 8. Contact Us
224
  Contact us at [[email protected]](mailto:[email protected]).
 
1
+ ---
2
+ pipeline_tag: text-generation
3
+ license: apache-2.0
4
+ ---
5
+
6
+ <div align="center">
7
+
8
+ <svg width="60%" height="auto" viewBox="0 0 144 48" fill="none" xmlns="http://www.w3.org/2000/svg">
9
+ <path d="M26.6782 7.96523C26.6782 7.02436 25.913 6.26087 24.9739 6.26087C24.0348 6.26087 23.2695 7.0261 23.2695 7.96523V36.2139C23.2695 38.4 21.4904 40.1791 19.3043 40.1791C17.1183 40.1791 15.3391 38.4 15.3391 36.2139V18.0904C15.3391 17.1496 14.5739 16.3861 13.6348 16.3861C12.6956 16.3861 11.9304 17.1513 11.9304 18.0904V25.7722C11.9304 27.9583 10.1513 29.7374 7.96518 29.7374C5.7791 29.7374 4 27.9583 4 25.7722V22.9878C4 22.3635 4.50609 21.8574 5.13043 21.8574C5.75478 21.8574 6.26087 22.3635 6.26087 22.9878V25.7722C6.26087 26.713 7.02605 27.4765 7.96518 27.4765C8.90431 27.4765 9.66954 26.7113 9.66954 25.7722V18.0904C9.66954 15.9044 11.4487 14.1252 13.6348 14.1252C15.8209 14.1252 17.6 15.9044 17.6 18.0904V36.2139C17.6 37.1548 18.3652 37.9183 19.3043 37.9183C20.2435 37.9183 21.0087 37.153 21.0087 36.2139V25.1322V7.96523C21.0087 5.77914 22.7878 4 24.9739 4C27.16 4 28.9391 5.77914 28.9391 7.96523V31.3565C28.9391 31.9809 28.433 32.487 27.8087 32.487C27.1843 32.487 26.6782 31.9809 26.6782 31.3565V7.96523ZM47.6539 14.1252C45.4678 14.1252 43.6887 15.9044 43.6887 18.0904V33.2296C43.6887 34.1704 42.9235 34.9339 41.9843 34.9339C41.0452 34.9339 40.28 34.1687 40.28 33.2296V7.96523C40.28 5.77914 38.5008 4 36.3148 4C34.1287 4 32.3496 5.77914 32.3496 7.96523V40.0348C32.3496 40.9756 31.5843 41.7391 30.6452 41.7391C29.7061 41.7391 28.9409 40.9739 28.9409 40.0348V36.0643C28.9409 35.44 28.4348 34.9339 27.8104 34.9339C27.1861 34.9339 26.68 35.44 26.68 36.0643V40.0348C26.68 42.2209 28.4591 44 30.6452 44C32.8313 44 34.6104 42.2209 34.6104 40.0348V7.96523C34.6104 7.02436 35.3756 6.26087 36.3148 6.26087C37.2539 6.26087 38.0191 7.0261 38.0191 7.96523V33.2296C38.0191 35.4156 39.7982 37.1948 41.9843 37.1948C44.1704 37.1948 45.9496 35.4156 45.9496 33.2296V18.0904C45.9496 17.1496 46.7148 16.3861 47.6539 16.3861C48.593 16.3861 49.3582 17.1513 49.3582 18.0904V31.3565C49.3582 31.9809 49.8643 32.487 50.4887 32.487C51.113 32.487 51.6191 31.9809 51.6191 31.3565V18.0904C51.6191 15.9044 49.84 14.1252 
47.6539 14.1252Z" fill="url(#paint0_linear_17_483)"/>
10
+ <path d="M68.7671 16.5615H71.2541C71.3254 16.5615 71.3845 16.5859 71.435 16.6363C71.4836 16.6868 71.5097 16.7459 71.5097 16.8172V31.1824C71.5097 31.2537 71.4854 31.3128 71.435 31.3633C71.3845 31.4137 71.3254 31.4381 71.2541 31.4381H68.7671C68.6958 31.4381 68.6367 31.4137 68.5862 31.3633C68.5358 31.3146 68.5115 31.2537 68.5115 31.1824V21.812C68.5115 21.7563 68.4976 21.7268 68.4697 21.7268C68.4419 21.7268 68.4123 21.7476 68.3845 21.7911L66.1323 25.318C66.061 25.4311 65.9619 25.4885 65.8349 25.4885H64.581C64.4541 25.4885 64.3549 25.4328 64.2836 25.318L62.0315 21.7911C62.0036 21.7494 61.9741 21.7302 61.9462 21.7372C61.9184 21.7441 61.9045 21.7772 61.9045 21.8328V31.1824C61.9045 31.2537 61.8802 31.3128 61.8297 31.3633C61.7793 31.4137 61.7202 31.4381 61.6489 31.4381H59.1619C59.0906 31.4381 59.0315 31.4137 58.981 31.3633C58.9306 31.3146 58.9062 31.2537 58.9062 31.1824V16.8172C58.9062 16.7459 58.9306 16.6868 58.981 16.6363C59.0315 16.5859 59.0906 16.5615 59.1619 16.5615H61.6489C61.7758 16.5615 61.8749 16.6189 61.9462 16.732L65.1341 21.6833C65.1758 21.7685 65.2193 21.7685 65.261 21.6833L68.4697 16.732C68.541 16.6189 68.6402 16.5615 68.7671 16.5615Z" fill="currentColor"/>
11
+ <path d="M74.1764 31.3633C74.1259 31.3146 74.1016 31.2537 74.1016 31.1824V16.8172C74.1016 16.7459 74.1259 16.6868 74.1764 16.6363C74.2268 16.5859 74.2859 16.5615 74.3572 16.5615H76.8442C76.9155 16.5615 76.9746 16.5859 77.0251 16.6363C77.0737 16.6868 77.0998 16.7459 77.0998 16.8172V31.1824C77.0998 31.2537 77.0755 31.3128 77.0251 31.3633C76.9746 31.4137 76.9155 31.4381 76.8442 31.4381H74.3572C74.2859 31.4381 74.2268 31.4137 74.1764 31.3633Z" fill="currentColor"/>
12
+ <path d="M88.3066 16.6361C88.3553 16.5874 88.4162 16.5613 88.4875 16.5613H90.9744C91.0457 16.5613 91.1049 16.5857 91.1553 16.6361C91.204 16.6865 91.2301 16.7457 91.2301 16.817V31.1822C91.2301 31.2535 91.2057 31.3126 91.1553 31.363C91.1049 31.4135 91.0457 31.4378 90.9744 31.4378H88.5727C88.4301 31.4378 88.331 31.3822 88.2753 31.2674L82.771 22.1717C82.7431 22.13 82.7136 22.1109 82.6858 22.1178C82.6579 22.1248 82.644 22.1578 82.644 22.2135L82.6858 31.1805C82.6858 31.2518 82.6614 31.3109 82.611 31.3613C82.5606 31.4117 82.5014 31.4361 82.4301 31.4361H79.9431C79.8718 31.4361 79.8127 31.4117 79.7623 31.3613C79.7118 31.3126 79.6875 31.2518 79.6875 31.1805V16.8152C79.6875 16.7439 79.7118 16.6848 79.7623 16.6344C79.8127 16.5839 79.8718 16.5596 79.9431 16.5596H82.3449C82.4858 16.5596 82.5849 16.617 82.6423 16.73L88.124 25.7822C88.1518 25.8239 88.1797 25.8431 88.2092 25.8361C88.2371 25.8292 88.251 25.7978 88.251 25.7404L88.2301 16.8152C88.2301 16.7439 88.2545 16.6848 88.3049 16.6344L88.3066 16.6361Z" fill="currentColor"/>
13
+ <path d="M93.8951 31.3633C93.8446 31.3146 93.8203 31.2537 93.8203 31.1824V16.8172C93.8203 16.7459 93.8446 16.6868 93.8951 16.6363C93.9455 16.5859 94.0047 16.5615 94.076 16.5615H96.5629C96.6342 16.5615 96.6934 16.5859 96.7438 16.6363C96.7925 16.6868 96.8186 16.7459 96.8186 16.8172V31.1824C96.8186 31.2537 96.7942 31.3128 96.7438 31.3633C96.6934 31.4137 96.6342 31.4381 96.5629 31.4381H94.076C94.0047 31.4381 93.9455 31.4137 93.8951 31.3633Z" fill="currentColor"/>
14
+ <path d="M109.267 16.5615H111.754C111.825 16.5615 111.885 16.5859 111.935 16.6363C111.984 16.6868 112.01 16.7459 112.01 16.8172V31.1824C112.01 31.2537 111.985 31.3128 111.935 31.3633C111.885 31.4137 111.825 31.4381 111.754 31.4381H109.267C109.196 31.4381 109.137 31.4137 109.086 31.3633C109.036 31.3146 109.011 31.2537 109.011 31.1824V21.812C109.011 21.7563 108.998 21.7268 108.97 21.7268C108.942 21.7268 108.912 21.7476 108.885 21.7911L106.632 25.318C106.561 25.4311 106.462 25.4885 106.335 25.4885H105.081C104.954 25.4885 104.855 25.4328 104.784 25.318L102.531 21.7911C102.504 21.7494 102.474 21.7302 102.446 21.7372C102.418 21.7441 102.405 21.7772 102.405 21.8328V31.1824C102.405 31.2537 102.38 31.3128 102.33 31.3633C102.279 31.4137 102.22 31.4381 102.149 31.4381H99.6619C99.5906 31.4381 99.5315 31.4137 99.481 31.3633C99.4306 31.3146 99.4062 31.2537 99.4062 31.1824V16.8172C99.4062 16.7459 99.4306 16.6868 99.481 16.6363C99.5315 16.5859 99.5906 16.5615 99.6619 16.5615H102.149C102.276 16.5615 102.375 16.6189 102.446 16.732L105.634 21.6833C105.676 21.7685 105.719 21.7685 105.761 21.6833L108.97 16.732C109.041 16.6189 109.14 16.5615 109.267 16.5615Z" fill="currentColor"/>
15
+ <path d="M123.782 31.2241L123.144 29.1424C123.116 29.0867 123.079 29.0572 123.038 29.0572H117.81C117.768 29.0572 117.732 29.085 117.704 29.1424L117.088 31.2241C117.046 31.3668 116.954 31.4363 116.812 31.4363H114.112C114.027 31.4363 113.963 31.412 113.921 31.3615C113.879 31.3128 113.871 31.2381 113.9 31.1389L118.49 16.7737C118.532 16.6328 118.624 16.5615 118.766 16.5615H122.102C122.243 16.5615 122.335 16.6328 122.379 16.7737L126.968 31.1389C126.982 31.1668 126.989 31.2033 126.989 31.245C126.989 31.372 126.911 31.4363 126.756 31.4363H124.057C123.916 31.4363 123.824 31.365 123.78 31.2241H123.782ZM118.554 26.7407H122.295C122.38 26.7407 122.408 26.6989 122.38 26.6137L120.467 20.3024C120.453 20.2467 120.432 20.2207 120.403 20.2276C120.375 20.2346 120.352 20.2589 120.339 20.3024L118.469 26.6137C118.455 26.6989 118.483 26.7407 118.554 26.7407Z" fill="currentColor"/>
16
+ <path d="M128.222 31.353C128.18 31.2974 128.187 31.2261 128.243 31.1409L132.365 24.0643C132.393 24.0226 132.393 23.9791 132.365 23.9374L128.243 16.8609L128.201 16.7339C128.201 16.6209 128.28 16.5635 128.434 16.5635H131.133C131.274 16.5635 131.38 16.6209 131.452 16.7339L134.213 21.6C134.255 21.6852 134.299 21.6852 134.34 21.6L137.102 16.7339C137.173 16.6209 137.28 16.5635 137.42 16.5635H140.099C140.198 16.5635 140.269 16.5913 140.311 16.6487C140.353 16.7061 140.346 16.7756 140.29 16.8609L136.168 23.9374C136.154 23.9791 136.154 24.0226 136.168 24.0643L140.29 31.1409L140.332 31.2678C140.332 31.3809 140.253 31.4383 140.099 31.4383H137.42C137.278 31.4383 137.172 31.3826 137.102 31.2678L134.34 26.4226C134.299 26.3374 134.255 26.3374 134.213 26.4226L131.429 31.2678C131.358 31.3809 131.252 31.4383 131.111 31.4383H128.433C128.333 31.4383 128.262 31.4104 128.22 31.353H128.222Z" fill="currentColor"/>
17
+ <defs>
18
+ <linearGradient id="paint0_linear_17_483" x1="3.99826" y1="24" x2="51.6208" y2="24" gradientUnits="userSpaceOnUse">
19
+ <stop stop-color="#E21680"/>
20
+ <stop offset="1" stop-color="#FF633A"/>
21
+ </linearGradient>
22
+ </defs>
23
+ </svg>
24
+
25
+ </div>
26
+ <hr>
27
+
28
+ <div align="center" style="line-height: 1;">
29
+ <a href="https://www.minimax.io" target="_blank" style="margin: 2px;">
30
+ <img alt="Homepage" src="https://img.shields.io/badge/_Homepage-MiniMax-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
31
+ </a>
32
+ <a href="https://arxiv.org/abs/2506.13585" target="_blank" style="margin: 2px;">
33
+ <img alt="Paper" src="https://img.shields.io/badge/📖_Paper-MiniMax--M1-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
34
+ </a>
35
+ <a href="https://chat.minimax.io/" target="_blank" style="margin: 2px;">
36
+ <img alt="Chat" src="https://img.shields.io/badge/_MiniMax_Chat-FF4040?style=flat-square&labelColor=2C3E50&logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHhtbG5zOnhsaW5rPSJodHRwOi8vd3d3LnczLm9yZy8xOTk5L3hsaW5rIiB2aWV3Qm94PSIwIDAgNDkwLjE2IDQxMS43Ij48ZGVmcz48c3R5bGU+LmNscy0xe2ZpbGw6I2ZmZjt9PC9zdHlsZT48L2RlZnM+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMjMzLjQ1LDQwLjgxYTE3LjU1LDE3LjU1LDAsMSwwLTM1LjEsMFYzMzEuNTZhNDAuODIsNDAuODIsMCwwLDEtODEuNjMsMFYxNDVhMTcuNTUsMTcuNTUsMCwxLDAtMzUuMDksMHY3OS4wNmE0MC44Miw0MC44MiwwLDAsMS04MS42MywwVjE5NS40MmExMS42MywxMS42MywwLDAsMSwyMy4yNiwwdjI4LjY2YTE3LjU1LDE3LjU1LDAsMCwwLDM1LjEsMFYxNDVBNDAuODIsNDAuODIsMCwwLDEsMTQwLDE0NVYzMzEuNTZhMTcuNTUsMTcuNTUsMCwwLDAsMzUuMSwwVjIxNy41aDBWNDAuODFhNDAuODEsNDAuODEsMCwxLDEsODEuNjIsMFYyODEuNTZhMTEuNjMsMTEuNjMsMCwxLDEtMjMuMjYsMFptMjE1LjksNjMuNEE0MC44Niw0MC44NiwwLDAsMCw0MDguNTMsMTQ1VjMwMC44NWExNy41NSwxNy41NSwwLDAsMS0zNS4wOSwwdi0yNjBhNDAuODIsNDAuODIsMCwwLDAtODEuNjMsMFYzNzAuODlhMTcuNTUsMTcuNTUsMCwwLDEtMzUuMSwwVjMzMGExMS42MywxMS42MywwLDEsMC0yMy4yNiwwdjQwLjg2YTQwLjgxLDQwLjgxLDAsMCwwLDgxLjYyLDBWNDAuODFhMTcuNTUsMTcuNTUsMCwwLDEsMzUuMSwwdjI2MGE0MC44Miw0MC44MiwwLDAsMCw4MS42MywwVjE0NWExNy41NSwxNy41NSwwLDEsMSwzNS4xLDBWMjgxLjU2YTExLjYzLDExLjYzLDAsMCwwLDIzLjI2LDBWMTQ1QTQwLjg1LDQwLjg1LDAsMCwwLDQ0OS4zNSwxMDQuMjFaIi8+PC9zdmc+&logoWidth=20" style="display: inline-block; vertical-align: middle;"/>
37
+ </a>
38
+ <a href="https://www.minimax.io/platform" style="margin: 2px;">
39
+ <img alt="API" src="https://img.shields.io/badge/⚡_API-Platform-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
40
+ </a>
41
+ <a href="https://github.com/MiniMax-AI/MiniMax-MCP" style="margin: 2px;">
42
+ <img alt="MCP" src="https://img.shields.io/badge/🚀_MCP-MiniMax_MCP-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
43
+ </a>
44
+ </div>
45
+ <div align="center" style="line-height: 1;">
46
+ <a href="https://huggingface.co/MiniMaxAI" target="_blank" style="margin: 2px;">
47
+ <img alt="Hugging Face" src="https://img.shields.io/badge/🤗_Hugging_Face-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
48
+ </a>
49
+ <a href="https://github.com/MiniMax-AI/MiniMax-M1" target="_blank" style="margin: 2px;">
50
+ <img alt="GitHub" src="https://img.shields.io/badge/🐙_GitHub-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
51
+ </a>
52
+ <a href="https://www.modelscope.cn/organization/MiniMax" target="_blank" style="margin: 2px;">
53
+ <img alt="ModelScope" src="https://img.shields.io/badge/🤖️_ModelScope-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
54
+ </a>
55
+ <a href="https://github.com/MiniMax-AI/MiniMax-M1/blob/main/LICENSE" style="margin: 2px;">
56
+ <img alt="License" src="https://img.shields.io/badge/⚖️_License-Apache_2.0-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
57
+ </a>
58
+ <a href="https://github.com/MiniMax-AI/MiniMax-01/blob/main/figures/wechat-qrcode.jpeg" target="_blank" style="margin: 2px;">
59
+ <img alt="WeChat" src="https://img.shields.io/badge/💬_WeChat-MiniMax-FF4040?style=flat-square&labelColor=2C3E50" style="display: inline-block; vertical-align: middle;"/>
60
+ </a>
61
+ </div>
62
+
63
+ # MiniMax-M1
64
+
65
+ ## 1. Model Overview
66
+
67
+ We introduce MiniMax-M1, the world's first open-weight, large-scale hybrid-attention reasoning model.
68
+ MiniMax-M1 is powered by a hybrid Mixture-of-Experts (MoE) architecture combined with a lightning
69
+ attention mechanism. The model is developed based on our previous [MiniMax-Text-01 model](https://huggingface.co/MiniMaxAI/MiniMax-Text-01),
70
+ which contains a total of 456 billion parameters with 45.9 billion parameters activated
71
+ per token. Consistent with MiniMax-Text-01, the M1 model natively supports a context length of 1
72
+ million tokens, 8x the context size of DeepSeek R1. Furthermore, the lightning attention mechanism
73
+ in MiniMax-M1 enables efficient scaling of test-time compute. For example, compared to DeepSeek
74
+ R1, M1 consumes 25% of the FLOPs at a generation length of 100K tokens. These properties make M1
75
+ particularly suitable for complex tasks that require processing long inputs and thinking extensively.
76
+ MiniMax-M1 is trained using large-scale reinforcement learning (RL) on diverse problems ranging from
77
+ traditional mathematical reasoning to sandbox-based, real-world software engineering environments.
78
+ We develop an efficient RL scaling framework for M1, highlighting two perspectives: (1) We propose
79
+ CISPO, a novel algorithm that clips importance sampling weights instead of token updates, which
80
+ outperforms other competitive RL variants; (2) Our hybrid-attention design naturally enhances the
81
+ efficiency of RL, where we address unique challenges when scaling RL with the hybrid architecture. We
82
+ train two versions of MiniMax-M1 models with [40K](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k) and
83
+ [80K](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k) thinking budgets respectively. Experiments
84
+ on standard benchmarks show that our models outperform other strong open-weight models such as
85
+ the original DeepSeek-R1 and Qwen3-235B, particularly on complex software engineering, tool use,
86
+ and long context tasks. With efficient scaling of test-time compute, MiniMax-M1 serves as a strong
87
+ foundation for next-generation language model agents to reason and tackle real-world challenges.
88
+
89
+ <p align="center">
90
+ <img width="100%" src="figures/TextBench.png">
91
+ <br>
92
+ <small><em>Benchmark performance comparison of leading commercial and open-weight models across competition-level mathematics, coding, software engineering, agentic tool use, and long-context understanding tasks. We use the MiniMax-M1-80k model here for MiniMax-M1.</em></small>
93
+ </p>
94
+
95
+
96
+ ## 2. Evaluation
97
+
98
+ **Performance of MiniMax-M1 on core benchmarks.**
99
+
100
+
101
+ | **Category** | **Task** | **MiniMax-M1-80K** | **MiniMax-M1-40K** | **Qwen3-235B-A22B** | **DeepSeek-R1-0528** | **DeepSeek-R1** | **Seed-Thinking-v1.5** | **Claude 4 Opus** | **Gemini 2.5 Pro (06-05)** | **OpenAI-o3** |
102
+ |:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
103
+ | | *Extended Thinking* | *80K* | *40K* | *32k* | *64k* | *32k* | *32k* | *64k* | *64k* | *100k* |
104
+ | ***Mathematics*** | AIME 2024 | 86.0 | 83.3 | 85.7 | 91.4 | 79.8 | 86.7 | 76.0 | 92.0 | 91.6 |
105
+ | | AIME 2025 | 76.9 | 74.6 | 81.5 | 87.5 | 70.0 | 74.0 | 75.5 | 88.0 | 88.9 |
106
+ | | MATH-500 | 96.8 | 96.0 | 96.2 | 98.0 | 97.3 | 96.7 | 98.2 | 98.8 | 98.1 |
107
+ | ***General Coding*** | LiveCodeBench *(24/8~25/5)* | 65.0 | 62.3 | 65.9 | 73.1 | 55.9 | 67.5 | 56.6 | 77.1 | 75.8 |
108
+ | | FullStackBench | 68.3 | 67.6 | 62.9 | 69.4 | 70.1 | 69.9 | 70.3 | -- | 69.3 |
109
+ | ***Reasoning & Knowledge***| GPQA Diamond | 70.0 | 69.2 | 71.1 | 81.0 | 71.5 | 77.3 | 79.6 | 86.4 | 83.3 |
110
+ | | HLE *(no tools)* | 8.4\* | 7.2\* | 7.6\* | 17.7\* | 8.6\* | 8.2 | 10.7 | 21.6 | 20.3 |
111
+ | | ZebraLogic | 86.8 | 80.1 | 80.3 | 95.1 | 78.7 | 84.4 | 95.1 | 91.6 | 95.8 |
112
+ | | MMLU-Pro | 81.1 | 80.6 | 83.0 | 85.0 | 84.0 | 87.0 | 85.0 | 86.0 | 85.0 |
113
+ | ***Software Engineering***| SWE-bench Verified| 56.0 | 55.6 | 34.4 | 57.6 | 49.2 | 47.0 | 72.5 | 67.2 | 69.1 |
114
+ | ***Long Context*** | OpenAI-MRCR *(128k)* | 73.4 | 76.1 | 27.7 | 51.5 | 35.8 | 54.3 | 48.9 | 76.8 | 56.5 |
115
+ | | OpenAI-MRCR *(1M)* | 56.2 | 58.6 | -- | -- | -- | -- | -- | 58.8 | -- |
116
+ | | LongBench-v2 | 61.5 | 61.0 | 50.1 | 52.1 | 58.3 | 52.5 | 55.6 | 65.0 | 58.8 |
117
+ | ***Agentic Tool Use***| TAU-bench *(airline)* | 62.0 | 60.0 | 34.7 | 53.5 | -- | 44.0 | 59.6 | 50.0 | 52.0 |
118
+ | | TAU-bench *(retail)* | 63.5 | 67.8 | 58.6 | 63.9 | -- | 55.7 | 81.4 | 67.0 | 73.9 |
119
+ | ***Factuality*** | SimpleQA | 18.5 | 17.9 | 11.0 | 27.8 | 30.1 | 12.9 | -- | 54.0 | 49.4 |
120
+ | ***General Assistant***| MultiChallenge | 44.7 | 44.7 | 40.0 | 45.0 | 40.7 | 43.0 | 45.8 | 51.8 | 56.5 |
121
+
122
+ \* conducted on the text-only HLE subset.
123
+
124
+ Our models are evaluated with `temperature=1.0`, `top_p=0.95`.
125
+
126
+ ### SWE-bench methodology
127
+ We report results derived from the Agentless scaffold. Departing from the original pipeline, our methodology employs a two-stage localization process (without any embedding-based retrieval mechanisms): initial coarse-grained file localization followed by fine-grained localization to specific files and code elements. The values for our models are calculated on the subset of n=486 verified tasks that run on our infrastructure. The 14 test cases excluded as incompatible with our internal infrastructure are:
128
+ `"astropy__astropy-7606"`,
129
+ `"astropy__astropy-8707"`,
130
+ `"astropy__astropy-8872"`,
131
+ `"django__django-10097"`,
132
+ `"matplotlib__matplotlib-20488"`,
133
+ `"psf__requests-2317"`,
134
+ `"psf__requests-2931"`,
135
+ `"psf__requests-5414"`,
136
+ `"pylint-dev__pylint-6528"`,
137
+ `"pylint-dev__pylint-7277"`,
138
+ `"sphinx-doc__sphinx-10435"`,
139
+ `"sphinx-doc__sphinx-7985"`,
140
+ `"sphinx-doc__sphinx-8269"`,
141
+ `"sphinx-doc__sphinx-8475"`
142
+
143
+ ### TAU-bench methodology
144
+ We evaluate TAU-bench with GPT-4.1 as the user model and without any custom tools. The maximum number of interaction steps is 40.
145
+ Our general system prompt is:
146
+ ```
147
+ - In each round, you need to carefully examine the tools provided to you to determine if any can be used.
148
+ - You must adhere to all of the policies. Pay attention to the details in the terms. Solutions for most situations can be found within these policies.
149
+ ```
150
+
151
+ ## 3. Recommendations for MiniMax-M1 Model Usage
152
+
153
+ To achieve the best results with the MiniMax-M1 model, we suggest focusing on two key points: Inference Parameters and the System Prompt.
154
+
155
+ ### 3.1. Inference Parameters
156
+ - Temperature: **`1.0`**
157
+ - Top_p: **`0.95`**
158
+
159
+ This setting is optimal for encouraging creativity and diversity in the model's responses. It allows the model to explore a wider range of linguistic possibilities, preventing outputs that are too rigid or repetitive, while still maintaining strong logical coherence.
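As a concrete sketch (not official API code), the recommended settings can be bundled into a chat-completion payload for any OpenAI-compatible endpoint; the helper function and model name below are illustrative placeholders:

```python
# A minimal sketch: the recommended sampling parameters bundled into a
# chat-completion payload. The model name and helper are placeholders.
SAMPLING_PARAMS = {
    "temperature": 1.0,  # encourages diverse, creative responses
    "top_p": 0.95,       # nucleus sampling keeps outputs coherent
}

def build_request(messages, model="MiniMax-M1-80k"):
    """Assemble a chat-completion payload using the recommended settings."""
    return {"model": model, "messages": messages, **SAMPLING_PARAMS}

payload = build_request([{"role": "user", "content": "Hello"}])
```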
160
+
161
+ ### 3.2. System Prompt
162
+ Tailoring your system prompt to the specific task is crucial for guiding the model effectively. Below are suggested settings for different scenarios.
163
+
164
+ #### A. General-Purpose Scenarios
165
+ For common tasks like summarization, translation, Q&A, or creative writing:
166
+ ```
167
+ You are a helpful assistant.
168
+ ```
169
+ #### B. Web Development Scenarios
170
+ For complex tasks like generating code for web pages:
171
+ ```
172
+ You are a web development engineer, writing web pages according to the instructions below. You are a powerful code editing assistant capable of writing code and creating artifacts in conversations with users, or modifying and updating existing artifacts as requested by users.
173
+ All code is written in a single code block to form a complete code file for display, without separating HTML and JavaScript code. An artifact refers to a complete, runnable code snippet; you prefer to integrate and output such complete runnable code rather than breaking it down into several code blocks. Certain types of code can render graphical interfaces in a UI window. After generation, please check the code execution again to ensure there are no errors in the output.
174
+ Output only the HTML, without any additional descriptive text. Make the UI look modern and beautiful.
175
+ ```
176
+ #### C. Mathematical Scenarios
177
+ When dealing with problems that require calculation or logical deduction:
178
+ ```
179
+ Please reason step by step, and put your final answer within \boxed{}.
180
+ ```
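The scenario-to-prompt mapping above can be sketched in code; the helper name is ours, while the prompt strings mirror the recommendations:

```python
# Illustrative sketch: pairing each scenario with its suggested system prompt.
SYSTEM_PROMPTS = {
    "general": "You are a helpful assistant.",
    "math": "Please reason step by step, and put your final answer within \\boxed{}.",
}

def make_messages(scenario, user_query):
    """Prepend the scenario-specific system prompt to the user turn."""
    return [
        {"role": "system", "content": SYSTEM_PROMPTS[scenario]},
        {"role": "user", "content": user_query},
    ]

msgs = make_messages("math", "What is 17 * 24?")
```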
181
+
182
+ ## 4. Deployment Guide
183
+
184
+ Download the model from HuggingFace repository:
185
+ - [MiniMax-M1-40k](https://huggingface.co/MiniMaxAI/MiniMax-M1-40k)
186
+ - [MiniMax-M1-80k](https://huggingface.co/MiniMaxAI/MiniMax-M1-80k)
187
+
188
+ For production deployment, we recommend using [vLLM](https://docs.vllm.ai/en/latest/) to serve MiniMax-M1. vLLM provides excellent performance for serving large language models with the following features:
189
+ - 🔥 Outstanding serving throughput performance
190
+ - ⚡ Efficient and intelligent memory management
191
+ - 📦 Powerful batch request processing capability
192
+ - ⚙️ Deeply optimized underlying performance
193
+
194
+ For detailed vLLM deployment instructions, please refer to our [vLLM Deployment Guide](./docs/vllm_deployment_guide.md).
195
+ Alternatively, you can deploy with Transformers directly. For detailed Transformers deployment instructions, see our [MiniMax-M1 Transformers Deployment Guide](./docs/transformers_deployment_guide.md).
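For orientation, a vLLM launch might look like the sketch below; this is illustrative only (flag values such as the tensor-parallel size depend on your hardware), and the vLLM Deployment Guide remains the authoritative reference:

```shell
# Illustrative only — consult the vLLM Deployment Guide for authoritative flags.
# Assumes the weights are downloaded locally and 8 GPUs are available.
python3 -m vllm.entrypoints.openai.api_server \
    --model MiniMax-M1-80k \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --dtype bfloat16
```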
196
+
197
+
198
+ ## 5. Function Calling
199
+
200
+ The MiniMax-M1 model supports function calling capabilities, enabling the model to identify when external functions need to be called and output function call parameters in a structured format. [MiniMax-M1 Function Call Guide](./docs/function_call_guide.md) provides detailed instructions on how to use the function calling feature of MiniMax-M1.
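For a quick sense of the structured format, a hypothetical OpenAI-style tool schema might look like the following; `get_weather` is an invented example, and the guide linked above documents the authoritative format:

```python
import json

# Hypothetical tool schema; `get_weather` is an illustrative example only.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name, e.g. 'San Francisco'"},
            },
            "required": ["location"],
        },
    },
}]

# Tool schemas must be JSON-serializable so they can travel in the request body.
serialized = json.dumps(tools)
```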
201
+
202
+
203
+ ## 6. Chatbot & API
204
+ For general use and evaluation, we provide a [Chatbot](https://chat.minimax.io/) with online search capabilities and an [online API](https://www.minimax.io/platform/) for developers. We also provide the [MiniMax MCP Server](https://github.com/MiniMax-AI/MiniMax-MCP), which offers video generation, image generation, speech synthesis, and voice cloning for developers.
205
+
206
+
207
+ ## 7. Citation
208
+ ```
209
+ @misc{minimax2025minimaxm1scalingtesttimecompute,
210
+ title={MiniMax-M1: Scaling Test-Time Compute Efficiently with Lightning Attention},
211
+ author={MiniMax},
212
+ year={2025},
213
+ eprint={2506.13585},
214
+ archivePrefix={arXiv},
215
+ primaryClass={cs.CL},
216
+ url={https://arxiv.org/abs/2506.13585},
217
+ }
218
+ ```
219
+
220
+ ## 8. Contact Us
 
 
 
221
  Contact us at [[email protected]](mailto:[email protected]).
docs/function_call_guide.md CHANGED
@@ -8,122 +8,9 @@ The MiniMax-M1 model supports function calling capabilities, enabling the model
8
 
9
  ## 🚀 Quick Start
10
 
11
- ### Using vLLM for Function Calls (Recommended)
12
 
13
- In actual deployment, to support native Function Calling (tool calling) capabilities similar to OpenAI API, the MiniMax-M1 model integrates a dedicated `tool_call_parser=minimax` parser, avoiding additional regex parsing of model output.
14
-
15
- #### Environment Setup and vLLM Recompilation
16
-
17
- Since this feature has not been officially released in the PyPI version, compilation from source code is required. The following is an example process based on the official vLLM Docker image `vllm/vllm-openai:v0.8.3`:
18
-
19
- ```bash
20
- IMAGE=vllm/vllm-openai:v0.8.3
21
- DOCKER_RUN_CMD="--network=host --privileged --ipc=host --ulimit memlock=-1 --shm-size=32gb --rm --gpus all --ulimit stack=67108864"
22
-
23
- # Run docker
24
- sudo docker run -it -v $MODEL_DIR:$MODEL_DIR \
25
- -v $CODE_DIR:$CODE_DIR \
26
- --name vllm_function_call \
27
- $DOCKER_RUN_CMD \
28
- --entrypoint /bin/bash \
29
- $IMAGE
30
- ```
31
-
32
- #### Compiling vLLM Source Code
33
-
34
- After entering the container, execute the following commands to get the source code and reinstall:
35
-
36
- ```bash
37
- cd $CODE_DIR
38
- git clone https://github.com/vllm-project/vllm.git
39
- cd vllm
40
- pip install -e .
41
- ```
42
-
43
- #### Starting vLLM API Service
44
-
45
- ```bash
46
- export SAFETENSORS_FAST_GPU=1
47
- export VLLM_USE_V1=0
48
-
49
- python3 -m vllm.entrypoints.openai.api_server \
50
- --model MiniMax-M1-80k \
51
- --tensor-parallel-size 8 \
52
- --trust-remote-code \
53
- --quantization experts_int8 \
54
- --enable-auto-tool-choice \
55
- --tool-call-parser minimax \
56
- --chat-template vllm/examples/tool_chat_template_minimax_m1.jinja \
57
- --max_model_len 4096 \
58
- --dtype bfloat16 \
59
- --gpu-memory-utilization 0.85
60
- ```
61
-
62
- **⚠️ Note:**
63
- - `--tool-call-parser minimax` is a key parameter for enabling the MiniMax-M1 custom parser
64
- - `--enable-auto-tool-choice` enables automatic tool selection
65
- - `--chat-template` template file needs to be adapted for tool calling format
66
-
67
- #### Function Call Test Script Example
68
-
69
- The following Python script implements a weather query function call example based on OpenAI SDK:
70
-
71
- ```python
72
- from openai import OpenAI
73
- import json
74
-
75
- client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
76
-
77
- def get_weather(location: str, unit: str):
78
- return f"Getting the weather for {location} in {unit}..."
79
-
80
- tool_functions = {"get_weather": get_weather}
81
-
82
- tools = [{
83
- "type": "function",
84
- "function": {
85
- "name": "get_weather",
86
- "description": "Get the current weather in a given location",
87
- "parameters": {
88
- "type": "object",
89
- "properties": {
90
- "location": {"type": "string", "description": "City and state, e.g., 'San Francisco, CA'"},
91
- "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
92
- },
93
- "required": ["location", "unit"]
94
- }
95
- }
96
- }]
97
-
98
- response = client.chat.completions.create(
99
- model=client.models.list().data[0].id,
100
- messages=[{"role": "user", "content": "What's the weather like in San Francisco? use celsius."}],
101
- tools=tools,
102
- tool_choice="auto"
103
- )
104
-
105
- print(response)
106
-
107
- tool_call = response.choices[0].message.tool_calls[0].function
108
- print(f"Function called: {tool_call.name}")
109
- print(f"Arguments: {tool_call.arguments}")
110
- print(f"Result: {get_weather(**json.loads(tool_call.arguments))}")
111
- ```
112
-
113
- **Output Example:**
114
- ```
115
- Function called: get_weather
116
- Arguments: {"location": "San Francisco, CA", "unit": "celsius"}
117
- Result: Getting the weather for San Francisco, CA in celsius...
118
- ```
119
-
120
- ### Manual Parsing of Model Output
121
-
122
- If you cannot use vLLM's built-in parser, or need to use other inference frameworks (such as transformers, TGI, etc.), you can use the following method to manually parse the model's raw output. This method requires you to parse the XML tag format of the model output yourself.
123
-
124
- #### Using Transformers Example
125
-
126
- The following is a complete example using the transformers library:
127
 
128
  ```python
129
  from transformers import AutoTokenizer
@@ -131,19 +18,21 @@ from transformers import AutoTokenizer
131
  def get_default_tools():
132
  return [
133
  {
134
- "name": "get_current_weather",
135
- "description": "Get the latest weather for a location",
136
- "parameters": {
137
- "type": "object",
138
- "properties": {
139
- "location": {
140
- "type": "string",
141
- "description": "A certain city, such as Beijing, Shanghai"
142
- }
143
- },
 
 
 
 
144
  }
145
- "required": ["location"],
146
- "type": "object"
147
  }
148
  ]
149
 
@@ -165,27 +54,6 @@ text = tokenizer.apply_chat_template(
165
  add_generation_prompt=True,
166
  tools=tools
167
  )
168
-
169
- # Send request (using any inference service here)
170
- import requests
171
- payload = {
172
- "model": "MiniMaxAI/MiniMax-M1-40k",
173
- "prompt": text,
174
- "max_tokens": 4000
175
- }
176
- response = requests.post(
177
- "http://localhost:8000/v1/completions",
178
- headers={"Content-Type": "application/json"},
179
- json=payload,
180
- stream=False,
181
- )
182
-
183
- # Model output needs manual parsing
184
- raw_output = response.json()["choices"][0]["text"]
185
- print("Raw output:", raw_output)
186
-
187
- # Use the parsing function below to process the output
188
- function_calls = parse_function_calls(raw_output)
189
  ```
190
 
191
  ## 🛠️ Function Call Definition
@@ -234,21 +102,22 @@ Function calls need to be defined in the `tools` field of the request body. Each
234
  When processed internally by the model, function definitions are converted to a special format and concatenated to the input text:
235
 
236
  ```
237
- <begin_of_document><beginning_of_sentence>system ai_setting=MiniMax AI
238
- MiniMax AI是由上海稀宇科技有限公司(MiniMax)自主研发的AI助理。<end_of_sentence>
239
- <beginning_of_sentence>system tool_setting=tools
240
  You are provided with these tools:
241
  <tools>
242
- {"name": "search_web", "description": "搜索函数。", "parameters": {"properties": {"query_list": {"description": "进行搜索的关键词,列表元素个数为1", "items": {"type": "string"}, "type": "array"}, "query_tag": {"description": "query的分类", "items": {"type": "string"}, "type": "array"}}, "required": ["query_list", "query_tag"], "type": "object"}}
243
  </tools>
 
244
  If you need to call tools, please respond with <tool_calls></tool_calls> XML tags, and provide tool-name and json-object of arguments, following the format below:
245
  <tool_calls>
246
  {"name": <tool-name>, "arguments": <args-json-object>}
247
  ...
248
- </tool_calls><end_of_sentence>
249
- <beginning_of_sentence>user name=用户
250
- OpenAI Gemini 的最近一次发布会都是什么时候?<end_of_sentence>
251
- <beginning_of_sentence>ai name=MiniMax AI
252
  ```
253
 
254
  ### Model Output Format
@@ -265,15 +134,16 @@ Okay, I will search for the OpenAI and Gemini latest release.
265
  </tool_calls>
266
  ```
267
 
268
- ## 📥 Manual Parsing of Function Call Results
269
 
270
  ### Parsing Function Calls
271
 
272
- When manual parsing is required, you need to parse the XML tag format of the model output:
273
 
274
  ```python
275
  import re
276
  import json
 
277
  def parse_function_calls(content: str):
278
  """
279
  Parse function calls from model output
@@ -323,33 +193,23 @@ def execute_function_call(function_name: str, arguments: dict):
323
  # Build function execution result
324
  return {
325
  "role": "tool",
326
- "content": [
327
- {
328
- "name": function_name,
329
- "type": "text",
330
- "text": json.dumps({
331
- "location": location,
332
- "temperature": "25",
333
- "unit": "celsius",
334
- "weather": "Sunny"
335
- }, ensure_ascii=False)
336
- }
337
- ]
338
- }
339
  elif function_name == "search_web":
340
  query_list = arguments.get("query_list", [])
341
  query_tag = arguments.get("query_tag", [])
342
  # Simulate search results
343
  return {
344
  "role": "tool",
345
- "content": [
346
- {
347
- "name": function_name,
348
- "type": "text",
349
- "text": f"Search keywords: {query_list}, Categories: {query_tag}\nSearch results: Relevant information found"
350
- }
351
- ]
352
- }
353
 
354
  return None
355
  ```
@@ -360,65 +220,51 @@ After successfully parsing function calls, you should add the function execution
360
 
361
  #### Single Result
362
 
363
- If the model calls the `search_web` function, you can refer to the following format to add execution results, with the `name` field being the specific function name.
364
 
365
  ```json
366
  {
367
- "role": "tool",
368
- "content": [
369
- {
370
- "name": "search_web",
371
- "type": "text",
372
- "text": "test_result"
373
- }
374
  ]
375
  }
376
  ```
377
 
378
  Corresponding model input format:
379
  ```
380
- <beginning_of_sentence>tool name=tools
381
- tool name: search_web
382
- tool result: test_result
383
- <end_of_sentence>
384
  ```
385
 
386
- #### Multiple Results
387
 
388
- If the model calls both `search_web` and `get_current_weather` functions simultaneously, you can refer to the following format to add execution results, with `content` containing multiple results.
 
 
389
 
390
  ```json
391
  {
392
- "role": "tool",
393
- "content": [
394
- {
395
- "name": "search_web",
396
- "type": "text",
397
- "text": "test_result1"
398
- },
399
- {
400
- "name": "get_current_weather",
401
- "type": "text",
402
- "text": "test_result2"
403
- }
404
  ]
405
  }
406
  ```
407
 
408
  Corresponding model input format:
409
  ```
410
- <beginning_of_sentence>tool name=tools
411
- tool name: search_web
412
- tool result: test_result1
413
- tool name: get_current_weather
414
- tool result: test_result2<end_of_sentence>
415
- ```
416
-
417
- While we recommend following the above formats, as long as the input returned to the model is easy to understand, the specific content of `name` and `text` is entirely up to you.
418
 
419
- ## 📚 References
 
 
420
 
421
- - [MiniMax-M1 Model Repository](https://github.com/MiniMaxAI/MiniMax-M1)
422
- - [vLLM Project Homepage](https://github.com/vllm-project/vllm)
423
- - [vLLM Function Calling PR](https://github.com/vllm-project/vllm/pull/20297)
424
- - [OpenAI Python SDK](https://github.com/openai/openai-python)
 
8
 
9
  ## 🚀 Quick Start
10
 
11
+ ### Using Chat Template
12
 
13
+ MiniMax-M1 uses a specific chat template format to handle function calls. The chat template is defined in `tokenizer_config.json`, and you can apply it in your code via the tokenizer's chat template.
 
14
 
15
  ```python
16
  from transformers import AutoTokenizer
 
18
  def get_default_tools():
19
  return [
20
  {
21
+ "name": "get_current_weather",
+ "description": "Get the latest weather for a location",
+ "parameters": {
+ "type": "object",
+ "properties": {
+ "location": {
+ "type": "string",
+ "description": "A certain city, such as Beijing, Shanghai"
+ }
+ },
+ "required": ["location"]
35
  }
 
 
36
  }
37
  ]
38
 
 
54
  add_generation_prompt=True,
55
  tools=tools
56
  )
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
57
  ```
58
 
59
  ## 🛠️ Function Call Definition
 
102
  When processed internally by the model, function definitions are converted to a special format and concatenated to the input text:
103
 
104
  ```
105
+ ]~!b[]~b]system ai_setting=MiniMax AI
106
+ MiniMax AI is an AI assistant independently developed by MiniMax. [e~[
107
+ ]~b]system tool_setting=tools
108
  You are provided with these tools:
109
  <tools>
110
+ {"name": "search_web", "description": "Search function.", "parameters": {"properties": {"query_list": {"description": "Keywords for search, with list element count of 1.", "items": {"type": "string"}, "type": "array"}, "query_tag": {"description": "Classification of the query", "items": {"type": "string"}, "type": "array"}}, "required": ["query_list", "query_tag"], "type": "object"}}
111
  </tools>
112
+
113
  If you need to call tools, please respond with <tool_calls></tool_calls> XML tags, and provide tool-name and json-object of arguments, following the format below:
114
  <tool_calls>
115
  {"name": <tool-name>, "arguments": <args-json-object>}
116
  ...
117
+ </tool_calls>[e~[
118
+ ]~b]user name=User
119
+ When were the most recent launch events for OpenAI and Gemini?[e~[
120
+ ]~b]ai name=MiniMax AI
121
  ```
122
 
123
  ### Model Output Format
 
134
  </tool_calls>
135
  ```
136
 
137
+ ## 📥 Function Call Result Processing
138
 
139
  ### Parsing Function Calls
140
 
141
+ You can use the following code to parse function calls from the model output:
142
 
143
  ```python
144
  import re
145
  import json
146
+
147
  def parse_function_calls(content: str):
148
  """
149
  Parse function calls from model output
 
193
  # Build function execution result
194
  return {
195
  "role": "tool",
196
+ "name": function_name,
197
+ "content": json.dumps({
198
+ "location": location,
199
+ "temperature": "25",
200
+ "unit": "celsius",
201
+ "weather": "Sunny"
202
+ }, ensure_ascii=False)
203
+ }
 
 
 
 
 
204
  elif function_name == "search_web":
205
  query_list = arguments.get("query_list", [])
206
  query_tag = arguments.get("query_tag", [])
207
  # Simulate search results
208
  return {
209
  "role": "tool",
210
+ "name": function_name,
211
+ "content": f"Search keywords: {query_list}, Categories: {query_tag}\nSearch results: Relevant information found"
212
+ }
 
 
 
 
 
213
 
214
  return None
215
  ```
 
220
 
221
  #### Single Result
222
 
223
+ If the model decides to call `search_web`, we suggest returning the function result in the following format, with the `name` field set to the specific tool name.
224
 
225
  ```json
226
  {
227
+ "data": [
228
+ {
229
+ "role": "tool",
230
+ "name": "search_web",
231
+ "content": "search_result"
232
+ }
 
233
  ]
234
  }
235
  ```
236
 
237
  Corresponding model input format:
238
  ```
239
+ ]~b]tool name=search_web
240
+ search_result[e~[
 
 
241
  ```
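For illustration, a tool-result message can be rendered into that raw input segment by hand. The helper below is hypothetical, not part of the guide's API; the `]~b]` / `[e~[` sentence tokens are taken from the examples in this guide, and in practice the chat template in `tokenizer_config.json` performs this rendering for you:

```python
def render_tool_message(message: dict) -> str:
    # Render a single tool-result message into the raw model-input segment
    # shown above: ]~b]tool name=<name>\n<content>[e~[
    return f"]~b]tool name={message['name']}\n{message['content']}[e~["

msg = {"role": "tool", "name": "search_web", "content": "search_result"}
print(render_tool_message(msg))
```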
 
+ #### Multiple Results
 
+ If the model decides to call `search_web` and `get_current_weather` at the same time, we suggest returning the multiple function results in the following format, with the `name` field set to `"tools"` and the `content` field containing all of the results.
 
  ```json
  {
+     "data": [
+         {
+             "role": "tool",
+             "name": "tools",
+             "content": "Tool name: search_web\nTool result: test_result1\n\nTool name: get_current_weather\nTool result: test_result2"
+         }
      ]
  }
  ```
 
  Corresponding model input format:
  ```
+ ]~b]tool name=tools
+ Tool name: search_web
+ Tool result: test_result1
 
+ Tool name: get_current_weather
+ Tool result: test_result2[e~[
+ ```
 
+ While we suggest following the above formats, as long as the model input is easy to understand, the specific values of `name` and `content` are entirely up to the caller.
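As a sketch, the combined `content` string suggested above can be assembled from individual results like this (the `combine_tool_results` helper is ours, for illustration, not a fixed API):

```python
def combine_tool_results(results: list) -> dict:
    # results is a list of (tool_name, tool_result) pairs; join them into
    # the single "tools" message format suggested above.
    content = "\n\n".join(
        f"Tool name: {name}\nTool result: {result}" for name, result in results
    )
    return {"role": "tool", "name": "tools", "content": content}

msg = combine_tool_results([
    ("search_web", "test_result1"),
    ("get_current_weather", "test_result2"),
])
print(msg["content"])
```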
 
 
 
docs/function_call_guide_cn.md CHANGED
@@ -6,122 +6,9 @@ MiniMax-M1 模型支持函数调用功能,使模型能够识别何时需要调
 
  ## 🚀 快速开始
 
- ### 使用 vLLM 进行 Function Calls(推荐)
 
- 在实际部署过程中,为了支持类似 OpenAI API 的原生 Function Calling(工具调用)能力,MiniMax-M1 模型集成了专属 `tool_call_parser=minimax` 解析器,从而避免对模型输出结果进行额外的正则解析处理。
-
- #### 环境准备与重新编译 vLLM
-
- 由于该功能尚未正式发布在 PyPI 版本中,需基于源码进行编译。以下为基于 vLLM 官方 Docker 镜像 `vllm/vllm-openai:v0.8.3` 的示例流程:
-
- ```bash
- IMAGE=vllm/vllm-openai:v0.8.3
- DOCKER_RUN_CMD="--network=host --privileged --ipc=host --ulimit memlock=-1 --shm-size=32gb --rm --gpus all --ulimit stack=67108864"
-
- # 运行 docker
- sudo docker run -it -v $MODEL_DIR:$MODEL_DIR \
-     -v $CODE_DIR:$CODE_DIR \
-     --name vllm_function_call \
-     $DOCKER_RUN_CMD \
-     --entrypoint /bin/bash \
-     $IMAGE
- ```
-
- #### 编译 vLLM 源码
-
- 进入容器后,执行以下命令以获取源码并重新安装:
-
- ```bash
- cd $CODE_DIR
- git clone https://github.com/vllm-project/vllm.git
- cd vllm
- pip install -e .
- ```
-
- #### 启动 vLLM API 服务
-
- ```bash
- export SAFETENSORS_FAST_GPU=1
- export VLLM_USE_V1=0
-
- python3 -m vllm.entrypoints.openai.api_server \
-     --model MiniMax-M1-80k \
-     --tensor-parallel-size 8 \
-     --trust-remote-code \
-     --quantization experts_int8 \
-     --enable-auto-tool-choice \
-     --tool-call-parser minimax \
-     --chat-template vllm/examples/tool_chat_template_minimax_m1.jinja \
-     --max_model_len 4096 \
-     --dtype bfloat16 \
-     --gpu-memory-utilization 0.85
- ```
-
- **⚠️ 注意:**
- - `--tool-call-parser minimax` 为关键参数,用于启用 MiniMax-M1 自定义解析器
- - `--enable-auto-tool-choice` 启用自动工具选择
- - `--chat-template` 模板文件需要适配 tool calling 格式
-
- #### Function Call 测试脚本示例
-
- 以下 Python 脚本基于 OpenAI SDK 实现了一个天气查询函数的调用示例:
-
- ```python
- from openai import OpenAI
- import json
-
- client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
-
- def get_weather(location: str, unit: str):
-     return f"Getting the weather for {location} in {unit}..."
-
- tool_functions = {"get_weather": get_weather}
-
- tools = [{
-     "type": "function",
-     "function": {
-         "name": "get_weather",
-         "description": "Get the current weather in a given location",
-         "parameters": {
-             "type": "object",
-             "properties": {
-                 "location": {"type": "string", "description": "City and state, e.g., 'San Francisco, CA'"},
-                 "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
-             },
-             "required": ["location", "unit"]
-         }
-     }
- }]
-
- response = client.chat.completions.create(
-     model=client.models.list().data[0].id,
-     messages=[{"role": "user", "content": "What's the weather like in San Francisco? use celsius."}],
-     tools=tools,
-     tool_choice="auto"
- )
-
- print(response)
-
- tool_call = response.choices[0].message.tool_calls[0].function
- print(f"Function called: {tool_call.name}")
- print(f"Arguments: {tool_call.arguments}")
- print(f"Result: {get_weather(**json.loads(tool_call.arguments))}")
- ```
-
- **输出示例:**
- ```
- Function called: get_weather
- Arguments: {"location": "San Francisco, CA", "unit": "celsius"}
- Result: Getting the weather for San Francisco, CA in celsius...
- ```
-
- ### 手动解析模型输出
-
- 如果您无法使用 vLLM 的内置解析器,或者需要使用其他推理框架(如 transformers、TGI 等),可以使用以下方法手动解析模型的原始输出。这种方法需要您自己解析模型输出的 XML 标签格式。
-
- #### 使用 Transformers 的示例
-
- 以下是使用 transformers 库的完整示例:
 
  ```python
  from transformers import AutoTokenizer
@@ -129,19 +16,21 @@ from transformers import AutoTokenizer
  def get_default_tools():
      return [
          {
-             "name": "get_current_weather",
-             "description": "Get the latest weather for a location",
-             "parameters": {
-                 "type": "object",
-                 "properties": {
-                     "location": {
-                         "type": "string",
-                         "description": "A certain city, such as Beijing, Shanghai"
-                     }
-                 },
-                 "required": ["location"]
-             }
          }
      ]
 
@@ -163,27 +52,6 @@ text = tokenizer.apply_chat_template(
      add_generation_prompt=True,
      tools=tools
  )
-
- # 发送请求(这里使用任何推理服务)
- import requests
- payload = {
-     "model": "MiniMaxAI/MiniMax-M1-40k",
-     "prompt": text,
-     "max_tokens": 4000
- }
- response = requests.post(
-     "http://localhost:8000/v1/completions",
-     headers={"Content-Type": "application/json"},
-     json=payload,
-     stream=False,
- )
-
- # 模型输出需要手动解析
- raw_output = response.json()["choices"][0]["text"]
- print("原始输出:", raw_output)
-
- # 使用下面的解析函数处理输出
- function_calls = parse_function_calls(raw_output)
  ```
 
  ## 🛠️ 函数调用的定义
@@ -232,21 +100,22 @@ function_calls = parse_function_calls(raw_output)
  在模型内部处理时,函数定义会被转换为特殊格式并拼接到输入文本中:
 
  ```
- <begin_of_document><beginning_of_sentence>system ai_setting=MiniMax AI
- MiniMax AI是由上海稀宇科技有限公司(MiniMax)自主研发的AI助理。<end_of_sentence>
- <beginning_of_sentence>system tool_setting=tools
  You are provided with these tools:
  <tools>
  {"name": "search_web", "description": "搜索函数。", "parameters": {"properties": {"query_list": {"description": "进行搜索的关键词,列表元素个数为1。", "items": {"type": "string"}, "type": "array"}, "query_tag": {"description": "query的分类", "items": {"type": "string"}, "type": "array"}}, "required": ["query_list", "query_tag"], "type": "object"}}
  </tools>
 
  If you need to call tools, please respond with <tool_calls></tool_calls> XML tags, and provide tool-name and json-object of arguments, following the format below:
  <tool_calls>
  {"name": <tool-name>, "arguments": <args-json-object>}
  ...
- </tool_calls><end_of_sentence>
- <beginning_of_sentence>user name=用户
- OpenAI 和 Gemini 的最近一次发布会都是什么时候?<end_of_sentence>
- <beginning_of_sentence>ai name=MiniMax AI
  ```
 
  ### 模型输出格式
@@ -263,15 +132,16 @@ Okay, I will search for the OpenAI and Gemini latest release.
  </tool_calls>
  ```
 
- ## 📥 手动解析函数调用结果
 
  ### 解析函数调用
 
- 当需要手动解析时,您需要解析模型输出的 XML 标签格式:
 
  ```python
  import re
  import json
 
  def parse_function_calls(content: str):
      """
      解析模型输出中的函数调用
@@ -321,33 +191,23 @@ def execute_function_call(function_name: str, arguments: dict):
        # 构建函数执行结果
        return {
            "role": "tool",
-           "content": [
-               {
-                   "name": function_name,
-                   "type": "text",
-                   "text": json.dumps({
-                       "location": location,
-                       "temperature": "25",
-                       "unit": "celsius",
-                       "weather": "晴朗"
-                   }, ensure_ascii=False)
-               }
-           ]
-       }
    elif function_name == "search_web":
        query_list = arguments.get("query_list", [])
        query_tag = arguments.get("query_tag", [])
        # 模拟搜索结果
        return {
            "role": "tool",
-           "content": [
-               {
-                   "name": function_name,
-                   "type": "text",
-                   "text": f"搜索关键词: {query_list}, 分类: {query_tag}\n搜索结果: 相关信息已找到"
-               }
-           ]
-       }
 
    return None
  ```
@@ -362,61 +222,46 @@ def execute_function_call(function_name: str, arguments: dict):
 
  ```json
  {
-     "role": "tool",
-     "content": [
-         {
-             "name": "search_web",
-             "type": "text",
-             "text": "test_result"
-         }
      ]
  }
  ```
 
  对应如下的模型输入格式:
  ```
- <beginning_of_sentence>tool name=tools
- tool name: search_web
- tool result: test_result
- <end_of_sentence>
  ```
 
- #### 多个结果
 
- 假如模型同时调用了 `search_web` 和 `get_current_weather` 函数,您可以参考如下格式添加执行结果,`content`包含多个结果。
 
  ```json
  {
-     "role": "tool",
-     "content": [
-         {
-             "name": "search_web",
-             "type": "text",
-             "text": "test_result1"
-         },
-         {
-             "name": "get_current_weather",
-             "type": "text",
-             "text": "test_result2"
-         }
-     ]
  }
  ```
 
  对应如下的模型输入格式:
  ```
- <beginning_of_sentence>tool name=tools
- tool name: search_web
- tool result: test_result1
- tool name: get_current_weather
- tool result: test_result2<end_of_sentence>
- ```
-
- 虽然我们建议您参考以上格式,但只要返回给模型的输入易于理解,`name` 和 `text` 的具体内容完全由您自主决定。
 
- ## 📚 参考资料
 
- - [MiniMax-M1 模型仓库](https://github.com/MiniMaxAI/MiniMax-M1)
- - [vLLM 项目主页](https://github.com/vllm-project/vllm)
- - [vLLM Function Calling PR](https://github.com/vllm-project/vllm/pull/20297)
- - [OpenAI Python SDK](https://github.com/openai/openai-python)
 
  ## 🚀 快速开始
 
+ ### 聊天模板使用
 
+ MiniMax-M1 使用特定的聊天模板格式处理函数调用。聊天模板定义在 `tokenizer_config.json` 中,您可以在代码中通过聊天模板来进行使用。
 
  ```python
  from transformers import AutoTokenizer
 
  def get_default_tools():
      return [
          {
+             "name": "get_current_weather",
+             "description": "Get the latest weather for a location",
+             "parameters": {
+                 "type": "object",
+                 "properties": {
+                     "location": {
+                         "type": "string",
+                         "description": "A certain city, such as Beijing, Shanghai"
+                     }
+                 },
+                 "required": ["location"]
+             }
          }
      ]
 
 
      add_generation_prompt=True,
      tools=tools
  )
  ```
 
  ## 🛠️ 函数调用的定义
 
  在模型内部处理时,函数定义会被转换为特殊格式并拼接到输入文本中:
 
  ```
+ ]~!b[]~b]system ai_setting=MiniMax AI
+ MiniMax AI是由上海稀宇科技有限公司(MiniMax)自主研发的AI助理。[e~[
+ ]~b]system tool_setting=tools
  You are provided with these tools:
  <tools>
  {"name": "search_web", "description": "搜索函数。", "parameters": {"properties": {"query_list": {"description": "进行搜索的关键词,列表元素个数为1。", "items": {"type": "string"}, "type": "array"}, "query_tag": {"description": "query的分类", "items": {"type": "string"}, "type": "array"}}, "required": ["query_list", "query_tag"], "type": "object"}}
  </tools>
+
  If you need to call tools, please respond with <tool_calls></tool_calls> XML tags, and provide tool-name and json-object of arguments, following the format below:
  <tool_calls>
  {"name": <tool-name>, "arguments": <args-json-object>}
  ...
+ </tool_calls>[e~[
+ ]~b]user name=用户
+ OpenAI 和 Gemini 的最近一次发布会都是什么时候?[e~[
+ ]~b]ai name=MiniMax AI
  ```
 
  ### 模型输出格式
 
  </tool_calls>
  ```
 
+ ## 📥 函数调用结果处理
 
  ### 解析函数调用
 
+ 您可以使用以下代码解析模型输出的函数调用:
 
  ```python
  import re
  import json
+
  def parse_function_calls(content: str):
      """
      解析模型输出中的函数调用
 
        # 构建函数执行结果
        return {
            "role": "tool",
+           "name": function_name,
+           "content": json.dumps({
+               "location": location,
+               "temperature": "25",
+               "unit": "celsius",
+               "weather": "晴朗"
+           }, ensure_ascii=False)
+       }
    elif function_name == "search_web":
        query_list = arguments.get("query_list", [])
        query_tag = arguments.get("query_tag", [])
        # 模拟搜索结果
        return {
            "role": "tool",
+           "name": function_name,
+           "content": f"搜索关键词: {query_list}, 分类: {query_tag}\n搜索结果: 相关信息已找到"
+       }
 
    return None
  ```
 
 
  ```json
  {
+     "data": [
+         {
+             "role": "tool",
+             "name": "search_web",
+             "content": "search_result"
+         }
      ]
  }
  ```
 
  对应如下的模型输入格式:
  ```
+ ]~b]tool name=search_web
+ search_result[e~[
  ```
 
+ #### 多个结果
 
+ 假如模型同时调用了 `search_web` 和 `get_current_weather` 函数,您可以参考如下格式添加执行结果,`name` 字段为 "tools",`content` 包含多个结果。
 
  ```json
  {
+     "data": [
+         {
+             "role": "tool",
+             "name": "tools",
+             "content": "Tool name: search_web\nTool result: test_result1\n\nTool name: get_current_weather\nTool result: test_result2"
+         }
      ]
  }
  ```
 
  对应如下的模型输入格式:
  ```
+ ]~b]tool name=tools
+ Tool name: search_web
+ Tool result: test_result1
 
+ Tool name: get_current_weather
+ Tool result: test_result2[e~[
+ ```
 
+ 虽然我们建议您参考以上格式,但只要返回给模型的输入易于理解,`name` 和 `content` 的具体内容完全由您自主决定。
 
 
 
docs/vllm_deployment_guide.md CHANGED
@@ -41,19 +41,20 @@ git clone https://huggingface.co/MiniMaxAI/MiniMax-M1-80k
 
  ## 🛠️ Deployment Options
 
- ### Option: Deploy Using Docker (Recommended)
 
  To ensure consistency and stability of the deployment environment, we recommend using Docker for deployment.
 
  ⚠️ **Version Requirements**:
- - MiniMax-M1 model requires vLLM version 0.9.2 or later for full support
- - Special Note: Using vLLM versions below 0.9.2 may result in incompatibility or incorrect precision for the model:
-   - For details, see: [Fix minimax model cache & lm_head precision #19592](https://github.com/vllm-project/vllm/pull/19592)
 
  1. Get the container image:
-
- Currently, the official vLLM Docker image for version v0.9.2 has not been released yet.
- As an example, we will demonstrate how to manually build vLLM using version v0.8.3.
  ```bash
  docker pull vllm/vllm-openai:v0.8.3
  ```
@@ -76,12 +77,21 @@ sudo docker run -it \
      --name $NAME \
      $DOCKER_RUN_CMD \
      $IMAGE /bin/bash
 
- # install vLLM
- cd $CODE_DIR
- git clone https://github.com/vllm-project/vllm.git
- cd vllm
- pip install -e .
  ```
 
  💡 If you are using other environment configurations, please refer to the [vLLM Installation Guide](https://docs.vllm.ai/en/latest/getting_started/installation.html)
 
 
  ## 🛠️ Deployment Options
 
+ ### Option 1: Deploy Using Docker (Recommended)
 
  To ensure consistency and stability of the deployment environment, we recommend using Docker for deployment.
 
  ⚠️ **Version Requirements**:
+ - MiniMax-M1 model requires vLLM version 0.8.3 or later for full support
+ - If you are using a Docker image with a vLLM version lower than required, you will need to:
+   1. Update to the latest vLLM code
+   2. Recompile vLLM from source, following the compilation instructions in Solution 2 of the Common Issues section
+ - Special Note: For vLLM versions between 0.8.3 and 0.9.2, you need to modify the model configuration:
+   1. Open `config.json`
+   2. Change `config['architectures'] = ["MiniMaxM1ForCausalLM"]` to `config['architectures'] = ["MiniMaxText01ForCausalLM"]`
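The `config.json` edit described above can also be scripted. A minimal sketch, assuming `config.json` sits in your local model directory (the `patch_architectures` helper name and the example path are placeholders, not part of any tooling):

```python
import json

def patch_architectures(path: str) -> None:
    # Rewrite the architectures entry in config.json for vLLM versions
    # between 0.8.3 and 0.9.2, as described in the Special Note above.
    with open(path) as f:
        config = json.load(f)
    if config.get("architectures") == ["MiniMaxM1ForCausalLM"]:
        config["architectures"] = ["MiniMaxText01ForCausalLM"]
        with open(path, "w") as f:
            json.dump(config, f, indent=2)

# Usage (path is a placeholder for your local model directory):
# patch_architectures("MiniMax-M1-80k/config.json")
```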
56
 
  1. Get the container image:
 
  ```bash
  docker pull vllm/vllm-openai:v0.8.3
  ```
 
      --name $NAME \
      $DOCKER_RUN_CMD \
      $IMAGE /bin/bash
+ ```
 
+ ### Option 2: Direct Installation of vLLM
+
+ If your environment meets the following requirements, you can install vLLM directly:
+
+ - CUDA 12.1
+ - PyTorch 2.1
+
+ Installation command:
+ ```bash
+ pip install vllm
+ ```
 
  💡 If you are using other environment configurations, please refer to the [vLLM Installation Guide](https://docs.vllm.ai/en/latest/getting_started/installation.html)
docs/vllm_deployment_guide_cn.md CHANGED
@@ -39,18 +39,17 @@ git clone https://huggingface.co/MiniMaxAI/MiniMax-M1-80k
 
  ## 🛠️ 部署方案
 
- ### 方案:使用 Docker 部署(推荐)
 
  为确保部署环境的一致性和稳定性,我们推荐使用 Docker 进行部署。
 
  ⚠️ **版本要求**:
- - 基础要求:vLLM 版本必须 ≥ 0.9.2,以确保对 MiniMax-M1 模型的完整支持
- - 特殊说明:如果使用低于 0.9.2 的 vLLM 版本,会遇见无法支持该模型或者精度不正确的情况:
-   - 详情见:[Fix minimax model cache & lm_head precision #19592](https://github.com/vllm-project/vllm/pull/19592)
 
  1. 获取容器镜像:
-
- 目前 vLLM 官方还未推出v0.9.2版本 docker,我们以 v0.8.3 为例子进行手动编译 vLLM:
  ```bash
  docker pull vllm/vllm-openai:v0.8.3
  ```
@@ -73,12 +72,21 @@ sudo docker run -it \
      --name $NAME \
      $DOCKER_RUN_CMD \
      $IMAGE /bin/bash
 
- # 编译 vLLM
- cd $CODE_DIR
- git clone https://github.com/vllm-project/vllm.git
- cd vllm
- pip install -e .
  ```
 
  💡 如果您使用其他环境配置,请参考 [vLLM 安装指南](https://docs.vllm.ai/en/latest/getting_started/installation.html)
 
 
  ## 🛠️ 部署方案
 
+ ### 方案一:使用 Docker 部署(推荐)
 
  为确保部署环境的一致性和稳定性,我们推荐使用 Docker 进行部署。
 
  ⚠️ **版本要求**:
+ - 基础要求:vLLM 版本必须 ≥ 0.8.3,以确保对 MiniMax-M1 模型的完整支持
+ - 特殊说明:如果使用 vLLM 0.8.3 至 0.9.2 之间的版本,需要修改模型配置文件:
+   - 打开 `config.json`
+   - 将 `config['architectures'] = ["MiniMaxM1ForCausalLM"]` 修改为 `config['architectures'] = ["MiniMaxText01ForCausalLM"]`
 
  1. 获取容器镜像:
 
  ```bash
  docker pull vllm/vllm-openai:v0.8.3
  ```
 
      --name $NAME \
      $DOCKER_RUN_CMD \
      $IMAGE /bin/bash
+ ```
 
+ ### 方案二:直接安装 vLLM
+
+ 如果您的环境满足以下要求:
+
+ - CUDA 12.1
+ - PyTorch 2.1
+
+ 可以直接安装 vLLM
+
+ 安装命令:
+ ```bash
+ pip install vllm
  ```
 
  💡 如果您使用其他环境配置,请参考 [vLLM 安装指南](https://docs.vllm.ai/en/latest/getting_started/installation.html)
figures/wechat-qrcode.jpeg ADDED