Midm-LLM committed
Commit 27c03f4 · verified · 1 parent: f69e3f8

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -13,7 +13,7 @@ library_name: transformers
 
 <p align="center">
 <br>
-<span style="font-size: 60px; font-weight: bold;">Mi:dm 2.0-Mini</span>
+<span style="font-size: 60px; font-weight: bold;">Mi:dm 2.0 Mini</span>
 </br>
 </p>
 <p align="center">
@@ -59,11 +59,11 @@ library_name: transformers
 
 Mi:dm 2.0 is released in two versions:
 
-- **Mi:dm 2.0-Base**
+- **Mi:dm 2.0 Base**
   An 11.5B parameter dense model designed to balance model size and performance.
   It extends an 8B-scale model by applying the Depth-up Scaling (DuS) method, making it suitable for real-world applications that require both performance and versatility.
 
-- **Mi:dm 2.0-Mini**
+- **Mi:dm 2.0 Mini**
   A lightweight 2.3B parameter dense model optimized for on-device environments and systems with limited GPU resources.
   It was derived from the Base model through pruning and distillation to enable compact deployment.
 