Midm-LLM committed
Commit 7565af1 · verified · 1 Parent(s): 0f431e8

Update README.md

Files changed (1): README.md (+3, -3)
README.md CHANGED
@@ -15,7 +15,7 @@ library_name: transformers
 
 <p align="center">
 <br>
-<span style="font-size: 60px; font-weight: bold;">Mi:dm 2.0-Base</span>
+<span style="font-size: 60px; font-weight: bold;">Mi:dm 2.0 Base</span>
 </br>
 </p>
 
@@ -62,11 +62,11 @@ library_name: transformers
 
 Mi:dm 2.0 is released in two versions:
 
-- **Mi:dm 2.0-Base**
+- **Mi:dm 2.0 Base**
   An 11.5B parameter dense model designed to balance model size and performance.
   It extends an 8B-scale model by applying the Depth-up Scaling (DuS) method, making it suitable for real-world applications that require both performance and versatility.
 
-- **Mi:dm 2.0-Mini**
+- **Mi:dm 2.0 Mini**
   A lightweight 2.3B parameter dense model optimized for on-device environments and systems with limited GPU resources.
   It was derived from the Base model through pruning and distillation to enable compact deployment.
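The README these hunks edit declares `library_name: transformers`, so the checkpoints are meant to be loaded through the standard transformers API. Below is a minimal, illustrative sketch of loading the Base model; the repository id is an assumption for illustration only, so use the exact id published on the model card.

```python
# Minimal sketch: loading a Mi:dm 2.0 checkpoint with Hugging Face transformers.
# The repository id below is assumed for illustration; check the model card for the real id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "K-intelligence/Midm-2.0-Base-Instruct"  # assumed id, not confirmed by this commit

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half-precision keeps the 11.5B Base model within a modest GPU budget
    device_map="auto",           # requires `accelerate`; places layers across available devices
)

prompt = "Explain Depth-up Scaling (DuS) in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same pattern would apply to the 2.3B Mini variant, which is the more natural choice for on-device or GPU-constrained deployments described in the diff.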