- Among models of the same size, Baichuan-7B has achieved the current state-of-the-art (SOTA) level, as evidenced by the following MMLU metrics.
- Baichuan-7B is trained on proprietary bilingual Chinese-English corpora, optimized for Chinese, and achieves SOTA performance on C-Eval.
- Unlike LLaMA, which completely prohibits commercial use, Baichuan-7B employs a more lenient open-source license, allowing for commercial purposes.

## How to Get Started with the Model

Inference code:

```python
import torch
```