Update README.md to reflect the breaking name change
README.md CHANGED
@@ -1,16 +1,17 @@
 ---
 license: cc-by-nc-sa-4.0
 ---
-# SEA-LION-7B-Instruct-
+# SEA-LION-7B-Instruct-Research
 
 SEA-LION is a collection of Large Language Models (LLMs) which has been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
 The size of the models range from 3 billion to 7 billion parameters.
 This is the card for the SEA-LION 7B Instruct (Non-Commercial) model.
 
-For more details on the base model, please refer to the [base model's model card](https://huggingface.co/aisingapore/
+For more details on the base model, please refer to the [base model's model card](https://huggingface.co/aisingapore/sea-lion-7b).
 
-
+For the commercially permissive model, please refer to the [SEA-LION-7B-Instruct](https://huggingface.co/aisingapore/sea-lion-7b-instruct).
 
+SEA-LION stands for <i>Southeast Asian Languages In One Network</i>.
 
 ## Model Details
 
@@ -88,8 +89,8 @@ The tokenizer type is Byte-Pair Encoding (BPE).
 
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-tokenizer = AutoTokenizer.from_pretrained("aisingapore/
-model = AutoModelForCausalLM.from_pretrained("aisingapore/
+tokenizer = AutoTokenizer.from_pretrained("aisingapore/sea-lion-7b-instruct-nc", trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained("aisingapore/sea-lion-7b-instruct-nc", trust_remote_code=True)
 
 prompt_template = "### USER:\n{human_prompt}\n\n### RESPONSE:\n"
 prompt = """Apa sentimen dari kalimat berikut ini?
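For readers trying the updated snippet, here is a minimal sketch of how the loading lines and the prompt template from the diff above fit together. It assumes the `aisingapore/sea-lion-7b-instruct-nc` checkpoint named in the new code and that remote code is trusted; the generation arguments and the completion of the truncated example prompt are illustrative, not part of the commit.

```python
# Minimal sketch, assuming the aisingapore/sea-lion-7b-instruct-nc checkpoint from the
# updated README and trust_remote_code=True; generation settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aisingapore/sea-lion-7b-instruct-nc", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("aisingapore/sea-lion-7b-instruct-nc", trust_remote_code=True)

# Fill the ### USER / ### RESPONSE template from the README with a user prompt.
prompt_template = "### USER:\n{human_prompt}\n\n### RESPONSE:\n"
human_prompt = "Apa sentimen dari kalimat berikut ini?"  # "What is the sentiment of the following sentence?"; the diff truncates the full example
full_prompt = prompt_template.format(human_prompt=human_prompt)

# Tokenize, generate greedily, and decode only the newly generated tokens.
inputs = tokenizer(full_prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```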