aashish1904 committed · Commit 0c152d6 · 1 Parent(s): 6c98719

Upload README.md with huggingface_hub

Files changed (1): README.md (+8 −6)
README.md CHANGED
@@ -31,7 +31,7 @@ This is quantized version of [SeaLLMs/SeaLLMs-v3-7B-Chat](https://huggingface.co
 <p align="center">
 <a href="https://damo-nlp-sg.github.io/SeaLLMs/" target="_blank" rel="noopener">Website</a>
 &nbsp;&nbsp;
-<a href="https://huggingface.co/SeaLLMs/SeaLLM3-7B-Chat" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
+<a href="https://huggingface.co/SeaLLMs/SeaLLMs-v3-7B-Chat" target="_blank" rel="noopener"> 🤗 Tech Memo</a>
 &nbsp;&nbsp;
 <a href="https://huggingface.co/spaces/SeaLLMs/SeaLLM-Chat" target="_blank" rel="noopener"> 🤗 DEMO</a>
 &nbsp;&nbsp;
@@ -51,7 +51,9 @@ We introduce **SeaLLMs-v3**, the latest series of the SeaLLMs (Large Language Mo
 
 SeaLLMs is tailored for handling a wide range of languages spoken in the SEA region, including English, Chinese, Indonesian, Vietnamese, Thai, Tagalog, Malay, Burmese, Khmer, Lao, Tamil, and Javanese.
 
-This page introduces the SeaLLMs-v3-7B-Chat model, specifically fine-tuned to follow human instructions effectively for task completion, making it directly applicable to your applications.
+This page introduces the **SeaLLMs-v3-7B-Chat** model, specifically fine-tuned to follow human instructions effectively for task completion, making it directly applicable to your applications.
+
+You may also refer to the [SeaLLMs-v3-1.5B-Chat](https://huggingface.co/SeaLLMs/SeaLLMs-v3-1.5B-Chat) model which requires much lower computational resources and can be easily loaded locally.
 
 
 ### Get started with `Transformers`
@@ -64,11 +66,11 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
 device = "cuda" # the device to load the model onto
 
 model = AutoModelForCausalLM.from_pretrained(
-    "SeaLLMs/SeaLLM3-7B-chat",
+    "SeaLLMs/SeaLLMs-v3-7B-Chat", # can change to "SeaLLMs/SeaLLMs-v3-1.5B-Chat" if your resource is limited
     torch_dtype=torch.bfloat16,
     device_map=device
 )
-tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM3-7B-chat")
+tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLMs-v3-7B-Chat")
 
 # prepare messages to model
 prompt = "Hiii How are you?"
@@ -100,11 +102,11 @@ from transformers import TextStreamer
 device = "cuda" # the device to load the model onto
 
 model = AutoModelForCausalLM.from_pretrained(
-    "SeaLLMs/SeaLLM3-7B-chat",
+    "SeaLLMs/SeaLLMs-v3-7B-Chat", # can change to "SeaLLMs/SeaLLMs-v3-1.5B-Chat" if your resource is limited
     torch_dtype=torch.bfloat16,
     device_map=device
 )
-tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLM3-7B-chat")
+tokenizer = AutoTokenizer.from_pretrained("SeaLLMs/SeaLLMs-v3-7B-Chat")
 
 # prepare messages to model
 messages = [
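Both snippets touched by this diff build a `messages` list and hand it to the tokenizer's chat template before generation. SeaLLMs-v3 is built on Qwen2, whose chat template follows the ChatML convention; the sketch below illustrates roughly what that rendering produces. Note that `to_chatml` is a hypothetical helper written for illustration only, not the model's authoritative template — real code should always call `tokenizer.apply_chat_template` so the checkpoint's own template is used.

```python
def to_chatml(messages, add_generation_prompt=True):
    # Render a list of {"role", "content"} dicts into a ChatML-style prompt,
    # the convention used by Qwen2-family models (assumed here for illustration).
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    ]
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "".join(parts)


messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hiii How are you?"},
]
print(to_chatml(messages))
```

This is why `tokenizer.apply_chat_template(messages, add_generation_prompt=True, ...)` appears between building `messages` and calling `model.generate` in the README: it serializes the turns into the exact prompt format the chat fine-tune was trained on.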