indiejoseph committed · Commit f101dce · 1 Parent(s): ce489d3

Create README.md

---
license: llama2
datasets:
- indiejoseph/ted-transcriptions-cantonese
- indiejoseph/wikipedia-zh-yue-qa
- indiejoseph/wikipedia-zh-yue-summaries
- indiejoseph/ted-translation-zhhk-zhcn
- OpenAssistant/oasst1
language:
- yue
---

# Cantonese Llama 2 7b v1

## Model Introduction
This model is fine-tuned from [cantonese-llama-2-7b](https://huggingface.co/indiejoseph/cantonese-llama-2-7b), a continued-pretraining model built on Meta's Llama 2.
The fine-tuning data combines [OpenAssistant/oasst1](https://huggingface.co/datasets/OpenAssistant/oasst1) (with all Simplified Chinese removed), [indiejoseph/ted-transcriptions-cantonese](https://huggingface.co/datasets/indiejoseph/ted-transcriptions-cantonese), [indiejoseph/wikipedia-zh-yue-qa](https://huggingface.co/datasets/indiejoseph/wikipedia-zh-yue-qa), [indiejoseph/wikipedia-zh-yue-summaries](https://huggingface.co/datasets/indiejoseph/wikipedia-zh-yue-summaries), and [indiejoseph/ted-translation-zhhk-zhcn](https://huggingface.co/datasets/indiejoseph/ted-translation-zhhk-zhcn).
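
The exact preprocessing script is not published with this card; the sketch below only shows how the listed mixture could be pulled for inspection with the Hugging Face `datasets` library. The `split="train"` argument is an assumption about each repo's layout, not something stated above.

```python
# Hypothetical inspection sketch, not the author's training pipeline.
from datasets import load_dataset

mixture = [
    "OpenAssistant/oasst1",  # Simplified Chinese rows were removed for fine-tuning
    "indiejoseph/ted-transcriptions-cantonese",
    "indiejoseph/wikipedia-zh-yue-qa",
    "indiejoseph/wikipedia-zh-yue-summaries",
    "indiejoseph/ted-translation-zhhk-zhcn",
]

for name in mixture:
    ds = load_dataset(name, split="train")  # assumes each repo exposes a "train" split
    print(f"{name}: {len(ds)} examples, columns={ds.column_names}")
```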

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("cantonese-llama-2-7b-oasst-v1/", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("cantonese-llama-2-7b-oasst-v1/")

# Prompt template the model was tuned to answer; {} is replaced by the user question
template = """A chat between a curious user and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the user's questions.

Human: {}

Assistant:
"""

# Left padding keeps prompts right-aligned, which a decoder-only model needs for batched generation
tokenizer.pad_token = "[PAD]"
tokenizer.padding_side = "left"

def inference(input_texts):
    """Generate assistant replies for a list of user questions."""
    inputs = tokenizer(
        [template.format(text) for text in input_texts],
        return_tensors="pt",
        padding=True,
        truncation=True,
        max_length=512,
    ).to("cuda")

    # Generate
    generate_ids = model.generate(**inputs, max_new_tokens=512)
    outputs = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
    # Keep only the text that follows the "Assistant:" marker
    outputs = [out.split("Assistant:")[1].strip() for out in outputs]

    return outputs


# inference() takes a list of questions and returns a list of answers
print(inference(["香港現任特首係邊個?"])[0])  # "Who is the current Chief Executive of Hong Kong?"
# Output: 香港現任特首係李家超。 ("The current Chief Executive of Hong Kong is John Lee.")

print(inference(["2019年香港發生咗咩事?"])[0])  # "What happened in Hong Kong in 2019?"
# Output: 2019年香港發生咗反修例運動。 ("The anti-extradition bill movement happened in Hong Kong in 2019.")
```
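
The example above decodes greedily. For more varied replies, standard `transformers` sampling options can be passed to `generate`; the values below are illustrative defaults, not settings published with this model.

```python
# Drop-in replacement for the generate call inside inference();
# sampling values are illustrative, not tuned for this model.
generate_ids = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,    # sample instead of greedy decoding
    temperature=0.7,   # soften the next-token distribution
    top_p=0.9,         # nucleus sampling
)
```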