widget:
- text: ๏ มัจฉาใน
- text: ๏ มิตรแท้
- text: ๏ แม้นชีวี
---
# PhraAphaiManee-LM (Phra Aphai Mani-style poem generation with GPT-2)

PhraAphaiManee-LM is a GPT-2 model for Thai poems in the Phra Aphai Mani style.
I use [GPT-2 for Thai lyrics](https://huggingface.co/tupleblog/generate-thai-lyrics),
which is based on [GPT-2 base Thai](https://huggingface.co/flax-community/gpt2-base-thai), as the pre-trained model, and fine-tune it on the
[PhraAphaiManee (พระอภัยมณี)](https://vajirayana.org/%e0%b8%9e%e0%b8%a3%e0%b8%b0%e0%b8%ad%e0%b8%a0%e0%b8%b1%e0%b8%a2%e0%b8%a1%e0%b8%93%e0%b8%b5) dataset.

## Example use

```py
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

# Load the fine-tuned tokenizer and model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained("Kongfha/PhraAphaiManee-LM")
model = AutoModelForCausalLM.from_pretrained("Kongfha/PhraAphaiManee-LM")

generate = pipeline("text-generation", model=model, tokenizer=tokenizer)
text = "๏ สัมผัสเส้นขอบฟ้าชลาลัย"

generated_text = generate(text, max_length=140, top_k=25, temperature=1)  # parameters can be varied

print(f"Input: {text}")
print(f"Output:\n {generated_text[0]['generated_text']}")
```
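The `top_k` and `temperature` arguments control how the continuation is sampled. A minimal sketch of drawing several alternative continuations at once, assuming the model downloads successfully (`do_sample` and `num_return_sequences` are standard `transformers` generation parameters, not specific to this model):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("Kongfha/PhraAphaiManee-LM")
model = AutoModelForCausalLM.from_pretrained("Kongfha/PhraAphaiManee-LM")
generate = pipeline("text-generation", model=model, tokenizer=tokenizer)

text = "๏ สัมผัสเส้นขอบฟ้าชลาลัย"

# Higher temperature and larger top_k give more varied continuations;
# num_return_sequences draws several independent samples from one prompt.
samples = generate(
    text,
    max_length=80,
    do_sample=True,
    top_k=50,
    temperature=1.2,
    num_return_sequences=3,
)
for i, sample in enumerate(samples, 1):
    print(f"--- sample {i} ---")
    print(sample["generated_text"])
```

Comparing the three samples side by side is a quick way to judge whether the sampling settings are too conservative (near-identical outputs) or too loose (broken metre).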