---
tags:
- generated_from_trainer
datasets:
- jed351/shikoto_zh_hk
metrics:
- accuracy
model-index:
- name: gpt2-shikoto
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: jed351/shikoto_zh_hk
      type: jed351/shikoto_zh_hk
    metrics:
    - name: Loss
      type: loss
      value: 3.0956006050109863
license: openrail
---


# gpt2-shikoto

This model was trained on a dataset I obtained from an online novel site. 
**Please be aware that the stories (training data) might contain inappropriate content. This model is intended for research purposes only.**



The base model can be found [here](https://huggingface.co/jed351/gpt2-base-zh-hk); it was obtained by 
patching a [GPT2 Chinese model](https://huggingface.co/ckiplab/gpt2-base-chinese) and its tokenizer with Cantonese characters.
Refer to the base model card for details of the patching process, which conceptually works as sketched below.
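
A minimal sketch of that kind of vocabulary patching, assuming the standard `transformers` API (the real character list and procedure are described on the base model card):

```
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ckiplab/gpt2-base-chinese")
model = AutoModelForCausalLM.from_pretrained("ckiplab/gpt2-base-chinese")

# Add Cantonese-specific characters (illustrative short list) that may be
# missing from the original vocabulary; tokens already present are skipped.
new_chars = ["嘅", "咗", "喺", "嚟", "噉"]
tokenizer.add_tokens(new_chars)

# Grow the embedding matrix so the new token ids get (randomly initialised) rows.
model.resize_token_embeddings(len(tokenizer))
```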

Besides language modeling, another aim of this experiment was to test the `accelerate` library by offloading certain workloads to the CPU, and to find the optimal number of training steps.
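
For context, the core of an `accelerate`-based training loop looks roughly like the sketch below; the offload settings themselves live in the configuration chosen via `accelerate config` (e.g. DeepSpeed ZeRO CPU offload) rather than in the code. The dataset variable and hyperparameters here are placeholders, not the values used for this model:

```
from accelerate import Accelerator
from torch.optim import AdamW
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, default_data_collator

accelerator = Accelerator()  # picks up the settings chosen via `accelerate config`

model = AutoModelForCausalLM.from_pretrained("jed351/gpt2-base-zh-hk")
optimizer = AdamW(model.parameters(), lr=5e-5)      # placeholder learning rate
train_dataloader = DataLoader(lm_dataset,           # lm_dataset: a tokenised, grouped dataset prepared elsewhere
                              collate_fn=default_data_collator,
                              batch_size=8)         # placeholder batch size

# prepare() wraps the objects so device placement and any configured
# offloading (e.g. to CPU) happen transparently.
model, optimizer, train_dataloader = accelerator.prepare(model, optimizer, train_dataloader)

model.train()
for batch in train_dataloader:
    loss = model(**batch).loss
    accelerator.backward(loss)
    optimizer.step()
    optimizer.zero_grad()
```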

The perplexity of this model is 16.12 after 400,000 steps, compared with 27.02 after the same number of steps for the previous [attempt](https://huggingface.co/jed351/gpt2_tiny_zh-hk-shikoto).
Training took roughly the same amount of time, but I only used 1 GPU here.
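
For reference, perplexity is the standard causal language modeling metric: the exponential of the mean cross-entropy loss on the evaluation set.

```
import math

# Perplexity is exp(mean cross-entropy loss) over the evaluation set,
# e.g. the eval_loss returned by Trainer.evaluate().
def perplexity(eval_loss: float) -> float:
    return math.exp(eval_loss)
```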


## Training procedure

Please refer to the [script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) 
provided by Hugging Face; an example invocation is sketched below.
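
The exact hyperparameters of this run are not listed here, but an invocation of the example `run_clm.py` script along the following lines would match the general setup. The model, dataset and step count come from this card; the batch size, block size and save interval are illustrative placeholders:

```
# batch size, block size and save interval below are placeholders,
# not the values actually used for this model
python run_clm.py \
    --model_name_or_path jed351/gpt2-base-zh-hk \
    --dataset_name jed351/shikoto_zh_hk \
    --do_train \
    --do_eval \
    --max_steps 400000 \
    --per_device_train_batch_size 8 \
    --block_size 512 \
    --save_steps 50000 \
    --output_dir ./gpt2-shikoto
```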


The model was trained for 400,000 steps on a single NVIDIA Quadro RTX 6000 for around 30 hours at the Research Computing Services of Imperial College London.




### How to use it?
```
from transformers import AutoTokenizer, AutoModelForCausalLM, TextGenerationPipeline

tokenizer = AutoTokenizer.from_pretrained("jed351/gpt2-base-zh-hk")
model = AutoModelForCausalLM.from_pretrained("jed351/gpt2_base_zh-hk-shikoto")

# Try messing around with the parameters; pass device=0 if you have a GPU.
generator = TextGenerationPipeline(model, tokenizer,
                                   max_new_tokens=200,
                                   no_repeat_ngram_size=3)

input_string = "your input"
output = generator(input_string)
# The character-level tokenizer decodes with spaces between tokens, so strip them.
string = output[0]['generated_text'].replace(' ', '')
print(string)
```

### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2