---
tags:
- generated_from_trainer
datasets:
- jed351/shikoto_zh_hk
metrics:
- accuracy
model-index:
- name: gpt2-shikoto
  results:
  - task:
      name: Causal Language Modeling
      type: text-generation
    dataset:
      name: jed351/shikoto_zh_hk
      type: jed351/shikoto_zh_hk
    metrics:
    - name: Loss
      type: loss
      value: 3.0956006050109863
license: openrail
---

# gpt2-shikoto
This model was trained on a dataset I obtained from an online novel site.

**Please be aware that the stories (training data) might contain inappropriate content. This model is intended for research purposes only.**

The base model can be found [here](https://huggingface.co/jed351/gpt2-base-zh-hk); it was obtained by patching a [GPT2 Chinese model](https://huggingface.co/ckiplab/gpt2-base-chinese) and its tokenizer with Cantonese characters. Refer to the base model card for details on the patching process.

Besides language modeling, another aim of this experiment was to test the accelerate library by offloading certain workloads to the CPU, as well as to find the optimal number of training iterations.
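
The exact accelerate configuration (including how the CPU offloading was set up) is not recorded in this card. The snippet below is only a minimal sketch of wrapping a causal-LM training loop with `Accelerator`; the dummy corpus, batch size and learning rate are placeholders rather than the values used for this model.

```
import torch
from torch.utils.data import DataLoader
from accelerate import Accelerator
from transformers import AutoModelForCausalLM, AutoTokenizer

# Device placement / offloading choices come from `accelerate config`, not from this code.
accelerator = Accelerator()

tokenizer = AutoTokenizer.from_pretrained("jed351/gpt2-base-zh-hk")
model = AutoModelForCausalLM.from_pretrained("jed351/gpt2-base-zh-hk")

# Tiny dummy corpus so the sketch runs end to end; the real run used jed351/shikoto_zh_hk.
texts = ["第一句測試文字。", "第二句測試文字。"]
enc = tokenizer(texts, return_tensors="pt", padding=True)
dataset = [{"input_ids": ids, "attention_mask": mask, "labels": ids}
           for ids, mask in zip(enc["input_ids"], enc["attention_mask"])]
loader = DataLoader(dataset, batch_size=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

model.train()
for batch in loader:
    loss = model(**batch).loss      # causal-LM cross-entropy loss
    accelerator.backward(loss)      # accelerate handles the backward pass
    optimizer.step()
    optimizer.zero_grad()
```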
The perplexity of this model is 16.12 after 400,000 steps, compared with 27.02 after 400,000 steps for the previous [attempt](https://huggingface.co/jed351/gpt2_tiny_zh-hk-shikoto). Training this model took roughly the same amount of time, but only 1 GPU was used here.
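
Perplexity here is computed in the conventional way, as the exponential of the mean cross-entropy loss over the evaluation text. A minimal sketch for a single piece of text (the text itself is just a placeholder):

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jed351/gpt2-base-zh-hk")
model = AutoModelForCausalLM.from_pretrained("jed351/gpt2_base_zh-hk-shikoto")
model.eval()

text = "your evaluation text"  # placeholder
ids = tokenizer(text, return_tensors="pt")["input_ids"]

with torch.no_grad():
    loss = model(input_ids=ids, labels=ids).loss  # mean cross-entropy per token

print(f"perplexity: {torch.exp(loss).item():.2f}")
```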
## Training procedure

Please refer to the [script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling) provided by Hugging Face.

The model was trained for 400,000 steps on 1 NVIDIA Quadro RTX 6000 for around 30 hours at the Research Computing Services of Imperial College London.
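
The exact arguments passed to the script are not recorded here. As a rough, Trainer-based sketch of the same kind of run (the dataset split and column names, and every hyperparameter other than the step count, are assumptions):

```
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("jed351/gpt2-base-zh-hk")
model = AutoModelForCausalLM.from_pretrained("jed351/gpt2-base-zh-hk")

# Assumes the dataset has a "train" split with a plain "text" column.
dataset = load_dataset("jed351/shikoto_zh_hk", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="gpt2_base_zh-hk-shikoto",
    max_steps=400_000,               # the step count reported above
    per_device_train_batch_size=8,   # assumed
    learning_rate=5e-5,              # assumed (Trainer default)
    save_steps=10_000,               # assumed
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```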
### How to use it?

```
from transformers import AutoTokenizer
from transformers import TextGenerationPipeline, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("jed351/gpt2-base-zh-hk")
model = AutoModelForCausalLM.from_pretrained("jed351/gpt2_base_zh-hk-shikoto")

# try messing around with the parameters
generator = TextGenerationPipeline(model, tokenizer,
                                   max_new_tokens=200,
                                   no_repeat_ngram_size=3)  # add device=0 if you have a GPU

input_string = "your input"
output = generator(input_string)
string = output[0]['generated_text'].replace(' ', '')
print(string)
```
### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1
- Datasets 2.8.0
- Tokenizers 0.13.2 |