---
language:
- en
library_name: pytorch
tags:
- language-model
- gpt2
- transformer
- wikitext-103
model-index:
- name: gpt2_wt103-40m_12-layer
results:
- task:
type: language-modeling
dataset:
type: wikitext
name: Wikitext-103
metrics:
- type: perplexity
value: 40.6
---
# Model description
Paper: [Characterizing Verbatim Short-Term Memory in Neural Language Models](https://arxiv.org/abs/2210.13569)

This is a GPT-2-small-like decoder-only transformer model trained on the [WikiText-103 dataset](https://paperswithcode.com/dataset/wikitext-103).
# Usage
You can download and load the model as follows:
```python
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained("Kristijan/gpt2_wt103_12-layer")
```
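Once loaded, you can sanity-check that the checkpoint matches the description above by printing its configuration (a minimal sketch; `n_layer`, `n_head`, and `n_embd` are the standard GPT-2 config attribute names):
```python
# Core architecture hyperparameters of the loaded checkpoint
print(model.config.n_layer, model.config.n_head, model.config.n_embd)
```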
Alternatively, if you have downloaded the checkpoint files from this repository, you can load the model from the local folder:
```python
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained(path_to_folder_with_checkpoint_files)
```
## BPE Tokenizer
You should first pretokenize your text using the [MosesTokenizer](https://pypi.org/project/mosestokenizer/):
```python
from mosestokenizer import MosesTokenizer
with MosesTokenizer('en') as pretokenize:
    pretokenized_text = " ".join(pretokenize(text_string))  # text_string is your raw input text
```
Then, to BPE-tokenize your text for this model, use the [tokenizer trained on WikiText-103](https://huggingface.co/Kristijan/wikitext-103_tokenizer_v2):
```python
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer_v2")
tokenized_text = tokenizer.tokenize(pretokenized_text)  # returns BPE token strings; use convert_tokens_to_ids for model input
```
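Putting the pieces together, the sketch below pretokenizes a sentence, BPE-tokenizes it, maps the tokens to vocabulary ids, and scores it with the model. The example sentence is a placeholder, and the perplexity computation (exponentiated average cross-entropy loss) is the standard Hugging Face pattern, not necessarily the exact evaluation setup used in the paper:
```python
import torch
from mosestokenizer import MosesTokenizer
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model = GPT2LMHeadModel.from_pretrained("Kristijan/gpt2_wt103_12-layer")
tokenizer = GPT2TokenizerFast.from_pretrained("Kristijan/wikitext-103-tokenizer_v2")
model.eval()

text_string = "The cat sat on the mat ."  # placeholder input sentence

# Moses pretokenization followed by BPE tokenization and id conversion
with MosesTokenizer('en') as pretokenize:
    pretokenized_text = " ".join(pretokenize(text_string))
tokens = tokenizer.tokenize(pretokenized_text)
input_ids = torch.tensor([tokenizer.convert_tokens_to_ids(tokens)])

# Passing the input ids as labels makes the model return the average
# cross-entropy loss; its exponential is the perplexity of the sentence
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss
print(f"Perplexity: {torch.exp(loss).item():.2f}")
```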
# Intended uses
This checkpoint is intended for research purposes, for example, for researchers studying the behavior of transformer language models trained on smaller datasets.