---
language: en
tags:
- azbert
- pretraining
- fill-mask
widget:
- text: "$f$ $($ $x$ [MASK] $y$ $)$"
example_title: "mathy"
- text: "$x$ [MASK] $x$ $equal$ $2$ $x$"
example_title: "mathy"
- text: "Proof by [MASK] that $n$ $fact$ $gt$ $3$ $n$ for $n$ $gt$ $6$"
example_title: "mathy"
- text: "Proof by induction that $n$ [MASK] $gt$ $3$ $n$ for $n$ $gt$ $6$"
example_title: "mathy"
- text: "The goal of life is [MASK]."
example_title: "philosophical"
license: mit
---
## About
This [repository](https://github.com/approach0/azbert) provides boilerplate for pushing a mask-filling model to the HuggingFace Model Hub.
### Upload to HuggingFace
Place your tokenizer, model checkpoint(s), and optionally the training logs (`events.out.*`) in the `./ckpt` directory. Do not include any large files other than `pytorch_model.bin` and the `events.out.*` log files.
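The testing commands below assume a layout along these lines (the checkpoint directory names are just the ones used in the examples):
```
ckpt/
├── math-tokenizer/    # tokenizer files (vocab.txt, tokenizer.json, ...)
├── 2-2-0/
│   └── encoder.ckpt   # model checkpoint
└── events.out.*       # optional training logs
```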
Optionally, test the model on the MLM (mask-filling) task:
```sh
pip install pya0 # for math token preprocessing
# testing local checkpoints:
python test.py ./ckpt/math-tokenizer ./ckpt/2-2-0/encoder.ckpt
# testing Model Hub checkpoints:
python test.py approach0/coco-mae-220 approach0/coco-mae-220
```
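Once a checkpoint is on the Model Hub, it can also be queried with the standard `transformers` fill-mask pipeline. This is only a minimal sketch; unlike `test.py`, it skips the pya0 math-token preprocessing, so the input must already be pre-tokenized as in the widget examples above:
```sh
python -c '
from transformers import pipeline

# load the Hub checkpoint as a fill-mask pipeline
fill = pipeline("fill-mask", model="approach0/coco-mae-220")

# pre-tokenized math input, as in the widget examples
for pred in fill("$x$ [MASK] $x$ $equal$ $2$ $x$"):
    print(pred["token_str"], pred["score"])
'
```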
> **Note**
> Modify the test examples in `test.txt` to play with it.
> The test file is tab-separated; the first column lists additional positions to mask in the right-hand sentence (useful for masking tokens in math markup).
> A zero means no additional mask positions.
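For illustration, `test.txt` entries might look like this (columns are tab-separated; the position syntax shown here is an assumption, not confirmed by the repo):
```
0	Proof by [MASK] that $n$ $fact$ $gt$ $3$ $n$ for $n$ $gt$ $6$
2	$x$ [MASK] $x$ $equal$ $2$ $x$
```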
To upload to HuggingFace, use the `upload2hgf.sh` script.
Before running this script, be sure to check that (a pre-flight sketch follows the list):
* `git-lfs` is installed
* a git remote named `hgf` points to `https://huggingface.co/your/repo`
* the model directory contains all the required files: `config.json` and `pytorch_model.bin`
* the tokenizer directory contains all the required files: `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, `vocab.txt`, and `tokenizer.json`
* there is no `tokenizer_file` field in `tokenizer_config.json` (it sometimes points to a local path under `~/.cache`)
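A minimal pre-flight sketch of these checks (the `./ckpt` paths are assumptions; adjust them to your layout):
```sh
git lfs version                  # fails if git-lfs is not installed
git remote get-url hgf           # should print https://huggingface.co/your/repo
ls ./ckpt/config.json ./ckpt/pytorch_model.bin
ls ./ckpt/math-tokenizer         # expect added_tokens.json, tokenizer.json, etc.
grep tokenizer_file ./ckpt/math-tokenizer/tokenizer_config.json \
  && echo "WARNING: remove the tokenizer_file field"
```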