---
tags:
- summarization
- summary
- booksum
- long-document
- long-form
license:
- apache-2.0
- bsd-3-clause
datasets:
- kmfoda/booksum
metrics:
- rouge
inference: false
model-index:
- name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13
  results:
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: samsum
      type: samsum
      config: samsum
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 24.4101
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 5.003
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 17.2544
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 20.9183
      verified: true
    - name: loss
      type: loss
      value: 3.194674015045166
      verified: true
    - name: gen_len
      type: gen_len
      value: 58.9951
      verified: true
  - task:
      type: summarization
      name: Summarization
    dataset:
      name: billsum
      type: billsum
      config: default
      split: test
    metrics:
    - name: ROUGE-1
      type: rouge
      value: 37.3648
      verified: true
    - name: ROUGE-2
      type: rouge
      value: 12.3316
      verified: true
    - name: ROUGE-L
      type: rouge
      value: 22.075
      verified: true
    - name: ROUGE-LSUM
      type: rouge
      value: 31.1679
      verified: true
    - name: loss
      type: loss
      value: 2.745267391204834
      verified: true
    - name: gen_len
      type: gen_len
      value: 157.3126
      verified: true
---
# long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13

> Evaluating some metric results before merging with the "main" WIP version.

This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12) on the `kmfoda/booksum` dataset.

The "base" checkpoint, updated whenever a training session is productive, is [here](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP).
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):

- learning_rate: 0.0006
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 64
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 1.1
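
For reference, here is a minimal sketch of how the values above might map onto `Seq2SeqTrainingArguments`. Only the listed hyperparameters come from this card; the output directory is a hypothetical placeholder, and the optimizer settings match the Trainer defaults rather than an explicit choice recorded here.

```python
# Hypothetical reconstruction of the training configuration above;
# everything not in the hyperparameter list is an assumption.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./long-t5-tglobal-large-booksum-wip13",  # assumed path
    learning_rate=6e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=64,  # 2 per device x 64 steps (x GPUs) -> effective batch 128
    lr_scheduler_type="cosine",
    warmup_ratio=0.01,
    num_train_epochs=1.1,
    # Adam betas=(0.9, 0.999) and epsilon=1e-8 are already the Trainer defaults.
)
```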
### Framework versions

- Transformers 4.21.2
- Pytorch 1.10.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1