---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- summarize_from_feedback
metrics:
- rouge
pipeline_tag: summarization
base_model: google/flan-t5-large
model-index:
- name: flan-t5-large-finetuned-openai-summarize_from_feedback
  results:
  - task:
      type: text2text-generation
      name: Sequence-to-sequence Language Modeling
    dataset:
      name: summarize_from_feedback
      type: summarize_from_feedback
      config: comparisons
      split: train
      args: comparisons
    metrics:
    - type: rouge
      value: 30.2401
      name: Rouge1
    - type: rouge
      value: 11.4916
      name: Rouge2
    - type: rouge
      value: 24.6485
      name: RougeL
    - type: rouge
      value: 26.1801
      name: RougeLSum
---
# flan-t5-large-finetuned-openai-summarize_from_feedback

This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the summarize_from_feedback dataset. It achieves the following results on the evaluation set:

- Loss: 2.3118
- Rouge1: 30.2401
- Rouge2: 11.4916
- RougeL: 24.6485
- RougeLSum: 26.1801
- Gen Len: 18.8428
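
A minimal inference sketch using the `transformers` summarization pipeline. The checkpoint id is taken from the citation below; the input text and generation lengths are illustrative assumptions, not the card's official example:

```python
# Minimal inference sketch (illustrative; not the card's official example).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="mrm8488/flan-t5-large-finetuned-openai-summarize_from_feedback",
)

text = (
    "Your long post or article goes here. This checkpoint was fine-tuned "
    "to produce short, TL;DR-style summaries of Reddit posts."
)

# Gen Len above averaged ~19 tokens, so a short max_length is reasonable.
result = summarizer(text, max_length=64, min_length=8, do_sample=False)
print(result[0]["summary_text"])
```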
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
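
The metadata above points at the summarize_from_feedback dataset (`comparisons` config, `train` split). A minimal loading sketch with the `datasets` library; the `openai/summarize_from_feedback` Hub id and the field names are assumptions based on the published dataset, not details from this card:

```python
# Minimal data-loading sketch (the card itself leaves this section open).
from datasets import load_dataset

# Hub id assumed from the dataset name in the metadata; the "comparisons"
# config pairs each post with two candidate summaries and a human choice.
ds = load_dataset("openai/summarize_from_feedback", "comparisons", split="train")

example = ds[0]
print(example["info"]["post"][:200])    # source post (truncated)
print(example["summaries"][0]["text"])  # first candidate summary
print(example["choice"])                # index of the human-preferred summary
```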
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
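
A hedged sketch of how these hyperparameters map onto `Seq2SeqTrainingArguments`; the output directory is an assumption, and everything not listed above (data pipeline, tokenization, hardware) is left out:

```python
# Reproduction sketch only: mirrors the listed hyperparameters; everything
# else (output dir, datasets, preprocessing) is assumed, not from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="flan-t5-large-finetuned-openai-summarize_from_feedback",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    # Adam betas/epsilon are the Trainer defaults, matching the card.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```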
### Training results

See the TensorBoard logs.
## Citation

```bibtex
@misc{manuel_romero_2023,
    author    = { {Manuel Romero} },
    title     = { flan-t5-large-finetuned-openai-summarize_from_feedback (Revision 51666f9) },
    year      = 2023,
    url       = { https://huggingface.co/mrm8488/flan-t5-large-finetuned-openai-summarize_from_feedback },
    doi       = { 10.57967/hf/0266 },
    publisher = { Hugging Face }
}
```
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2