---
language: es
tags:
- summarization
- sagemaker
- mt5
- spanish
license: apache-2.0
datasets:
- mlsum
model-index:
- name: mt5-small-mlsum
  results:
  - task:
      name: Abstractive Text Summarization
      type: abstractive-text-summarization
    dataset:
      name: "MLSUM: MultiLingual SUMmarization dataset (Spanish)"
      type: mlsum
    metrics:
    - name: Validation ROUGE-1
      type: rouge-1
      value: 26.4352
    - name: Validation ROUGE-2
      type: rouge-2
      value: 8.9293
    - name: Validation ROUGE-L
      type: rouge-l
      value: 21.2622
    - name: Validation ROUGE-LSUM
      type: rouge-lsum
      value: 21.5518
    - name: Test ROUGE-1
      type: rouge-1
      value: 26.0756
    - name: Test ROUGE-2
      type: rouge-2
      value: 8.4669
    - name: Test ROUGE-L
      type: rouge-l
      value: 20.8167
    - name: Test ROUGE-LSUM
      type: rouge-lsum
      value: 21.0822
widget:
- text: |
    Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
    Philipp: Sure you can use the new Hugging Face Deep Learning Container.
    Jeff: ok.
    Jeff: and how can I get started?
    Jeff: where can I find documentation?
    Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
---
## mt5-small-mlsum
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning Container.
For more information, see:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)
## Hyperparameters

```json
{
  "dataset_config": "es",
  "dataset_name": "mlsum",
  "do_eval": true,
  "do_predict": true,
  "do_train": true,
  "fp16": true,
  "max_target_length": 64,
  "model_name_or_path": "google/mt5-small",
  "num_train_epochs": 10,
  "output_dir": "/opt/ml/checkpoints",
  "per_device_eval_batch_size": 4,
  "per_device_train_batch_size": 4,
  "predict_with_generate": true,
  "sagemaker_container_log_level": 20,
  "sagemaker_program": "run_summarization.py",
  "save_strategy": "epoch",
  "seed": 7,
  "summary_column": "summary",
  "text_column": "text"
}
```
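The training job itself is launched through the SageMaker Python SDK. Below is a minimal sketch of how such a job could be submitted with the hyperparameters above; the instance type, framework versions, IAM role placeholder, and `source_dir` are assumptions for illustration, not values recorded from the original run.

```python
# Hypothetical SageMaker launch sketch; instance type, container versions, role
# and source_dir are assumptions and were not part of the original model card.
from sagemaker.huggingface import HuggingFace

hyperparameters = {
    "dataset_name": "mlsum",
    "dataset_config": "es",
    "model_name_or_path": "google/mt5-small",
    "text_column": "text",
    "summary_column": "summary",
    "max_target_length": 64,
    "num_train_epochs": 10,
    "per_device_train_batch_size": 4,
    "per_device_eval_batch_size": 4,
    "fp16": True,
    "predict_with_generate": True,
    "do_train": True,
    "do_eval": True,
    "do_predict": True,
    "save_strategy": "epoch",
    "seed": 7,
    "output_dir": "/opt/ml/checkpoints",
}

huggingface_estimator = HuggingFace(
    entry_point="run_summarization.py",             # script named in the hyperparameters
    source_dir="./examples/pytorch/summarization",  # assumed location of the script
    instance_type="ml.p3.2xlarge",                  # assumed instance type
    instance_count=1,
    role="<your-sagemaker-execution-role>",         # placeholder IAM role
    transformers_version="4.6",                     # assumed container versions
    pytorch_version="1.7",
    py_version="py36",
    hyperparameters=hyperparameters,
)

huggingface_estimator.fit()
```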
## Usage
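The model can be loaded with the 🤗 Transformers summarization pipeline. A minimal sketch, assuming the repository id is `LeoCordoba/mt5-small-mlsum` and using an illustrative Spanish news snippet:

```python
# Minimal inference sketch; the repository id and the example text are assumptions.
from transformers import pipeline

summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-mlsum")

text = (
    "El Gobierno anunció este lunes un nuevo paquete de medidas económicas "
    "destinadas a las pequeñas y medianas empresas, que incluye créditos a "
    "tasa subsidiada y la postergación de vencimientos impositivos."
)

# max_length matches the max_target_length used during training
print(summarizer(text, max_length=64)[0]["summary_text"])
```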
## Results
| metric | score |
| --- | ----- |
| eval_rouge1 | 26.4352 |
| eval_rouge2 | 8.9293 |
| eval_rougeL | 21.2622 |
| eval_rougeLsum | 21.5518 |
| test_rouge1 | 26.0756 |
| test_rouge2 | 8.4669 |
| test_rougeL | 20.8167 |
| test_rougeLsum | 21.0822 |
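The figures above can in principle be reproduced by generating summaries over the MLSUM (es) splits and scoring them with ROUGE. The sketch below scores a small validation sample; the sample size, generation settings, and the use of the `evaluate` ROUGE metric are assumptions, not the exact setup behind the reported numbers.

```python
# Rough evaluation sketch on a small MLSUM (es) validation sample; this illustrates
# one way to compute ROUGE and is not the original evaluation script.
import evaluate
from datasets import load_dataset
from transformers import pipeline

summarizer = pipeline("summarization", model="LeoCordoba/mt5-small-mlsum")
rouge = evaluate.load("rouge")  # requires the rouge_score package

# Small illustrative sample of the Spanish validation split
sample = load_dataset("mlsum", "es", split="validation[:100]")

predictions = [
    out["summary_text"]
    for out in summarizer(sample["text"], max_length=64, truncation=True)
]

scores = rouge.compute(predictions=predictions, references=sample["summary"])
print(scores)  # rouge1, rouge2, rougeL, rougeLsum
```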