---
language: en
tags:
- sagemaker
- bart
- summarization
license: apache-2.0
datasets:
- samsum
widget:
- text: "Jeff: Can I train a \U0001F917 Transformers model on Amazon SageMaker? \n\
Philipp: Sure you can use the new Hugging Face Deep Learning Container. \nJeff:\
\ ok.\nJeff: and how can I get started? \nJeff: where can I find documentation?\
\ \nPhilipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face\
\ "
model-index:
- name: philschmid/bart-base-samsum
results:
- task:
type: summarization
name: Summarization
dataset:
name: samsum
type: samsum
config: samsum
split: test
metrics:
- name: ROUGE-1
type: rouge
value: 45.3438
verified: true
- name: ROUGE-2
type: rouge
value: 21.6953
verified: true
- name: ROUGE-L
type: rouge
value: 38.1365
verified: true
- name: ROUGE-LSUM
type: rouge
value: 41.5913
verified: true
- name: loss
type: loss
value: 1.5832244157791138
verified: true
- name: gen_len
type: gen_len
value: 17.9927
verified: true
---
## `bart-base-samsum`
This model was trained using Amazon SageMaker and the new Hugging Face Deep Learning Container.
You can find the notebook [here]() and the accompanying blog post [here]().
For more information, see:
- [🤗 Transformers Documentation: Amazon SageMaker](https://huggingface.co/transformers/sagemaker.html)
- [Example Notebooks](https://github.com/huggingface/notebooks/tree/master/sagemaker)
- [Amazon SageMaker documentation for Hugging Face](https://docs.aws.amazon.com/sagemaker/latest/dg/hugging-face.html)
- [Python SDK SageMaker documentation for Hugging Face](https://sagemaker.readthedocs.io/en/stable/frameworks/huggingface/index.html)
- [Deep Learning Container](https://github.com/aws/deep-learning-containers/blob/master/available_images.md#huggingface-training-containers)
## Hyperparameters
```json
{
"dataset_name": "samsum",
"do_eval": true,
"do_train": true,
"fp16": true,
"learning_rate": 5e-05,
"model_name_or_path": "facebook/bart-base",
"num_train_epochs": 3,
"output_dir": "/opt/ml/model",
"per_device_eval_batch_size": 8,
"per_device_train_batch_size": 8,
"seed": 7
}
```
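As a rough illustration, these hyperparameters would be passed to a `HuggingFace` estimator from the SageMaker Python SDK. The sketch below is not the exact training setup: the entry point, source directory, instance type, framework versions, and IAM role are all assumptions.
```python
# Hedged sketch of launching this job with the SageMaker Python SDK.
# entry_point, source_dir, instance_type, versions, and role are assumptions.
from sagemaker.huggingface import HuggingFace

hyperparameters = {
    "dataset_name": "samsum",
    "do_eval": True,
    "do_train": True,
    "fp16": True,
    "learning_rate": 5e-05,
    "model_name_or_path": "facebook/bart-base",
    "num_train_epochs": 3,
    "output_dir": "/opt/ml/model",
    "per_device_eval_batch_size": 8,
    "per_device_train_batch_size": 8,
    "seed": 7,
}

huggingface_estimator = HuggingFace(
    entry_point="run_summarization.py",   # assumed: standard 🤗 examples script
    source_dir="./examples/seq2seq",      # assumed path
    instance_type="ml.p3.2xlarge",        # assumed instance type
    instance_count=1,
    role="<your-sagemaker-execution-role>",
    transformers_version="4.4",           # assumed framework versions
    pytorch_version="1.6",
    py_version="py36",
    hyperparameters=hyperparameters,
)

huggingface_estimator.fit()
```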
## Train results
| key | value |
| --- | ----- |
| epoch | 3 |
| init_mem_cpu_alloc_delta | 180190 |
| init_mem_cpu_peaked_delta | 18282 |
| init_mem_gpu_alloc_delta | 558658048 |
| init_mem_gpu_peaked_delta | 0 |
| train_mem_cpu_alloc_delta | 6658519 |
| train_mem_cpu_peaked_delta | 642937 |
| train_mem_gpu_alloc_delta | 2267624448 |
| train_mem_gpu_peaked_delta | 10355728896 |
| train_runtime | 98.4931 |
| train_samples | 14732 |
| train_samples_per_second | 3.533 |
## Eval results
| key | value |
| --- | ----- |
| epoch | 3 |
| eval_loss | 1.5356481075286865 |
| eval_mem_cpu_alloc_delta | 659047 |
| eval_mem_cpu_peaked_delta | 18254 |
| eval_mem_gpu_alloc_delta | 0 |
| eval_mem_gpu_peaked_delta | 300285440 |
| eval_runtime | 0.3116 |
| eval_samples | 818 |
| eval_samples_per_second | 2625.337 |
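The ROUGE numbers in the metadata can be reproduced roughly along these lines. This is a minimal sketch, not the evaluation script used for this card; truncation and generation settings are assumptions, so scores may differ slightly.
```python
# Hedged sketch: score the model on the samsum test split with ROUGE.
from datasets import load_dataset, load_metric
from transformers import pipeline

summarizer = pipeline("summarization", model="philschmid/bart-base-samsum")
test = load_dataset("samsum", split="test")

# Generate summaries for every test dialogue (settings are assumptions).
predictions = [out["summary_text"] for out in summarizer(test["dialogue"], truncation=True)]

rouge = load_metric("rouge")
scores = rouge.compute(predictions=predictions, references=test["summary"])
print({k: round(v.mid.fmeasure * 100, 4) for k, v in scores.items()})
```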
## Usage
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="philschmid/bart-base-samsum")
conversation = '''Jeff: Can I train a 🤗 Transformers model on Amazon SageMaker?
Philipp: Sure you can use the new Hugging Face Deep Learning Container.
Jeff: ok.
Jeff: and how can I get started?
Jeff: where can I find documentation?
Philipp: ok, ok you can find everything here. https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face
'''
summarizer(conversation)
```
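If you need control over decoding, the same model can be used without the pipeline. A minimal sketch, reusing `conversation` from the snippet above; `num_beams` and `max_length` are illustrative values, not the card's settings.
```python
# Lower-level equivalent of the pipeline call, with explicit generation args.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("philschmid/bart-base-samsum")
model = AutoModelForSeq2SeqLM.from_pretrained("philschmid/bart-base-samsum")

inputs = tokenizer(conversation, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=4, max_length=60)  # assumed values
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```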