---
license: mit
base_model: facebook/bart-large-cnn
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: test-dialogue-summarization
  results: []
pipeline_tag: summarization
library_name: transformers
---


# test-dialogue-summarization

This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unspecified dialogue-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8548
- Rouge1: 66.4768
- Rouge2: 48.5059
- RougeL: 55.6107
- RougeLsum: 64.379
- Gen Len: 135.19
- Eval runtime: 106.4023 s (0.94 samples/s, 0.235 steps/s)
- Epoch: 5.0
 
## Model description

`facebook/bart-large-cnn` fine-tuned for abstractive dialogue summarization over 5 epochs, using the hyperparameters listed under Training procedure below.

## Intended uses & limitations

The model is intended for abstractive summarization of short dialogues via the Transformers `summarization` pipeline. Limitations beyond those of the base model have not been documented.
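A minimal usage sketch, assuming the checkpoint is available under the repo id `test-dialogue-summarization` (substitute the actual Hub id or a local checkpoint directory); the sample dialogue is made up:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; the model id below is a placeholder.
summarizer = pipeline("summarization", model="test-dialogue-summarization")

dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: Great, I'll bring you some tomorrow :-)"
)

# max_length/min_length bound the generated summary length in tokens.
print(summarizer(dialogue, max_length=60, min_length=10)[0]["summary_text"])
```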

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
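For reference, a sketch of how these settings map onto `Seq2SeqTrainingArguments` (the `output_dir` is a placeholder, and `predict_with_generate` is an assumption implied by the ROUGE evaluation):

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed above (Transformers 4.31 field names).
training_args = Seq2SeqTrainingArguments(
    output_dir="test-dialogue-summarization",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,   # effective train batch size: 4 * 2 = 8
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    predict_with_generate=True,      # assumed: needed to score ROUGE on generations
)
```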

### Training results

| Epoch | Training Loss | Validation Loss | Rouge1  | Rouge2  | RougeL  | RougeLsum | Gen Len  |
|:-----:|:-------------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:|
| 1     | No log        | 0.968213        | 59.6827 | 35.0686 | 44.651  | 56.6182   | 137.6667 |
| 2     | No log        | 0.961468        | 61.0803 | 37.6095 | 47.3902 | 58.3805   | 134.1933 |
| 3     | No log        | 0.965955        | 62.0829 | 39.7344 | 48.7368 | 59.3025   | 135.8333 |
| 4     | No log        | 0.975513        | 63.4949 | 42.1475 | 50.6908 | 60.8318   | 134.2467 |
| 5     | No log        | 0.983745        | 64.5566 | 43.5552 | 51.9777 | 61.9797   | 134.18   |
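The ROUGE values above follow the common 0-100 scale. A sketch of a typical `compute_metrics` hook that produces such scores with the 🤗 `evaluate` library (the exact metric wiring used for this run is an assumption, and the prediction/label post-processing is illustrative):

```python
import numpy as np
import evaluate
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
rouge = evaluate.load("rouge")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    # -100 marks ignored label positions; swap in the pad token before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)
    scores = rouge.compute(predictions=decoded_preds,
                           references=decoded_labels, use_stemmer=True)
    # Scale to 0-100 to match the table above.
    return {k: round(v * 100, 4) for k, v in scores.items()}
```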


### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.2
- Tokenizers 0.13.3