Update README.md
README.md CHANGED
@@ -18,6 +18,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 # long-t5-tglobal-base-synthsumm_direct
 
+
+This was fine-tuned on a synthetic dataset of curated long-context text and `GPT-3.5-turbo-1106` summaries spanning several domains, including "random" long-context examples from redpajama, the pile, etc. Try it in the [gradio demo](https://huggingface.co/spaces/pszemraj/document-summarization)
+
+## Model description
+
 This model is a fine-tuned version of [google/long-t5-tglobal-base](https://huggingface.co/google/long-t5-tglobal-base) on the None dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.4378
@@ -27,17 +32,6 @@ It achieves the following results on the evaluation set:
 - Rougelsum: 43.0271
 - Gen Len: 84.5231
 
-## Model description
-
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
 
 ## Training procedure
 
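For readers who want to try the updated checkpoint outside the demo space, here is a minimal sketch using the `transformers` summarization pipeline. The repo id `pszemraj/long-t5-tglobal-base-synthsumm_direct` is an assumption inferred from the card's heading and the linked demo space (the commit itself does not state it), and the decoding settings are illustrative, not the card's official usage snippet.

```python
# Minimal usage sketch, not an official snippet from the model card.
# Assumption: the checkpoint is published as
# "pszemraj/long-t5-tglobal-base-synthsumm_direct" (inferred from the
# README heading and the pszemraj demo space linked above).
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-base-synthsumm_direct",
)

long_document = "..."  # replace with your long-context input text

result = summarizer(
    long_document,
    max_length=256,          # cap on summary length (illustrative)
    no_repeat_ngram_size=3,  # illustrative decoding setting
    truncation=True,         # truncate inputs past the encoder limit
)
print(result[0]["summary_text"])
```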