xiaowu0162 committed
Commit: 4ed2f43
Parent(s): 28337fa
Update README.md
README.md CHANGED
@@ -2,7 +2,7 @@
 license: mit
 ---
 
-<span style="color:blue">**Note: please check [DeepKPG](https://github.com/uclanlp/DeepKPG#scibart) for using this model in huggingface, including setting up the newly trained tokenizer.**</span
+<span style="color:blue">**Note: please check [DeepKPG](https://github.com/uclanlp/DeepKPG#scibart) for using this model in huggingface, including setting up the newly trained tokenizer.**</span>
 
 Paper: [Pre-trained Language Models for Keyphrase Generation: A Thorough Empirical Study](https://arxiv.org/abs/2212.10233)
 
@@ -22,7 +22,7 @@ Paper: [Pre-trained Language Models for Keyphrase Generation: A Thorough Empiric
 Pre-training Corpus: [S2ORC (titles and abstracts)](https://github.com/allenai/s2orc)
 
 Pre-training Details:
--
+- Pre-trained **from scratch** with a science vocabulary
 - Batch size: 2048
 - Total steps: 250k
 - Learning rate: 3e-4
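For context, a minimal sketch of loading the model with the `transformers` library. The checkpoint ID `uclanlp/scibart-base` below is hypothetical; the DeepKPG repository linked in the note documents the actual checkpoint name and the setup required for the newly trained science-vocabulary tokenizer, which this sketch does not reproduce:

```python
# Minimal sketch, not the documented workflow: the DeepKPG repo linked in the
# note above describes the required custom tokenizer setup. The model ID here
# is a hypothetical placeholder.
from transformers import AutoTokenizer, BartForConditionalGeneration

model_id = "uclanlp/scibart-base"  # hypothetical ID; see DeepKPG for the real one

# The note warns that the newly trained tokenizer needs extra setup, so
# AutoTokenizer may not work out of the box for this checkpoint.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = BartForConditionalGeneration.from_pretrained(model_id)

# Keyphrase generation from a paper title and abstract (illustrative input).
text = "Attention Is All You Need. The dominant sequence transduction models ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
outputs = model.generate(**inputs, max_new_tokens=60, num_beams=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```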