**Longformer** uses a combination of a sliding window (*local*) attention and *global* attention. Global attention is user-configured based on the task to allow the model to learn task-specific representations.
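As a minimal usage sketch (assuming this checkpoint loads with the standard `transformers` Longformer classes; the input text and the choice of globally-attending token below are only illustrative), global attention is assigned per token through `global_attention_mask`:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "mrm8488/longformer-base-4096-spanish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Illustrative input; the model handles sequences of up to 4096 tokens.
text = "El modelo procesa documentos largos en español."
inputs = tokenizer(text, return_tensors="pt")

# By default every token uses only sliding-window (local) attention.
# Tokens flagged with 1 in global_attention_mask also attend globally;
# for classification-style tasks this is typically the <s> token.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

outputs = model(**inputs, global_attention_mask=global_attention_mask)
```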
This model was made following the research presented in [Longformer: The Long-Document Transformer](https://arxiv.org/abs/2004.05150) by Iz Beltagy, Matthew E. Peters and Arman Cohan.
## Citation

If you want to cite this model, you can use the following:
```bibtex
@misc{mromero2022longformer-base-4096-spanish,
  title={Spanish LongFormer by Manuel Romero},
  author={Romero, Manuel},
  publisher={Hugging Face},
  journal={Hugging Face Hub},
  howpublished={\url{https://huggingface.co/mrm8488/longformer-base-4096-spanish}},
  year={2022}
}
```