Update README.md with new model card content
README.md
@@ -24,6 +24,23 @@
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/facebookresearch/fairseq/).

## Links

* [BART Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/bart-quickstart-notebook)
* [BART API Documentation](https://keras.io/keras_hub/api/models/bart/)
* [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
* [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)

## Presets

The following model checkpoints are provided by the Keras team. Full code examples for each are available below.

| Preset name       | Parameters | Description |
|-------------------|------------|-------------|
| bart_base_en      | 139.42M    | 6-layer BART model where case is maintained. Trained on BookCorpus, English Wikipedia and CommonCrawl. |
| bart_large_en     | 406.29M    | 12-layer BART model where case is maintained. Trained on BookCorpus, English Wikipedia and CommonCrawl. |
| bart_large_en_cnn | 406.29M    | The bart_large_en backbone model fine-tuned on the CNN+DM summarization dataset. |

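As a minimal sketch (not part of the original card), any preset name from the table above can be passed to the KerasHub BART API linked in the Links section. This assumes `keras-hub` is installed; the helper name `load_bart` and the `PRESET_PARAMS` lookup table are illustrative, and the actual weight download is deferred until the function is called.

```python
# Parameter counts copied from the preset table above, for quick lookup.
PRESET_PARAMS = {
    "bart_base_en": "139.42M",
    "bart_large_en": "406.29M",
    "bart_large_en_cnn": "406.29M",
}


def load_bart(preset: str = "bart_large_en_cnn"):
    """Load a BART preset by name; downloads weights on first use."""
    # Imported lazily so the lookup table works without keras_hub installed.
    import keras_hub

    if preset not in PRESET_PARAMS:
        raise ValueError(f"Unknown preset: {preset!r}")
    return keras_hub.models.BartSeq2SeqLM.from_preset(preset)
```

For example, `load_bart("bart_large_en_cnn").generate(article_text)` would summarize `article_text`, since that preset is fine-tuned for summarization.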
__Arguments__