Divyasreepat committed
Commit 60c36be · verified · 1 Parent(s): 11160e7

Update README.md with new model card content

Files changed (1)
  1. README.md +29 -0
README.md CHANGED
@@ -27,6 +27,35 @@ To load preset architectures and weights, use the `from_preset` constructor.
  Disclaimer: Pre-trained models are provided on an "as is" basis, without
  warranties or conditions of any kind.

+ ## Links
+
+ * [ALBERT Quickstart Notebook](https://www.kaggle.com/code/laxmareddypatlolla/albert-quickstart-notebook)
+ * [ALBERT API Documentation](https://keras.io/keras_hub/api/models/albert/)
+ * [ALBERT Model Card](https://huggingface.co/docs/transformers/en/model_doc/albert)
+ * [KerasHub Beginner Guide](https://keras.io/guides/keras_hub/getting_started/)
+ * [KerasHub Model Publishing Guide](https://keras.io/guides/keras_hub/upload/)
+
+ ## Installation
+
+ Keras and KerasHub can be installed with:
+
+ ```
+ pip install -U -q keras-hub
+ pip install -U -q keras
+ ```
+
+ JAX, TensorFlow, and Torch come preinstalled in Kaggle Notebooks. For instructions on installing them in another environment, see the [Keras Getting Started](https://keras.io/getting_started/) page.
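+
+ Keras 3 selects its backend from the `KERAS_BACKEND` environment variable, which must be set before Keras is imported. A minimal sketch of switching backends (shown here with JAX; "tensorflow" and "torch" work the same way):
+
+ ```
+ import os
+
+ # Choose the backend before the first Keras import; it cannot be
+ # changed later in the same process.
+ os.environ["KERAS_BACKEND"] = "jax"
+
+ import keras_hub
+ import keras
+ ```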
+
+ ## Presets
+
+ The following model checkpoints are provided by the Keras team. Full code examples for each are available below.
+
+ | Preset name | Parameters | Description |
+ |-------------|------------|-------------|
+ | albert_base_en_uncased | 11.68M | 12-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
+ | albert_large_en_uncased | 17.68M | 24-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
+ | albert_extra_large_en_uncased | 58.72M | 24-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
+ | albert_extra_extra_large_en_uncased | 222.60M | 12-layer ALBERT model where all input is lowercased. Trained on English Wikipedia + BooksCorpus. |
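+
+ For instance, a task model can be built from any preset above with the `from_preset` constructor; a minimal sketch (the preset name and `num_classes` value are illustrative):
+
+ ```
+ import keras_hub
+
+ # Build an end-to-end text classifier from a pre-trained ALBERT preset.
+ classifier = keras_hub.models.TextClassifier.from_preset(
+     "albert_base_en_uncased",
+     num_classes=2,
+ )
+
+ # The attached preprocessor lets the model consume raw strings directly.
+ classifier.predict(["What an amazing movie!"])
+ ```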

  __Arguments__