prasadsachin committed
Commit: 10252c1
1 Parent(s): 3dd9bd1
Update README.md
README.md CHANGED
@@ -1,5 +1,6 @@
 ---
 library_name: keras-hub
+pipeline_tag: feature-extraction
 ---
 ## Model Overview
 ELECTRA model is a pretraining approach for language models published by Google. Two transformer models are trained, a generator and a discriminator. The generator replaces tokens in a sequence and is trained as a masked language model. The discriminator is trained to discern what tokens have been replaced. This method of pretraining is more efficient than comparable methods like masked language modeling, especially for small models.
@@ -35,5 +36,4 @@ The following model checkpoints are provided by the Keras team. Full code exampl
 | `electra_base_discriminator_uncased_en` | 109.48M | 12-layer base ELECTRA discriminator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. |
 | `electra_base_generator_uncased_en` | 33.58M | 12-layer base ELECTRA generator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. |
 | `electra_large_discriminator_uncased_en` | 335.14M | 24-layer large ELECTRA discriminator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. |
-| `electra_large_generator_uncased_en` | 51.07M | 24-layer large ELECTRA generator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. |
-
+| `electra_large_generator_uncased_en` | 51.07M | 24-layer large ELECTRA generator model. All inputs are lowercased. Trained on English Wikipedia + BooksCorpus. |
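The added `pipeline_tag: feature-extraction` marks these checkpoints for use as text encoders. A minimal sketch of loading one of the presets from the table above with keras-hub and pulling features from it; the `ElectraBackbone` class, input names, and output keys are assumed from keras-hub's BERT-style encoder API and are not part of this diff:

```python
import numpy as np
import keras_hub

# Load the pretrained base discriminator listed in the table above.
# Assumed API: keras_hub.models.ElectraBackbone.from_preset(...)
backbone = keras_hub.models.ElectraBackbone.from_preset(
    "electra_base_discriminator_uncased_en"
)

# Run a dummy batch through the encoder to get contextual features.
# Input names follow the BERT-style backbones in keras-hub (assumption).
outputs = backbone(
    {
        "token_ids": np.ones((1, 12), dtype="int32"),
        "segment_ids": np.zeros((1, 12), dtype="int32"),
        "padding_mask": np.ones((1, 12), dtype="int32"),
    }
)

# Per-token embeddings for feature extraction, e.g. (1, 12, 768) for the base model.
print(outputs["sequence_output"].shape)
```

Any other preset name from the table should drop in the same way, with the large variants returning a wider hidden dimension.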