Update README.md
README.md
CHANGED
@@ -33,7 +33,7 @@ This model is a fine-tuned version of [bert-tiny](prajjwal1/bert-tiny) on [amazo
 ## Model description
 
 TinyBERT is 7.5 times smaller and 9.4 times faster on inference compared to its teacher BERT model (while DistilBERT is 40% smaller and 1.6 times faster than BERT).
-
+This model was trained on the entire dataset (3.6M samples), in contrast to the [distilbert model](https://huggingface.co/AdamCodd/distilbert-base-uncased-finetuned-sentiment-amazon), which was trained on only 10% of it.
 
 ## Intended uses & limitations
 
 While this model may not be as accurate as the distilbert model, its performance should be enough for most use cases.
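For quick reference, a minimal inference sketch with the Transformers pipeline for the model this README describes. The model id below is an assumption for illustration (substitute the id this repository is actually published under):

```python
from transformers import pipeline

# Sentiment inference with the fine-tuned bert-tiny model.
# NOTE: the model id is a hypothetical placeholder for illustration;
# replace it with this repository's actual id.
classifier = pipeline(
    "sentiment-analysis",
    model="AdamCodd/tinybert-sentiment-amazon",
)

print(classifier("Battery life is great and shipping was fast."))
# e.g. [{'label': 'positive', 'score': 0.98}]
```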