pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---
question-answering | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-legal_data
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.9101
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
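For reference, a minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments`; the output directory is a placeholder, and every argument not listed above keeps its default:
```python
from transformers import TrainingArguments

# Sketch only: the hyperparameters listed above, expressed as TrainingArguments.
# "output_dir" is a placeholder; unlisted arguments keep their defaults.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-legal_data",
    learning_rate=2e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=100,
)
```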
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 26 | 5.3529 |
| No log | 2.0 | 52 | 5.4226 |
| No log | 3.0 | 78 | 5.2550 |
| No log | 4.0 | 104 | 5.1011 |
| No log | 5.0 | 130 | 5.1857 |
| No log | 6.0 | 156 | 5.5119 |
| No log | 7.0 | 182 | 5.4480 |
| No log | 8.0 | 208 | 5.6993 |
| No log | 9.0 | 234 | 5.9614 |
| No log | 10.0 | 260 | 5.6987 |
| No log | 11.0 | 286 | 5.6679 |
| No log | 12.0 | 312 | 5.9850 |
| No log | 13.0 | 338 | 5.6065 |
| No log | 14.0 | 364 | 5.3162 |
| No log | 15.0 | 390 | 5.7856 |
| No log | 16.0 | 416 | 5.5786 |
| No log | 17.0 | 442 | 5.6028 |
| No log | 18.0 | 468 | 5.7649 |
| No log | 19.0 | 494 | 5.5382 |
| 1.8345 | 20.0 | 520 | 6.3654 |
| 1.8345 | 21.0 | 546 | 5.3575 |
| 1.8345 | 22.0 | 572 | 5.3808 |
| 1.8345 | 23.0 | 598 | 5.9340 |
| 1.8345 | 24.0 | 624 | 6.1475 |
| 1.8345 | 25.0 | 650 | 6.2188 |
| 1.8345 | 26.0 | 676 | 5.7651 |
| 1.8345 | 27.0 | 702 | 6.2629 |
| 1.8345 | 28.0 | 728 | 6.1356 |
| 1.8345 | 29.0 | 754 | 5.9255 |
| 1.8345 | 30.0 | 780 | 6.4252 |
| 1.8345 | 31.0 | 806 | 5.6967 |
| 1.8345 | 32.0 | 832 | 6.4324 |
| 1.8345 | 33.0 | 858 | 6.5087 |
| 1.8345 | 34.0 | 884 | 6.1113 |
| 1.8345 | 35.0 | 910 | 6.7443 |
| 1.8345 | 36.0 | 936 | 6.6970 |
| 1.8345 | 37.0 | 962 | 6.5578 |
| 1.8345 | 38.0 | 988 | 6.1963 |
| 0.2251 | 39.0 | 1014 | 6.4893 |
| 0.2251 | 40.0 | 1040 | 6.6347 |
| 0.2251 | 41.0 | 1066 | 6.7106 |
| 0.2251 | 42.0 | 1092 | 6.8129 |
| 0.2251 | 43.0 | 1118 | 6.6386 |
| 0.2251 | 44.0 | 1144 | 6.4134 |
| 0.2251 | 45.0 | 1170 | 6.6883 |
| 0.2251 | 46.0 | 1196 | 6.6406 |
| 0.2251 | 47.0 | 1222 | 6.3065 |
| 0.2251 | 48.0 | 1248 | 7.0281 |
| 0.2251 | 49.0 | 1274 | 7.3646 |
| 0.2251 | 50.0 | 1300 | 7.1086 |
| 0.2251 | 51.0 | 1326 | 6.4749 |
| 0.2251 | 52.0 | 1352 | 6.3303 |
| 0.2251 | 53.0 | 1378 | 6.2919 |
| 0.2251 | 54.0 | 1404 | 6.3855 |
| 0.2251 | 55.0 | 1430 | 6.9501 |
| 0.2251 | 56.0 | 1456 | 6.8714 |
| 0.2251 | 57.0 | 1482 | 6.9856 |
| 0.0891 | 58.0 | 1508 | 6.9910 |
| 0.0891 | 59.0 | 1534 | 6.9293 |
| 0.0891 | 60.0 | 1560 | 7.3493 |
| 0.0891 | 61.0 | 1586 | 7.1834 |
| 0.0891 | 62.0 | 1612 | 7.0479 |
| 0.0891 | 63.0 | 1638 | 6.7674 |
| 0.0891 | 64.0 | 1664 | 6.7553 |
| 0.0891 | 65.0 | 1690 | 7.3074 |
| 0.0891 | 66.0 | 1716 | 6.8071 |
| 0.0891 | 67.0 | 1742 | 7.6622 |
| 0.0891 | 68.0 | 1768 | 6.9555 |
| 0.0891 | 69.0 | 1794 | 7.0153 |
| 0.0891 | 70.0 | 1820 | 7.2085 |
| 0.0891 | 71.0 | 1846 | 6.7582 |
| 0.0891 | 72.0 | 1872 | 6.7989 |
| 0.0891 | 73.0 | 1898 | 6.7012 |
| 0.0891 | 74.0 | 1924 | 7.0088 |
| 0.0891 | 75.0 | 1950 | 7.1024 |
| 0.0891 | 76.0 | 1976 | 6.6968 |
| 0.058 | 77.0 | 2002 | 7.5249 |
| 0.058 | 78.0 | 2028 | 6.9199 |
| 0.058 | 79.0 | 2054 | 7.1995 |
| 0.058 | 80.0 | 2080 | 6.9349 |
| 0.058 | 81.0 | 2106 | 7.4025 |
| 0.058 | 82.0 | 2132 | 7.4199 |
| 0.058 | 83.0 | 2158 | 6.8081 |
| 0.058 | 84.0 | 2184 | 7.4777 |
| 0.058 | 85.0 | 2210 | 7.1990 |
| 0.058 | 86.0 | 2236 | 7.0062 |
| 0.058 | 87.0 | 2262 | 7.5724 |
| 0.058 | 88.0 | 2288 | 6.9362 |
| 0.058 | 89.0 | 2314 | 7.1368 |
| 0.058 | 90.0 | 2340 | 7.2183 |
| 0.058 | 91.0 | 2366 | 6.8684 |
| 0.058 | 92.0 | 2392 | 7.1433 |
| 0.058 | 93.0 | 2418 | 7.2161 |
| 0.058 | 94.0 | 2444 | 7.1442 |
| 0.058 | 95.0 | 2470 | 7.3098 |
| 0.058 | 96.0 | 2496 | 7.1264 |
| 0.0512 | 97.0 | 2522 | 6.9424 |
| 0.0512 | 98.0 | 2548 | 6.9155 |
| 0.0512 | 99.0 | 2574 | 6.9038 |
| 0.0512 | 100.0 | 2600 | 6.9101 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu102
- Datasets 1.12.1
- Tokenizers 0.10.3
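Since the card lacks a usage section, here is a minimal, untested sketch of extractive question answering with this checkpoint; the question and context strings are invented placeholders:
```python
from transformers import pipeline

# Minimal sketch: load the fine-tuned checkpoint for extractive QA.
qa = pipeline("question-answering",
              model="MariamD/distilbert-base-uncased-finetuned-legal_data")

# Placeholder inputs; substitute real legal text.
result = qa(question="Who signed the lease?",
            context="The lease was signed by the tenant on 1 March 2020.")
print(result["answer"], result["score"])
```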
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-legal_data", "results": []}]} | MariamD/distilbert-base-uncased-finetuned-legal_data | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
question-answering | transformers | {"language": "english", "datasets": ["legal dataset"], "pipeline_tag": "question-answering"} | MariamD/my-t5-qa-legal | null | [
"transformers",
"pytorch",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | transformers | {} | MariamD/t5-base-QA-legal_data | null | [
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MariamD/t5-base-qa-legal | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mariana2kkk/Mariana | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MarianaSahagun/testmodel | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mariellll/Mon | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Marina/wav2vec2-base-timit-demo-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mario209/DialoGPT-small-RickandMorty | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MarioPenguin/amazon_beto | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MarioPenguin/bert-base-cased-english | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-model-english
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1408
- Train Sparse Categorical Accuracy: 0.9512
- Validation Loss: nan
- Validation Sparse Categorical Accuracy: 0.0
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
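The serialized optimizer config above corresponds to a plain Keras Adam instance; as a sketch:
```python
import tensorflow as tf

# Sketch: the optimizer dict above, reconstructed as a Keras object.
# All values are copied from the config; anything else keeps its default.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=5e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```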
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.2775 | 0.8887 | nan | 0.0 | 0 |
| 0.1702 | 0.9390 | nan | 0.0 | 1 |
| 0.1300 | 0.9555 | nan | 0.0 | 2 |
| 0.1346 | 0.9544 | nan | 0.0 | 3 |
| 0.1408 | 0.9512 | nan | 0.0 | 4 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "bert-model-english", "results": []}]} | MarioPenguin/bert-model-english | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bert-model-english1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0274
- Train Accuracy: 0.9914
- Validation Loss: 0.3493
- Validation Accuracy: 0.9303
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0366 | 0.9885 | 0.3013 | 0.9299 | 0 |
| 0.0261 | 0.9912 | 0.3445 | 0.9351 | 1 |
| 0.0274 | 0.9914 | 0.3493 | 0.9303 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
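As the card has no usage section, here is a minimal sketch of inference with the TensorFlow checkpoint; the input sentence is invented and the label set is undocumented:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

# Sketch: inference with the fine-tuned TF checkpoint.
tokenizer = AutoTokenizer.from_pretrained("MarioPenguin/bert-model-english1")
model = TFAutoModelForSequenceClassification.from_pretrained("MarioPenguin/bert-model-english1")

# Placeholder input; the card does not document the class labels.
inputs = tokenizer("This product works exactly as described.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))
```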
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "bert-model-english1", "results": []}]} | MarioPenguin/bert-model-english1 | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | MarioPenguin/beto_amazon | null | [
"transformers",
"tf",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# beto_amazon_posneu
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1277
- Train Accuracy: 0.9550
- Validation Loss: 0.3439
- Validation Accuracy: 0.8905
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3195 | 0.8712 | 0.3454 | 0.8580 | 0 |
| 0.1774 | 0.9358 | 0.3258 | 0.8802 | 1 |
| 0.1277 | 0.9550 | 0.3439 | 0.8905 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Datasets 1.18.3
- Tokenizers 0.11.0
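A hedged usage sketch with the high-level pipeline API; the example review is invented, the label names are undocumented, and TensorFlow must be installed since the repository ships TF weights:
```python
from transformers import pipeline

# Sketch: Spanish text classification with the fine-tuned BETO checkpoint.
classifier = pipeline("text-classification", model="MarioPenguin/beto_amazon_posneu")

# Invented Amazon-style review in Spanish.
print(classifier("El producto llegó rápido y funciona muy bien."))
```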
| {"tags": ["generated_from_keras_callback"], "model-index": [{"name": "beto_amazon_posneu", "results": []}]} | MarioPenguin/beto_amazon_posneu | null | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MarioPenguin/finetuned-model-english | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-model
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8601
- Accuracy: 0.6117
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 84 | 0.8663 | 0.5914 |
| No log | 2.0 | 168 | 0.8601 | 0.6117 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
| {"tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "finetuned-model", "results": []}]} | MarioPenguin/finetuned-model | null | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# roberta-model-english
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1140
- Train Accuracy: 0.9596
- Validation Loss: 0.2166
- Validation Accuracy: 0.9301
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2922 | 0.8804 | 0.2054 | 0.9162 | 0 |
| 0.1710 | 0.9352 | 0.1879 | 0.9353 | 1 |
| 0.1140 | 0.9596 | 0.2166 | 0.9301 | 2 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.7.0
- Tokenizers 0.11.0
| {"license": "mit", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "roberta-model-english", "results": []}]} | MarioPenguin/roberta-model-english | null | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Marius/bert-base-german-cased-BerlinBert | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Marius/bert-base-german-cased-GermanBert | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Marius/bert-base-german-cased-finetuned-twitterpolde | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | MarkusDressel/cord | null | [
"transformers",
"pytorch",
"layoutlmv2",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Marshall/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Marshall/distilbert-base-uncased-finetuned-squad1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | MarshallCharles/bartlargemnli | null | [
"transformers",
"pytorch",
"bart",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | # albertZero
albertZero is a PyTorch model with a prediction head fine-tuned for SQuAD 2.0.
Based on Hugging Face's albert-base-v2, albertZero employs a novel method to speed up fine-tuning: it re-initializes the weights of the final linear layer in the shared ALBERT transformer block, resulting in a 2-percentage-point improvement during the early epochs of fine-tuning.
## Usage
albertZero can be loaded like this:
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('MarshallHo/albertZero-squad2-base-v2')
model = AutoModel.from_pretrained('MarshallHo/albertZero-squad2-base-v2')
```
or
```python
import torch
from transformers import AlbertModel, AlbertTokenizer, AlbertForQuestionAnswering, AlbertPreTrainedModel

mytokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
# AlbertForQuestionAnsweringAVPool is the author's custom head class (it is not
# part of transformers) and must be defined or imported before this line runs.
model = AlbertForQuestionAnsweringAVPool.from_pretrained('albert-base-v2')
model.load_state_dict(torch.load('albertZero-squad2-base-v2.bin'))
```
## References
The goal of [ALBERT](https://arxiv.org/abs/1909.11942) is to reduce the memory requirement of the groundbreaking
language model [BERT](https://arxiv.org/abs/1810.04805), while providing a similar level of performance. ALBERT mainly uses two methods to reduce the number of parameters: parameter sharing and factorized embeddings.
The field of NLP has undergone major improvements in recent years. The
replacement of recurrent architectures by attention-based models has allowed NLP tasks such as
question-answering to approach human level performance. In order to push the limits further, the
[SQuAD2.0](https://arxiv.org/abs/1806.03822) dataset was created in 2018 with 50,000 additional unanswerable questions, addressing a major weakness of the original version of the dataset.
At the time of writing, near the top of the [SQuAD2.0 leaderboard](https://rajpurkar.github.io/SQuAD-explorer/) is Shanghai Jiao Tong University’s [Retro-Reader](http://arxiv.org/abs/2001.09694).
We have re-implemented their non-ensemble ALBERT model with the SQuAD 2.0 prediction head.
## Acknowledgments
Thanks to the generosity of the team at Hugging Face and all the groups referenced above! | {} | MarshallHo/albertZero-squad2-base-v2 | null | [
"arxiv:1909.11942",
"arxiv:1810.04805",
"arxiv:1806.03822",
"arxiv:2001.09694",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Neo-GPT-Title-Generation-Electric-Car
Title generator based on GPT-Neo 125M, fine-tuned on a dataset of 39k URL titles. All URLs were selected from the top 10 Google results for a list of keywords about "Electric car" and "Electric car for sale".
# Pipeline example
```python
from transformers import GPT2Tokenizer, AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car')
tokenizer = GPT2Tokenizer.from_pretrained('Martian/Neo-GPT-Title-Generation-Electric-Car',
                                          bos_token='<|startoftext|>',
                                          eos_token='<|endoftext|>', pad_token='<|pad|>')

prompt = "<|startoftext|> Electric car"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Sample 20 candidate titles (skip_special_tokens belongs to decode(), not
# generate(), so it is applied below).
gen_tokens = model.generate(input_ids, do_sample=True, top_k=100, min_length=30,
                            max_length=150, top_p=0.90, num_return_sequences=20)

list_title_gen = [tokenizer.decode(sample_output, skip_special_tokens=True)
                  for sample_output in gen_tokens]

# Keep only the part of each title before common separators.
for sep in (' | ', ' - ', ' — '):
    list_title_gen = [title.split(sep)[0] for title in list_title_gen]

# Normalize whitespace and encoding artifacts left over from the training data.
list_title_gen = [title.replace('�', ' ').replace('\r', ' ').replace('\n', ' ')
                       .replace('\t', ' ').replace('\xa0', '')
                  for title in list_title_gen]

# Blank out generations that are just the bare prompt (the BOS token has
# already been stripped by skip_special_tokens above).
list_title_gen = [title if title.strip() != 'Electric car' else ''
                  for title in list_title_gen]

for title in list_title_gen:
    print(title)
```
# Todo
- Improve the quality of the training sample
- Add more data
| {"language": ["en"], "widget": [{"text": "Tesla range"}, {"text": "Nissan Leaf is"}, {"text": "Tesla is"}, {"text": "The best electric car"}]} | Martian/Neo-GPT-Title-Generation-Electric-Car | null | [
"transformers",
"pytorch",
"gpt_neo",
"text-generation",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
fill-mask | transformers | {} | Martinlabla/bert_cn_finetunning | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {} | Martinlabla/bert_finetuning_test_mine_result | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers | # wav2vec2-large-xlsr-53-breton
The model can be used directly (without a language model) as follows:
```python
import re
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
lang = "br"
test_dataset = load_dataset("common_voice", lang, split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("Marxav/wav2vec2-large-xlsr-53-breton")
model = Wav2Vec2ForCTC.from_pretrained("Marxav/wav2vec2-large-xlsr-53-breton")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    batch["sentence"] = re.sub("ʼ", "'", batch["sentence"])
    batch["sentence"] = re.sub("’", "'", batch["sentence"])
    batch["sentence"] = re.sub('‘', "'", batch["sentence"])
    return batch
nb_samples = 2
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:nb_samples], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:nb_samples])
```
The above code leads to the following prediction for the first two samples:
* Prediction: ["neller ket dont a-benn eus netra la vez ser merc'hed evel sich", 'an eil hag egile']
* Reference: ["N'haller ket dont a-benn eus netra pa vezer nec'het evel-se.", 'An eil hag egile.']
The model can be evaluated as follows on the Breton test data of Common Voice.
```python
import re
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
lang = 'br'
test_dataset = load_dataset("common_voice", lang, split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained('Marxav/wav2vec2-large-xlsr-53-breton')
model = Wav2Vec2ForCTC.from_pretrained('Marxav/wav2vec2-large-xlsr-53-breton')
model.to("cuda")
chars_to_ignore_regex = '[\\,\,\?\.\!\;\:\"\“\%\”\�\(\)\/\«\»\½\…]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() + " "
    batch["sentence"] = re.sub("ʼ", "'", batch["sentence"])
    batch["sentence"] = re.sub("’", "'", batch["sentence"])
    batch["sentence"] = re.sub('‘', "'", batch["sentence"])
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```
**Test Result**: 43.43%
## Training
The Common Voice `train`, `validation` datasets were used for training. | {"language": "br", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech", "xlsr-fine-tuning-week"], "datasets": ["common_voice"], "model-index": [{"name": "XLSR Wav2Vec2 Breton by Marxav", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice br", "type": "common_voice", "args": "br"}, "metrics": [{"type": "wer", "value": 43.43, "name": "Test WER"}]}]}]} | Marxav/wav2vec2-large-xlsr-53-breton | null | [
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"xlsr-fine-tuning-week",
"br",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# GPT2 - RUS | {"language": "ru", "tags": ["text-generation"]} | Mary222/GPT2_RU_GAME | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | Mary222/GPT2_Vit | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# GPT2 - RUS | {"language": "ru", "tags": ["text-generation"]} | Mary222/GPT2_standard | null | [
"transformers",
"pytorch",
"gpt2",
"feature-extraction",
"text-generation",
"ru",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# GPT2 - RUS | {"language": "ru", "tags": ["text-generation"]} | Mary222/MADE_AI_Dungeon_model_RUS | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers | {} | Mary222/Models_testing_ai | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# GPT2 - RUS | {"language": "ru", "tags": ["text-generation"]} | Mary222/SBERBANK_RUS | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"ru",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# LSTM
| {"language": "ru", "license": "apache-2.0", "tags": ["text-generation"], "datasets": ["bookcorpus", "wikipedia"]} | Mary222/made-ai-dungeon | null | [
"transformers",
"text-generation",
"ru",
"dataset:bookcorpus",
"dataset:wikipedia",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MaryKKeller/model_name | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/Helsinki-NLPopus-mt-en-ro-finetuned-en-to-ro | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-ar-en-finetuned-13-9-ar-to-en | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-ar-en-finetuned-27-9-ar-to-en | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-ar-en-finetuned-3-1-ar-to-en | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-ar-en-finetuned-31-12-ar-to-en | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetuned-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on the opus_wikipedia dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
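A minimal, untested sketch of translating with this checkpoint; the Arabic input sentence is a placeholder:
```python
from transformers import pipeline

# Sketch: Arabic -> English translation with the fine-tuned Marian model.
translator = pipeline("translation", model="MaryaAI/opus-mt-ar-en-finetuned-ar-to-en")

# Placeholder input ("Good morning" in Arabic).
print(translator("صباح الخير")[0]["translation_text"])
```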
| {"tags": ["generated_from_trainer"], "datasets": ["opus_wikipedia"]} | MaryaAI/opus-mt-ar-en-finetuned-ar-to-en | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:opus_wikipedia",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MaryaAI/opus-mt-ar-en-finetuned-opus-wiki-15-9-ar-to-en | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-ar-en-finetunedTanzil-ar-to-en | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-ar-en-finetunedTanzil-v4-ar-to-en | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# opus-mt-ar-en-finetunedTanzil-v5-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8101
- Validation Loss: 0.9477
- Train Bleu: 9.3241
- Train Gen Len: 88.73
- Train Rouge1: 56.4906
- Train Rouge2: 34.2668
- Train Rougel: 53.2279
- Train Rougelsum: 53.7836
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Bleu | Train Gen Len | Train Rouge1 | Train Rouge2 | Train Rougel | Train Rougelsum | Epoch |
|:----------:|:---------------:|:----------:|:-------------:|:------------:|:------------:|:------------:|:---------------:|:-----:|
| 0.8735 | 0.9809 | 11.0863 | 78.68 | 56.4557 | 33.3673 | 53.4828 | 54.1197 | 0 |
| 0.8408 | 0.9647 | 9.8543 | 88.955 | 57.3797 | 34.3539 | 53.8783 | 54.3714 | 1 |
| 0.8101 | 0.9477 | 9.3241 | 88.73 | 56.4906 | 34.2668 | 53.2279 | 53.7836 | 2 |
### Framework versions
- Transformers 4.17.0.dev0
- TensorFlow 2.7.0
- Datasets 1.18.4.dev0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "opus-mt-ar-en-finetunedTanzil-v5-ar-to-en", "results": []}]} | MaryaAI/opus-mt-ar-en-finetunedTanzil-v5-ar-to-en | null | [
"transformers",
"tf",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MaryaAI/opus-mt-en-ROMANCE-finetuned-en-to-ro | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-en-ar-finetuned-13-9-ar-to-en | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-finetuned-Math-13-10-en-to-ar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.13.0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["syssr_en_ar"], "model-index": [{"name": "opus-mt-en-ar-finetuned-Math-13-10-en-to-ar", "results": []}]} | MaryaAI/opus-mt-en-ar-finetuned-Math-13-10-en-to-ar | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:syssr_en_ar",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MaryaAI/opus-mt-en-ar-finetuned-STEM-Colab-en-to-ar | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the syssr_en_ar dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2046
- Bleu: 7.9946
- Gen Len: 20.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log | 1.0 | 1 | 1.2038 | 7.9946 | 20.0 |
| No log | 2.0 | 2 | 1.2038 | 7.9946 | 20.0 |
| No log | 3.0 | 3 | 1.2038 | 7.9946 | 20.0 |
| No log | 4.0 | 4 | 1.2036 | 7.9946 | 20.0 |
| No log | 5.0 | 5 | 1.2046 | 7.9946 | 20.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.12.1
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["syssr_en_ar"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "syssr_en_ar", "type": "syssr_en_ar", "args": "default"}, "metrics": [{"type": "bleu", "value": 7.9946, "name": "Bleu"}]}]}]} | MaryaAI/opus-mt-en-ar-finetuned-dummyData-10-10-ar-to-en | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:syssr_en_ar",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MaryaAI/opus-mt-en-ar-finetunedSTEM-en-to-ar | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-en-ar-finetunedSTEM-v1-en-to-ar | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-en-ar-finetunedSTEM-v2-en-to-ar | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaryaAI/opus-mt-en-ar-finetunedSTEM-v3-en-to-ar | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.0589
- Validation Loss: 5.3227
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.0589 | 5.3227 | 0 |
### Framework versions
- Transformers 4.17.0.dev0
- TensorFlow 2.7.0
- Datasets 1.18.3.dev0
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_keras_callback"], "model-index": [{"name": "MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar", "results": []}]} | MaryaAI/opus-mt-en-ar-finetunedSTEM-v4-en-to-ar | null | [
"transformers",
"tf",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text2text-generation | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opus-mt-en-ro-finetuned-en-to-ro
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ro](https://huggingface.co/Helsinki-NLP/opus-mt-en-ro) on the wmt16 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2886
- Bleu: 28.1599
- Gen Len: 34.1236
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 0.7437 | 1.0 | 38145 | 1.2886 | 28.1599 | 34.1236 |
### Framework versions
- Transformers 4.10.0
- Pytorch 1.9.0+cu102
- Datasets 1.11.0
- Tokenizers 0.10.3
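A minimal usage sketch calling `generate` directly; the English sentence is a placeholder and `max_length=64` is an arbitrary choice:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Sketch: English -> Romanian translation with the fine-tuned checkpoint.
tokenizer = AutoTokenizer.from_pretrained("MaryaAI/opus-mt-en-ro-finetuned-en-to-ro")
model = AutoModelForSeq2SeqLM.from_pretrained("MaryaAI/opus-mt-en-ro-finetuned-en-to-ro")

inputs = tokenizer("The committee approved the new regulation.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```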
| {"tags": ["generated_from_trainer"], "datasets": ["wmt16"], "metrics": ["bleu"], "model-index": [{"name": "opus-mt-en-ro-finetuned-en-to-ro", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "wmt16", "type": "wmt16", "args": "ro-en"}, "metrics": [{"type": "bleu", "value": 28.1599, "name": "Bleu"}]}]}]} | MaryaAI/opus-mt-en-ro-finetuned-en-to-ro | null | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"generated_from_trainer",
"dataset:wmt16",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Matchew/AFX1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Math/Learning | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Matheu/Mathe | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-generation | transformers |
# Rick and Morty DialoGPT Model | {"tags": ["conversational"]} | MathiasVS/DialoGPT-small-RickAndMorty | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers |
# German BERT for News Classification
This is a bert-base-german-cased model fine-tuned for text classification on German news articles.
## Training data
Used the training set from the 10KGNAD dataset (gnad10 on HuggingFace Datasets). | {"language": ["de"], "tags": ["text-classification", "german-news-classification"], "datasets": ["gnad10"], "metrics": ["accuracy", "precision", "recall", "f1"], "model-index": [{"name": "Mathking/bert-base-german-cased-gnad10", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "gnad10", "type": "gnad10", "config": "default", "split": "train"}, "metrics": [{"type": "accuracy", "value": 0.9557598702001082, "name": "Accuracy", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTkxNjAwNTYzYjRjZmQ0M2UxMWQzYzk0YWFjZjRmYzcwNGEyYmRiNDIwNTlmNDNhYjAzNzBmNzU5MTg3MTM1ZSIsInZlcnNpb24iOjF9.1KfABx9YVvR2QiSXwtCBV8ijYGqwiQD3N3i7c1KV2Ke9tQvWA4_HnN7wvCKokESR-zEwIHWfALSveWIgoiSNBg"}, {"type": "f1", "value": 0.9550736462647613, "name": "F1 Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDNkYjU0NzAxNjBlOGQ1MWU2OGE5NWFkOGFlNTYwZGFkNTRiMDcwNDRlYmNiMTUxMzViM2Q4MmUyMjU2ZTQwYyIsInZlcnNpb24iOjF9.E9ysIc4ZYrpOpQTJsmLRN1q8Pg-5pWLlvs8WbTeJy2JYNmpBNblaGyeiHckZ8g8gD3Rqv7W9inpivmHRcI4-BQ"}, {"type": "f1", "value": 0.9557598702001082, "name": "F1 Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWMxNmVjMjYyNTAxYmYwN2YxNjAzOWQ2MDY3OGRhYzE4NWYwYTUyNjRhNmU2M2Y3MzFiYzI2ZTk4YWQ3NGNkNSIsInZlcnNpb24iOjF9.csdfLvORGZJY11TbWzylKfhz53BAncrjNgCDIGtWzK1AtJutkJj-SQo8rEd9o3Z5BKlH3Ta28O3Y7wKoc4PuDQ"}, {"type": "f1", "value": 0.9556789875763837, "name": "F1 Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiY2I1ZmNjMzViMDY1YWMyNzRkNDY0OTY1YTFkZWViN2JiMDlkMjJjNTZmZDFjZDIxZjA0YzI1NThiODUwMDlhZiIsInZlcnNpb24iOjF9.83yH-SfIAeB9Y3XNPcnn8N3g9puooZRgcBfNMeAKNqNM93U1qEE6JjFvhZBO_UU05cgfqnPp7Pt6h-JQcmdwBA"}, {"type": "precision", "value": 0.953834169384936, "name": "Precision Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjQ4YjA2MTZlMmYxMTA4ZTM5MDU1NjI3ZWE4YTBiZDBhMDUwN2FiODZkNjM5OWNiNGU2NjU5ZDE0OTUyODZmNyIsInZlcnNpb24iOjF9.sWcghxM9DeaaldnXR5sLz8KUHVhdjJ8GY_c4f-kZ0-0BDzf4CYURUVziWnlrRTjlUH-hVyfdKd1ufHvLotRgCg"}, {"type": "precision", "value": 0.9557598702001082, "name": "Precision Micro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWIzZmNlZTcxNzhhMzZhNWQ1ZWI4YzZjMDYyOTMwY2Q5N2EwMzFhMzE4OTFkZjg1NTIyYjVkMGNjZDYwZmQ2YSIsInZlcnNpb24iOjF9.rQ7ZIKeP25hLfHaYdPqX-VZCHoL-YohqGV9NZ-TAIHvNQbj0lPpX_nS89cJ1C0tSoHCeP14lIOWNncRJzQOOCA"}, {"type": "precision", "value": 0.9558822798145145, "name": "Precision Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDQzOTMxMGQ4YTI5MDUzNjdhNzdjY2QzNGVlNzUyODE4ZTI1MTY4NTkxZDVhMTBjZjhhMjlmNzRiNjEyOTk3NiIsInZlcnNpb24iOjF9.DWBZXL1mP7oNYQJKCORItDvkZm-l7TcIETNjdeVyS0BnxoEbqEE22OOJwnGLAk-wHtfx7jEKAA7ijQ1qF7cfAg"}, {"type": "recall", "value": 0.956651983810566, "name": "Recall Macro", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTFhYTUyZWQ0N2VhOWQxMjY0MGM1ZjExOGE4NDQ5ODMzMmQ5YThkZTYzZjg0YmUwMDhlZDllMDk3MzY2ZWUzZSIsInZlcnNpb24iOjF9.H7UhmKtJ_5FZOQmZP-wPTrHHde-XxtMAj3kluHz6-8P1KOwJkxk24Lu7vTwHf3564XtnJC8eW2C5uyWDTpcgBg"}, {"type": "recall", "value": 0.9557598702001082, "name": "Recall Micro", "verified": true, "verifyToken": 
"eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGY1MWZkOWYzNjg1NGU5YmFmODY2MDNjYWQ3OTUwNTgzMWRlZGUwNzU5NDY2NzFjZTMxOTBiMWVhZWIyNDYzMCIsInZlcnNpb24iOjF9.oKQ0zRYEs-sloah-BJvBKX5SFqWt8UX-0jCi3ldaLwNVJjM-rcdvsERyoYQ-QTLPKsZp4nko3-ic-BDCwGp9Bw"}, {"type": "recall", "value": 0.9557598702001082, "name": "Recall Weighted", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDlhMmIwOTBkOTIzOTlkZjNiMzlkMmE5NzQ3MzY5NTUxODQyMzY1OTJjNWY4NjI0N2NjYmY5NjkwZjU0MTA1YyIsInZlcnNpb24iOjF9.4FExU6skNNcvIrToS3MR04Q7ho7_PITTqPk8WMdOggaVvnwj8ujxcXyJMSRioQ1ttVlpg_oGismsSD9zttYkBg"}, {"type": "loss", "value": 0.17337004840373993, "name": "loss", "verified": true, "verifyToken": "eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzVmMmQ5OGE0OTU3MTg0NDg4YzhlODU1NWUyODM0NzFjODM3MTY5MWI2OTAyMzU5OTQ2YTljZTJkN2JkYTcyNSIsInZlcnNpb24iOjF9.jeYTrX35vtswkWi8ROqynY_W4rHfxonic74PviTNAKJzTF7tUCI2a9IBavXvSQhMfGv0NEkZzX8N8o4hQTvWDw"}]}]}]} | laiking/bert-base-german-cased-gnad10 | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"german-news-classification",
"de",
"dataset:gnad10",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MatsUy/wav2vec2-common_voice-nl-demo-eval | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
automatic-speech-recognition | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-common_voice-nl-demo
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the COMMON_VOICE - NL dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3523
- Wer: 0.2046
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0536 | 1.12 | 500 | 0.5349 | 0.4338 |
| 0.2543 | 2.24 | 1000 | 0.3859 | 0.3029 |
| 0.1472 | 3.36 | 1500 | 0.3471 | 0.2818 |
| 0.1088 | 4.47 | 2000 | 0.3489 | 0.2731 |
| 0.0855 | 5.59 | 2500 | 0.3582 | 0.2558 |
| 0.0721 | 6.71 | 3000 | 0.3457 | 0.2471 |
| 0.0653 | 7.83 | 3500 | 0.3299 | 0.2357 |
| 0.0527 | 8.95 | 4000 | 0.3440 | 0.2334 |
| 0.0444 | 10.07 | 4500 | 0.3417 | 0.2289 |
| 0.0404 | 11.19 | 5000 | 0.3691 | 0.2204 |
| 0.0345 | 12.3 | 5500 | 0.3453 | 0.2102 |
| 0.0288 | 13.42 | 6000 | 0.3634 | 0.2089 |
| 0.027 | 14.54 | 6500 | 0.3532 | 0.2044 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
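A minimal transcription sketch; the audio path is a placeholder, and the clip is resampled to the 16 kHz rate the model expects:
```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Sketch: transcribe one Dutch audio file with the fine-tuned checkpoint.
processor = Wav2Vec2Processor.from_pretrained("MatsUy/wav2vec2-common_voice-nl-demo")
model = Wav2Vec2ForCTC.from_pretrained("MatsUy/wav2vec2-common_voice-nl-demo")

# Placeholder path; resample to 16 kHz before feature extraction.
speech, sr = torchaudio.load("sample.wav")
speech = torchaudio.transforms.Resample(sr, 16_000)(speech).squeeze()

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```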
| {"language": ["nl"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-common_voice-nl-demo", "results": []}]} | MatsUy/wav2vec2-common_voice-nl-demo | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"nl",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | MattFlynn11/aihrchatbot | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 4
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1243
- Precision: 0.5220
- Recall: 0.6137
- F1: 0.5641
- Accuracy: 0.9630
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 134 | 0.1357 | 0.4549 | 0.5521 | 0.4988 | 0.9574 |
| No log | 2.0 | 268 | 0.1243 | 0.5220 | 0.6137 | 0.5641 | 0.9630 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "4", "results": []}]} | Matthijsvanhof/4 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-NER
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1078
- Precision: 0.6129
- Recall: 0.6639
- F1: 0.6374
- Accuracy: 0.9688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 267 | 0.1131 | 0.6090 | 0.6264 | 0.6176 | 0.9678 |
| 0.1495 | 2.0 | 534 | 0.1078 | 0.6129 | 0.6639 | 0.6374 | 0.9688 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
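A hedged usage sketch with the token-classification pipeline; the example sentence is invented and the entity label set is undocumented here:
```python
from transformers import pipeline

# Sketch: Dutch NER with the fine-tuned checkpoint; label names are undocumented.
ner = pipeline("token-classification",
               model="Matthijsvanhof/bert-base-dutch-cased-finetuned-NER",
               aggregation_strategy="simple")

# Invented Dutch sentence.
print(ner("Jan de Vries werkt sinds 2019 bij Philips in Eindhoven."))
```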
| {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-dutch-cased-finetuned-NER", "results": []}]} | Matthijsvanhof/bert-base-dutch-cased-finetuned-NER | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-NER8
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1482
- Precision: 0.4716
- Recall: 0.4359
- F1: 0.4530
- Accuracy: 0.9569
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 68 | 0.1705 | 0.3582 | 0.3488 | 0.3535 | 0.9475 |
| No log | 2.0 | 136 | 0.1482 | 0.4716 | 0.4359 | 0.4530 | 0.9569 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
| {"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-dutch-cased-finetuned-NER8", "results": []}]} | Matthijsvanhof/bert-base-dutch-cased-finetuned-NER8 | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-dutch-cased-finetuned-mBERT
This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0898
- Precision: 0.7255
- Recall: 0.7255
- F1: 0.7255
- Accuracy: 0.9758
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.1603 | 1.0 | 533 | 0.0928 | 0.6896 | 0.6962 | 0.6929 | 0.9742 |
| 0.0832 | 2.0 | 1066 | 0.0898 | 0.7255 | 0.7255 | 0.7255 | 0.9758 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Tokenizers 0.10.3
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-dutch-cased-finetuned-mBERT", "results": []}]} | Matthijsvanhof/bert-base-dutch-cased-finetuned-mBERT | null | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Matthijsvanhof/bert-base-dutch-cased-mBERT | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mattia/hotdog-recognition | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
fill-mask | transformers | {"license": "apache-2.0"} | Maunish/ecomm-sbert | null | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | {} | Maunish/ext_sentbert-5 | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text-classification | transformers | {} | Maunish/kgrouping-roberta-large | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mavar/rut5-base-quiz | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mavcil/KKTC | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Max-Harper/test-zero-shot | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MaxPlay066/bjbbj | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
feature-extraction | transformers | This repository shares a smaller version of bert-base-multilingual-uncased that keeps only Ukrainian, English, and Russian tokens in the vocabulary.
| Model | Num parameters | Size |
| ----------------------------------------- | -------------- | --------- |
| bert-base-multilingual-uncased | 167 million | ~650 MB |
| MaxVortman/bert-base-ukr-eng-rus-uncased | 110 million | ~423 MB | | {} | mshamrai/bert-base-ukr-eng-rus-uncased | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Rick and Morty DialoGPT Model | {"tags": ["conversational"]} | MaxW0748/DialoGPT-small-Rick | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-classification | transformers | {} | Maxinstellar/outputs | null | [
"transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
text2text-generation | transformers | hello
| {} | Maya/essai1 | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
text-generation | transformers |
# Harry Potter DialoGPT Model | {"tags": ["conversational"]} | MayankGupta/DialoGPT-small-harrypotter | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
null | null | {} | Mayukh/vision | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mayukojo/Travel_agent | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mayukojo/Travel_chatbot | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | McKenzie/bert-base-uncased | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Mcjeaze/Jeaze | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | MedSaa/distilbert-base-uncased-finetuned-ner | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Medha/distilbert-base-uncased-finetuned-cola | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Media1129/keyword-tag-model-10000-9-16_more_ingredient | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Media1129/keyword-tag-model-12000-9-16_more_ingredient | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
null | null | {} | Media1129/keyword-tag-model-14000-9-16_more_ingredient | null | [
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Media1129/keyword-tag-model-2000-9-16 | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Media1129/keyword-tag-model-2000-9-16_more_ingredient | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Media1129/keyword-tag-model-2000 | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Media1129/keyword-tag-model-3000-v2 | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |
|
token-classification | transformers | {} | Media1129/keyword-tag-model-4000-9-16 | null | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:04+00:00 |