pipeline_tag | library_name | text | metadata | id | last_modified | tags | sha | created_at
---|---|---|---|---|---|---|---|---|
feature-extraction | transformers | {} | Taekyoon/test_model | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | Taekyoon/v0.41_uniclova | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | Taekyoon/v0.4_uniclova | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
token-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-pos
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3009
- Precision: 0.9277
- Recall: 0.9329
- F1: 0.9303
- Accuracy: 0.9332
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2791 | 1.0 | 1756 | 0.3125 | 0.9212 | 0.9263 | 0.9237 | 0.9272 |
| 0.1853 | 2.0 | 3512 | 0.3038 | 0.9241 | 0.9309 | 0.9275 | 0.9307 |
| 0.1501 | 3.0 | 5268 | 0.3009 | 0.9277 | 0.9329 | 0.9303 | 0.9332 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
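### How to use
A minimal usage sketch, assuming the checkpoint loads as a standard Transformers token-classification model; the example sentence and the `aggregation_strategy` setting are illustrative, not part of the original training setup:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a token-classification pipeline
# and tag a sentence with the conll2003 label set.
tagger = pipeline(
    "token-classification",
    model="Tahsin/BERT-finetuned-conll2003-POS",
    aggregation_strategy="simple",  # merge word-piece tokens back into words
)

print(tagger("George Washington went to Washington."))
```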
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-pos", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9276736387541917, "name": "Precision"}, {"type": "recall", "value": 0.9329402916272412, "name": "Recall"}, {"type": "f1", "value": 0.9302995112982049, "name": "F1"}, {"type": "accuracy", "value": 0.933154765408842, "name": "Accuracy"}]}]}]} | Tahsin/BERT-finetuned-conll2003-POS | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1561
- Accuracy: 0.9285
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 250 | 0.1635 | 0.9295 |
| 0.111 | 2.0 | 500 | 0.1515 | 0.936 |
| 0.111 | 3.0 | 750 | 0.1561 | 0.9285 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
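### How to use
A minimal usage sketch, assuming the checkpoint loads as a standard Transformers text-classification model; the example sentence is illustrative:
```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline;
# the returned label corresponds to one of the emotion dataset classes.
classifier = pipeline(
    "text-classification",
    model="Tahsin/distilbert-base-uncased-finetuned-emotion",
)

print(classifier("I am so happy to see you again!"))
```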
| {"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["emotion"], "metrics": ["accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion", "args": "default"}, "metrics": [{"type": "accuracy", "value": 0.9285, "name": "Accuracy"}]}]}]} | Tahsin/distilbert-base-uncased-finetuned-emotion | null | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | Tahsin-Mayeesha/bangla-fake-news-mbert | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | transformers |
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the OPENSLR_SLR53 - bengali dataset.
It achieves the following results on the evaluation set.
Without language model:
- WER: 0.3110
- CER: 0.072
With a 5-gram language model trained on the [indic-text](https://huggingface.co/datasets/Harveenchadha/indic-text/tree/main) dataset:
- WER: 0.17776
- CER: 0.04394
Note: 10% of the total 218703 samples were used for evaluation, giving an evaluation set of 21871 examples. Training was stopped after 30k steps. Output predictions are available under the files section.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
Note: The training and evaluation scripts were modified from https://huggingface.co/chmanoj/xls-r-300m-te and https://github.com/huggingface/transformers/tree/master/examples/research_projects/robust-speech-event.
Bengali speech data was not available in the Common Voice or LibriSpeech multilingual datasets, so OpenSLR53 has been used.
Note 2: A minimum audio duration of 0.1s was used to filter the training data, which excluded perhaps 10-20 samples.
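### How to use
A minimal usage sketch for greedy (no language model) transcription, assuming the checkpoint works with the standard Transformers ASR pipeline; the audio file name is a placeholder for a 16 kHz Bengali recording:
```python
from transformers import pipeline

# Greedy CTC decoding, without the 5-gram language model.
asr = pipeline(
    "automatic-speech-recognition",
    model="Tahsin-Mayeesha/wav2vec2-bn-300m",
)

print(asr("bengali_sample_16khz.wav")["text"])
```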
# Citation
@misc {tahsin_mayeesha_2023,
author = { {Tahsin Mayeesha} },
title = { wav2vec2-bn-300m (Revision e10defc) },
year = 2023,
url = { https://huggingface.co/Tahsin-Mayeesha/wav2vec2-bn-300m },
doi = { 10.57967/hf/0939 },
publisher = { Hugging Face }
} | {"language": ["bn"], "license": "apache-2.0", "tags": ["automatic-speech-recognition", "hf-asr-leaderboard", "openslr_SLR53", "robust-speech-event"], "datasets": ["openslr", "SLR53", "Harveenchadha/indic-text"], "metrics": ["wer", "cer"], "model-index": [{"name": "Tahsin-Mayeesha/wav2vec2-bn-300m", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Open SLR", "type": "openslr", "args": "SLR66"}, "metrics": [{"type": "wer", "value": 0.31104373941386626, "name": "Test WER"}, {"type": "cer", "value": 0.07263099973420006, "name": "Test CER"}, {"type": "wer", "value": 0.17776164652632478, "name": "Test WER with lm"}, {"type": "cer", "value": 0.04394092712884769, "name": "Test CER with lm"}]}]}]} | Tahsin-Mayeesha/wav2vec2-bn-300m | null | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"openslr_SLR53",
"robust-speech-event",
"bn",
"dataset:openslr",
"dataset:SLR53",
"dataset:Harveenchadha/indic-text",
"doi:10.57967/hf/0939",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Tais/wav2vec2-large-xls-r-300m-tr-colab | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Tais/wav2vec2-large-xlsr-open-brazilian-portuguese | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers | {"license": "apache-2.0"} | TajMahaladeen/pokemon_gptj | null | [
"transformers",
"pytorch",
"gptj",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Takao/test | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
automatic-speech-recognition | espnet |
# Estonian Espnet2 ASR model
## Model description
This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech.
## Intended uses & limitations
This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
## How to use
```python
from espnet2.bin.asr_inference import Speech2Text
model = Speech2Text.from_pretrained(
"TalTechNLP/espnet2_estonian",
lm_weight=0.6, ctc_weight=0.4, beam_size=60
)
# read a sound file with 16k sample rate
import soundfile
speech, rate = soundfile.read("speech.wav")
assert rate == 16000
text, *_ = model(speech)
print(text[0])
```
#### Limitations and bias
Since this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech
## Training data
Acoustic training data:
| Type | Amount (h) |
|-----------------------|:------:|
| Broadcast speech | 591 |
| Spontaneous speech | 53 |
| Elderly speech corpus | 53 |
| Talks, lectures | 49 |
| Parliament speeches | 31 |
| *Total* | *761* |
Language model training data:
* Estonian National Corpus 2019
* OpenSubtitles
* Speech transcripts
## Training procedure
Standard EspNet2 Conformer recipe.
## Evaluation results
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/aktuaalne2021.testset|2864|56575|93.1|4.5|2.4|2.0|8.9|63.4|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/jutusaated.devset|273|4677|93.9|3.6|2.4|1.2|7.3|46.5|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/jutusaated.testset|818|11093|94.7|2.7|2.5|0.9|6.2|45.0|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/www-trans.devset|1207|13865|82.3|8.5|9.3|3.4|21.2|74.1|
|decode_asr_lm_lm_large_valid.loss.ave_5best_asr_model_valid.acc.ave/www-trans.testset|1648|22707|86.4|7.6|6.0|2.5|16.1|75.7|
### BibTeX entry and citation info
#### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
``` | {"language": "et", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"]} | TalTechNLP/espnet2_estonian | null | [
"espnet",
"audio",
"automatic-speech-recognition",
"et",
"license:cc-by-4.0",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
audio-classification | speechbrain |
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model (CE)
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses
more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training.
We observed that this improved the performance of extracted utterance embeddings for downstream tasks.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn-ce", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[-2.8646e+01, -3.0346e+01, -2.0748e+01, -2.9562e+01, -2.2187e+01,
-3.2668e+01, -3.6677e+01, -3.3573e+01, -3.2545e+01, -2.4365e+01,
-2.4688e+01, -3.1171e+01, -2.7743e+01, -2.9918e+01, -2.4770e+01,
-3.2250e+01, -2.4727e+01, -2.6087e+01, -2.1870e+01, -3.2821e+01,
-2.2128e+01, -2.2822e+01, -3.0888e+01, -3.3564e+01, -2.9906e+01,
-2.2392e+01, -2.5573e+01, -2.6443e+01, -3.2429e+01, -3.2652e+01,
-3.0030e+01, -2.4607e+01, -2.2967e+01, -2.4396e+01, -2.8578e+01,
-2.5153e+01, -2.8475e+01, -2.6409e+01, -2.5230e+01, -2.7957e+01,
-2.6298e+01, -2.3609e+01, -2.5863e+01, -2.8225e+01, -2.7225e+01,
-3.0486e+01, -2.1185e+01, -2.7938e+01, -3.3155e+01, -1.9076e+01,
-2.9181e+01, -2.2160e+01, -1.8352e+01, -2.5866e+01, -3.3636e+01,
-4.2016e+00, -3.1581e+01, -3.1894e+01, -2.7834e+01, -2.5429e+01,
-3.2235e+01, -3.2280e+01, -2.8786e+01, -2.3366e+01, -2.6047e+01,
-2.2075e+01, -2.3770e+01, -2.2518e+01, -2.8101e+01, -2.5745e+01,
-2.6441e+01, -2.9822e+01, -2.7109e+01, -3.0225e+01, -2.4566e+01,
-2.9268e+01, -2.7651e+01, -3.4221e+01, -2.9026e+01, -2.6009e+01,
-3.1968e+01, -3.1747e+01, -2.8156e+01, -2.9025e+01, -2.7756e+01,
-2.8052e+01, -2.9341e+01, -2.8806e+01, -2.1636e+01, -2.3992e+01,
-2.3794e+01, -3.3743e+01, -2.8332e+01, -2.7465e+01, -1.5085e-02,
-2.9094e+01, -2.1444e+01, -2.9780e+01, -3.6046e+01, -3.7401e+01,
-3.0888e+01, -3.3172e+01, -1.8931e+01, -2.2679e+01, -3.0225e+01,
-2.4995e+01, -2.1028e+01]]), tensor([-0.0151]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as log-likelihoods that
# the given utterance belongs to the given language (i.e., the larger the better)
# The linear-scale likelihood can be retrieved using the following:
print(prediction[1].exp())
tensor([0.9850])
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Its accuracy on smaller languages is probably quite limited
- It probably works worse on female speech than on male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- It probably doesn't work well on children's speech or on speech from persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 6.7% on the VoxLingua107 development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
| {"language": "multilingual", "license": "apache-2.0", "tags": ["audio-classification", "speechbrain", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107"], "datasets": ["VoxLingua107"], "metrics": ["Accuracy"], "widget": [{"example_title": "English Sample", "src": "https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac"}]} | TalTechNLP/voxlingua107-epaca-tdnn-ce | null | [
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
audio-classification | speechbrain |
# VoxLingua107 ECAPA-TDNN Spoken Language Identification Model
## Model description
This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain.
The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition.
The model can classify a speech utterance according to the language spoken.
It covers 107 different languages (
Abkhazian,
Afrikaans,
Amharic,
Arabic,
Assamese,
Azerbaijani,
Bashkir,
Belarusian,
Bulgarian,
Bengali,
Tibetan,
Breton,
Bosnian,
Catalan,
Cebuano,
Czech,
Welsh,
Danish,
German,
Greek,
English,
Esperanto,
Spanish,
Estonian,
Basque,
Persian,
Finnish,
Faroese,
French,
Galician,
Guarani,
Gujarati,
Manx,
Hausa,
Hawaiian,
Hindi,
Croatian,
Haitian,
Hungarian,
Armenian,
Interlingua,
Indonesian,
Icelandic,
Italian,
Hebrew,
Japanese,
Javanese,
Georgian,
Kazakh,
Central Khmer,
Kannada,
Korean,
Latin,
Luxembourgish,
Lingala,
Lao,
Lithuanian,
Latvian,
Malagasy,
Maori,
Macedonian,
Malayalam,
Mongolian,
Marathi,
Malay,
Maltese,
Burmese,
Nepali,
Dutch,
Norwegian Nynorsk,
Norwegian,
Occitan,
Panjabi,
Polish,
Pushto,
Portuguese,
Romanian,
Russian,
Sanskrit,
Scots,
Sindhi,
Sinhala,
Slovak,
Slovenian,
Shona,
Somali,
Albanian,
Serbian,
Sundanese,
Swedish,
Swahili,
Tamil,
Telugu,
Tajik,
Thai,
Turkmen,
Tagalog,
Turkish,
Tatar,
Ukrainian,
Urdu,
Uzbek,
Vietnamese,
Waray,
Yiddish,
Yoruba,
Mandarin Chinese).
## Intended uses & limitations
The model has two uses:
- use 'as is' for spoken language recognition
- use as an utterance-level feature (embedding) extractor, for creating a dedicated language ID model on your own data
The model is trained on automatically collected YouTube data. For more
information about the dataset, see [here](http://bark.phon.ioc.ee/voxlingua107/).
#### How to use
```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier
language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn", savedir="tmp")
# Download a Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[0.3210, 0.3751, 0.3680, 0.3939, 0.4026, 0.3644, 0.3689, 0.3597, 0.3508,
0.3666, 0.3895, 0.3978, 0.3848, 0.3957, 0.3949, 0.3586, 0.4360, 0.3997,
0.4106, 0.3886, 0.4177, 0.3870, 0.3764, 0.3763, 0.3672, 0.4000, 0.4256,
0.4091, 0.3563, 0.3695, 0.3320, 0.3838, 0.3850, 0.3867, 0.3878, 0.3944,
0.3924, 0.4063, 0.3803, 0.3830, 0.2996, 0.4187, 0.3976, 0.3651, 0.3950,
0.3744, 0.4295, 0.3807, 0.3613, 0.4710, 0.3530, 0.4156, 0.3651, 0.3777,
0.3813, 0.6063, 0.3708, 0.3886, 0.3766, 0.4023, 0.3785, 0.3612, 0.4193,
0.3720, 0.4406, 0.3243, 0.3866, 0.3866, 0.4104, 0.4294, 0.4175, 0.3364,
0.3595, 0.3443, 0.3565, 0.3776, 0.3985, 0.3778, 0.2382, 0.4115, 0.4017,
0.4070, 0.3266, 0.3648, 0.3888, 0.3907, 0.3755, 0.3631, 0.4460, 0.3464,
0.3898, 0.3661, 0.3883, 0.3772, 0.9289, 0.3687, 0.4298, 0.4211, 0.3838,
0.3521, 0.3515, 0.3465, 0.4772, 0.4043, 0.3844, 0.3973, 0.4343]]), tensor([0.9289]), tensor([94]), ['th'])
# The scores in the prediction[0] tensor can be interpreted as cosine scores between
# the languages and the given utterance (i.e., the larger the better)
# The identified language ISO code is given in prediction[3]
print(prediction[3])
['th']
# Alternatively, use the utterance embedding extractor:
emb = language_id.encode_batch(signal)
print(emb.shape)
torch.Size([1, 1, 256])
```
#### Limitations and bias
Since the model is trained on VoxLingua107, it has many limitations and biases, some of which are:
- Its accuracy on smaller languages is probably quite limited
- It probably works worse on female speech than on male speech (because YouTube data includes much more male speech)
- Based on subjective experiments, it doesn't work well on speech with a foreign accent
- It probably doesn't work well on children's speech or on speech from persons with speech disorders
## Training data
The model is trained on [VoxLingua107](http://bark.phon.ioc.ee/voxlingua107/).
VoxLingua107 is a speech dataset for training spoken language identification models.
The dataset consists of short speech segments automatically extracted from YouTube videos and labeled according to the language of the video title and description, with some post-processing steps to filter out false positives.
VoxLingua107 contains data for 107 languages. The total amount of speech in the training set is 6628 hours.
The average amount of data per language is 62 hours. However, the real amount per language varies a lot. There is also a separate development set containing 1609 speech segments from 33 languages, validated by at least two volunteers to really contain the given language.
## Training procedure
We used [SpeechBrain](https://github.com/speechbrain/speechbrain) to train the model.
Training recipe will be published soon.
## Evaluation results
Error rate: 7% on the development dataset
### BibTeX entry and citation info
```bibtex
@inproceedings{valk2021slt,
title={{VoxLingua107}: a Dataset for Spoken Language Recognition},
author={J{\"o}rgen Valk and Tanel Alum{\"a}e},
booktitle={Proc. IEEE SLT Workshop},
year={2021},
}
```
| {"language": "multilingual", "license": "apache-2.0", "tags": ["audio-classification", "speechbrain", "embeddings", "Language", "Identification", "pytorch", "ECAPA-TDNN", "TDNN", "VoxLingua107"], "datasets": ["VoxLingua107"], "metrics": ["Accuracy"], "widget": [{"example_title": "English Sample", "src": "https://cdn-media.huggingface.co/speech_samples/LibriSpeech_61-70968-0000.flac"}]} | TalTechNLP/voxlingua107-epaca-tdnn | null | [
"speechbrain",
"audio-classification",
"embeddings",
"Language",
"Identification",
"pytorch",
"ECAPA-TDNN",
"TDNN",
"VoxLingua107",
"multilingual",
"dataset:VoxLingua107",
"license:apache-2.0",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
automatic-speech-recognition | transformers |
# XLS-R-300m-ET
This is an XLS-R-300M model [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) fine-tuned on around 800 hours of diverse Estonian data.
## Model description
This is a general-purpose Estonian ASR model trained in the Lab of Language Technology at TalTech. It consists only of the CTC-based end-to-end model; no language model is currently provided.
## Intended uses & limitations
This model is intended for general-purpose speech recognition, such as broadcast conversations, interviews, talks, etc.
## How to use
TODO
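A minimal sketch, assuming the checkpoint exposes the standard Transformers Wav2Vec2 CTC interface; the audio file name is a placeholder for a 16 kHz mono Estonian recording:
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("TalTechNLP/xls-r-300m-et")
model = Wav2Vec2ForCTC.from_pretrained("TalTechNLP/xls-r-300m-et")

# Read a 16 kHz mono recording (placeholder file name)
speech, rate = sf.read("speech.wav")
assert rate == 16000

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding (no language model)
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```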
#### Limitations and bias
Since this model was trained on mostly broadcast speech and texts from the web, it might have problems correctly decoding the following:
* Speech containing technical and other domain-specific terms
* Children's speech
* Non-native speech
* Speech recorded under very noisy conditions or with a microphone far from the speaker
* Very spontaneous and overlapping speech
## Training data
Acoustic training data:
| Type | Amount (h) |
|-----------------------|:------:|
| Broadcast speech | 591 |
| Spontaneous speech | 53 |
| Elderly speech corpus | 53 |
| Talks, lectures | 49 |
| Parliament speeches | 31 |
| *Total* | *761* |
## Training procedure
Finetuned using Fairseq.
## Evaluation results
### WER
|Dataset | WER |
|---|---|
| jutusaated.devset | 7.9 |
| jutusaated.testset | 6.1 |
| Common Voice 6.1 | 12.5 |
| Common Voice 8.0 | 13.4 |
| {"language": "et", "license": "cc-by-4.0", "tags": ["audio", "automatic-speech-recognition", "hf-asr-leaderboard"], "model-index": [{"name": "xls-r-300m-et", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice", "type": "common_voice", "args": "et"}, "metrics": [{"type": "wer", "value": 12.520395591222401, "name": "Test WER"}, {"type": "cer", "value": 2.70911524386249, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "et"}, "metrics": [{"type": "wer", "value": 13.38447882323104, "name": "Test WER"}, {"type": "cer", "value": 2.9816686199500255, "name": "Test CER"}]}]}]} | TalTechNLP/xls-r-300m-et | null | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"hf-asr-leaderboard",
"et",
"license:cc-by-4.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Talgatov2001/art | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Talos-1/vit-base-patch16-224-in21k-euroSat | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Tanelo/tanel_first_model | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Tangyaoxiang/distilbert-base-uncased-finetuned-squad | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | transformers |
<h2> GPT2 Model for German Language </h2>
Model Name: Tanhim/gpt2-model-de <br />
language: German or Deutsch <br />
thumbnail: https://huggingface.co/Tanhim/gpt2-model-de <br />
datasets: Ten Thousand German News Articles Dataset <br />
### How to use
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, I
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generation= pipeline('text-generation', model='Tanhim/gpt2-model-de', tokenizer='Tanhim/gpt2-model-de')
>>> set_seed(42)
>>> generation("Hallo, ich bin ein Sprachmodell,", max_length=30, num_return_sequences=5)
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("Tanhim/gpt2-model-de")
model = AutoModelWithLMHead.from_pretrained("Tanhim/gpt2-model-de")
text = "Ersetzen Sie mich durch einen beliebigen Text, den Sie wünschen."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
Citation request:
If you use the model of this repository in your research, please consider citing it as follows:
```bibtex
@misc{GermanTransformer,
author = {Tanhim Islam},
title = {{PyTorch Based Transformer Machine Learning Model for German Text Generation Task}},
howpublished = "\url{https://huggingface.co/Tanhim/gpt2-model-de}",
year = {2021},
note = "[Online; accessed 17-June-2021]"
}
``` | {"language": "de", "license": "gpl", "widget": [{"text": "Hallo, ich bin ein Sprachmodell"}]} | Tanhim/gpt2-model-de | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"de",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
translation | transformers |
<h2> English to German Translation </h2>
Model Name: Tanhim/translation-En2De <br />
language: German or Deutsch <br />
thumbnail: https://huggingface.co/Tanhim/translation-En2De <br />
### How to use
You can use this model directly with a pipeline for machine translation. Since the generation relies on some randomness, I
set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> text_En2De= pipeline('translation', model='Tanhim/translation-En2De', tokenizer='Tanhim/translation-En2De')
>>> set_seed(42)
>>> text_En2De("My name is Karl and I live in Aachen")
```
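Alternatively, the underlying seq2seq model can be loaded directly; a minimal sketch, with illustrative generation settings:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("Tanhim/translation-En2De")
model = AutoModelForSeq2SeqLM.from_pretrained("Tanhim/translation-En2De")

inputs = tokenizer("My name is Karl and I live in Aachen.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```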
### beta version | {"language": "de", "license": "gpl", "tags": ["translation"], "datasets": ["wmt19"], "widget": [{"text": "My name is Karl and I live in Aachen."}]} | Tanhim/translation-En2De | null | [
"transformers",
"pytorch",
"marian",
"text2text-generation",
"translation",
"de",
"dataset:wmt19",
"license:gpl",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Tanisha28/GPT-harrypotter | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-generation | null |
# Hoshiyo Kojima DialoGPT Model | {"tags": ["conversational"]} | Taramiko/DialoGPT-small-hoshiyo_kojima | null | [
"conversational",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-generation | transformers |
# Hoshiyo Kojima DialoGPT Model | {"tags": ["conversational"]} | Taramiko/Hoshiyo_Kojima | null | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text2text-generation | transformers |
# Model Trained Using AutoNLP
- Problem type: Summarization
- Model ID: 21664560
- CO2 Emissions (in grams): 5.680803958729511
## Validation Metrics
- Loss: 1.7488420009613037
- Rouge1: 38.1491
- Rouge2: 18.6257
- RougeL: 26.8448
- RougeLsum: 32.2433
- Gen Len: 49.0
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/Tarang1998/autonlp-pegasus-21664560
``` | {"language": "unk", "tags": "autonlp", "datasets": ["Tarang1998/autonlp-data-pegasus"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 5.680803958729511} | Tarang1998/autonlp-pegasus-21664560 | null | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"autonlp",
"unk",
"dataset:Tarang1998/autonlp-data-pegasus",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Tariq15994/email_classifier | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Tariq15994/spam | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers |
# Model Card for RuBERT for Sentiment Analysis
# Model Details
## Model Description
Sentiment classification for Russian texts.
- **Developed by:** Tatyana Voloshina
- **Shared by [Optional]:** Tatyana Voloshina
- **Model type:** Text Classification
- **Language(s) (NLP):** More information needed
- **License:** More information needed
- **Parent Model:** BERT
- **Resources for more information:**
- [GitHub Repo](https://github.com/T-Sh/Sentiment-Analysis)
# Uses
## Direct Use
This model can be used for the task of text classification.
## Downstream Use [Optional]
More information needed.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
Model trained on [Tatyana/ru_sentiment_dataset](https://huggingface.co/datasets/Tatyana/ru_sentiment_dataset)
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
More information needed
### Metrics
More information needed
## Results
More information needed
# Model Examination
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed.
# Citation
More information needed.
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Tatyana Voloshina in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
The required PyTorch-trained model is provided on [Drive](https://drive.google.com/drive/folders/1EnJBq0dGfpjPxbVjybqaS7PsMaPHLUIl?usp=sharing).
Download model.pth.tar and place it in the folder next to the other files of the model.
```python
!pip install tensorflow-gpu
!pip install deeppavlov
!python -m deeppavlov install squad_bert
!pip install fasttext
!pip install transformers
!python -m deeppavlov install bert_sentence_embedder
from deeppavlov import build_model
model = build_model("path_to_model/rubert_sentiment.json")  # path_to_model is a placeholder for the local model directory
model(["Сегодня хорошая погода", "Я счастлив проводить с тобою время", "Мне нравится эта музыкальная композиция"])
```
</details>
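If the repository also ships standard Transformers weights (the repository tags list a PyTorch BERT text-classification head), a minimal alternative sketch would be the following; whether the served head matches the DeepPavlov checkpoint above is an assumption:
```python
from transformers import pipeline

# Labels: 0 = NEUTRAL, 1 = POSITIVE, 2 = NEGATIVE (see "Labels meaning" above)
sentiment = pipeline(
    "text-classification",
    model="MonoHime/rubert-base-cased-sentiment-new",
)
print(sentiment("Сегодня хорошая погода"))
```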
| {"language": ["ru"], "tags": ["sentiment", "text-classification"], "datasets": ["Tatyana/ru_sentiment_dataset"]} | MonoHime/rubert-base-cased-sentiment-new | null | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"sentiment",
"ru",
"dataset:Tatyana/ru_sentiment_dataset",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers |
# Keras model with ruBERT conversational embedder for Sentiment Analysis
Sentiment classification for Russian texts.
Model trained on [Tatyana/ru_sentiment_dataset](https://huggingface.co/datasets/Tatyana/ru_sentiment_dataset)
## Labels meaning
0: NEUTRAL
1: POSITIVE
2: NEGATIVE
## How to use
```python
!pip install tensorflow-gpu
!pip install deeppavlov
!python -m deeppavlov install squad_bert
!pip install fasttext
!pip install transformers
!python -m deeppavlov install bert_sentence_embedder
from deeppavlov import build_model
model = build_model("Tatyana/rubert_conversational_cased_sentiment/custom_config.json")
model(["Сегодня хорошая погода", "Я счастлив проводить с тобою время", "Мне нравится эта музыкальная композиция"])
```
| {"language": ["ru"], "tags": ["sentiment", "text-classification"], "datasets": ["Tatyana/ru_sentiment_dataset"]} | MonoHime/rubert_conversational_cased_sentiment | null | [
"transformers",
"pytorch",
"bert",
"sentiment",
"text-classification",
"ru",
"dataset:Tatyana/ru_sentiment_dataset",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
image-classification | generic |
## Example
The model is by no means a state-of-the-art model, but nevertheless
produces reasonable image captioning results. It was mainly fine-tuned
as a proof-of-concept for the 🤗 FlaxVisionEncoderDecoder Framework.
The model can be used as follows:
**In PyTorch**
```python
import torch
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, VisionEncoderDecoderModel
loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = VisionEncoderDecoderModel.from_pretrained(loc)
model.eval()
def predict(image):
pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
output_ids = model.generate(pixel_values, max_length=16, num_beams=4, return_dict_in_generate=True).sequences
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
preds = predict(image)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
```
**In Flax**
```python
import jax
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, AutoTokenizer, FlaxVisionEncoderDecoderModel
loc = "ydshieh/vit-gpt2-coco-en"
feature_extractor = ViTFeatureExtractor.from_pretrained(loc)
tokenizer = AutoTokenizer.from_pretrained(loc)
model = FlaxVisionEncoderDecoderModel.from_pretrained(loc)
gen_kwargs = {"max_length": 16, "num_beams": 4}
# Compiling takes some time on the first call, but subsequent inference will be much faster
@jax.jit
def generate(pixel_values):
output_ids = model.generate(pixel_values, **gen_kwargs).sequences
return output_ids
def predict(image):
pixel_values = feature_extractor(images=image, return_tensors="np").pixel_values
output_ids = generate(pixel_values)
preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
return preds
# We will verify our results on an image of cute cats
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
with Image.open(requests.get(url, stream=True).raw) as image:
preds = predict(image)
print(preds)
# should produce
# ['a cat laying on top of a couch next to another cat']
``` | {"library_name": "generic", "tags": ["image-classification"]} | TeamAlerito/gti-coco-en | null | [
"generic",
"pytorch",
"tf",
"jax",
"tensorboard",
"vision-encoder-decoder",
"image-classification",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
null | null | {} | Teekay94-t/Llt | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/Distil-Sentence-Transformer-Fine-Tuned | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/Mini-Sentence-Transformer-Fine-Tuned | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | Teepika/Sentence-Transformer-Check | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/Sentence-Transformer-F | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/Sentence-Transformer-Fine-Tuned | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/Sentence-Transformer-Fine-Tuned2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/Sentence-Transformer-Fine-Tuned3 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
feature-extraction | transformers | {} | Teepika/Sentence-Transformer-NSP-Fine-Tuned | null | [
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
fill-mask | transformers | {} | Teepika/dummy-model | null | [
"transformers",
"pytorch",
"camembert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/finetuned-bert-mrpc | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/my-awesome-model | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/my_new_model | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
question-answering | transformers | {} | Teepika/roberta-base-squad2-finetuned-selqa | null | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | Teepika/t5-small-finetuned-xsum-gcloud1 | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/t5-small-finetuned-xsum-glcoud | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text2text-generation | transformers | {} | Teepika/t5-small-finetuned-xsum-proplus | null | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/t5-small-finetuned-xsum | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/trial | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/trial1 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | Teepika/trial1tpka | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | TegeneG/KNGPT-2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
null | null | {} | TegeneG/KNGPT2 | null | [
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP/albert-base-v2-mnli | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP/bert-base-cased-mnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP/bert-base-uncased-mnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP/electra-base-mnli | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP/xlnet-base-cased-mnli | null | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/albert-base-v2-avg-mnli | null | [
"transformers",
"pytorch",
"albert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | The uploaded model is from epoch 4, with a Matthews correlation of 61.05.
"best_metric": 0.4796141982078552,<br>
"best_model_checkpoint": "/content/output_dir/checkpoint-268",<br>
"epoch": 10.0,<br>
"global_step": 2680,<br>
"is_hyper_param_search": false,<br>
"is_local_process_zero": true,<br>
"is_world_process_zero": true,<br>
"max_steps": 2680,<br>
"num_train_epochs": 10,<br>
"total_flos": 7113018526540800.0,<br>
"trial_name": null,<br>
"trial_params": null<br>
<table class="table table-bordered table-hover table-condensed" style="width: 60%; overflow: auto">
<thead><tr><th title="Field #1">epoch</th>
<th title="Field #2">eval_loss</th>
<th title="Field #3">eval_matthews_correlation</th>
<th title="Field #4">eval_runtime</th>
<th title="Field #5">eval_samples_per_second</th>
<th title="Field #6">eval_steps_per_second</th>
<th title="Field #7">step</th>
<th title="Field #8">learning_rate</th>
<th title="Field #9">loss</th>
</tr></thead>
<tbody><tr>
<td align="left">1</td>
<td align="left">0.4796141982078552</td>
<td align="left">0.5351033849356494</td>
<td align="left">8.8067</td>
<td align="left">118.433</td>
<td align="left">14.875</td>
<td align="left">268</td>
<td align="left">0.000018067415730337083</td>
<td align="left">0.4913</td>
</tr>
<tr>
<td align="left">2</td>
<td align="left">0.5334435701370239</td>
<td align="left">0.5178799252679331</td>
<td align="left">8.9439</td>
<td align="left">116.616</td>
<td align="left">14.647</td>
<td align="left">536</td>
<td align="left">0.00001605992509363296</td>
<td align="left">0.2872</td>
</tr>
<tr>
<td align="left">3</td>
<td align="left">0.5544090270996094</td>
<td align="left">0.5649788851042796</td>
<td align="left">8.9467</td>
<td align="left">116.58</td>
<td align="left">14.642</td>
<td align="left">804</td>
<td align="left">0.000014052434456928841</td>
<td align="left">0.1777</td>
</tr>
<tr>
<td align="left">4</td>
<td align="left">0.5754779577255249</td>
<td align="left">0.6105374636148787</td>
<td align="left">8.8982</td>
<td align="left">117.215</td>
<td align="left">14.722</td>
<td align="left">1072</td>
<td align="left">0.000012044943820224718</td>
<td align="left">0.1263</td>
</tr>
<tr>
<td align="left">5</td>
<td align="left">0.7263916730880737</td>
<td align="left">0.5807606001872874</td>
<td align="left">8.9705</td>
<td align="left">116.27</td>
<td align="left">14.603</td>
<td align="left">1340</td>
<td align="left">0.000010037453183520601</td>
<td align="left">0.0905</td>
</tr>
<tr>
<td align="left">6</td>
<td align="left">0.8121512532234192</td>
<td align="left">0.5651092792103851</td>
<td align="left">8.9924</td>
<td align="left">115.987</td>
<td align="left">14.568</td>
<td align="left">1608</td>
<td align="left">0.00000802996254681648</td>
<td align="left">0.0692</td>
</tr>
<tr>
<td align="left">7</td>
<td align="left">0.941014289855957</td>
<td align="left">0.5632084517291658</td>
<td align="left">8.9583</td>
<td align="left">116.428</td>
<td align="left">14.623</td>
<td align="left">1876</td>
<td align="left">0.000006022471910112359</td>
<td align="left">0.0413</td>
</tr>
<tr>
<td align="left">8</td>
<td align="left">1.0095174312591553</td>
<td align="left">0.5856531698367675</td>
<td align="left">9.0029</td>
<td align="left">115.851</td>
<td align="left">14.551</td>
<td align="left">2144</td>
<td align="left">0.00000401498127340824</td>
<td align="left">0.0327</td>
</tr>
<tr>
<td align="left">9</td>
<td align="left">1.0425965785980225</td>
<td align="left">0.5941395545037332</td>
<td align="left">8.9217</td>
<td align="left">116.906</td>
<td align="left">14.683</td>
<td align="left">2412</td>
<td align="left">0.00000200749063670412</td>
<td align="left">0.0202</td>
</tr>
<tr>
<td align="left">10</td>
<td align="left">1.0782166719436646</td>
<td align="left">0.5956649094312695</td>
<td align="left">8.9472</td>
<td align="left">116.572</td>
<td align="left">14.641</td>
<td align="left">2680</td>
<td align="left">0</td>
<td align="left">0.0104</td>
</tr>
</tbody></table> | {} | TehranNLP-org/bert-base-cased-avg-cola | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
text-classification | transformers | {} | TehranNLP-org/bert-base-cased-avg-mnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-avg-cola-2e-5-21 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-avg-cola-2e-5-42 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-avg-cola-2e-5-63 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-avg-mnli-2e-5-21 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-avg-mnli-2e-5-63 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-avg-mnli | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-avg-sst2-2e-5-21 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-avg-sst2-2e-5-42 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-avg-sst2-2e-5-63 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-cls-hatexplain | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-cls-mnli | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-cls-sst2 | null | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-mrpc-2e-5-42 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/bert-base-uncased-qqp-2e-5-42 | null | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-ag-news-2e-5-42 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-cola-2e-5-21 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-cola-2e-5-42 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-cola-2e-5-63 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | The uploaded model is from epoch 9, with a Matthews correlation of 66.77.
"best_metric": 0.667660908939119,<br>
"best_model_checkpoint": "/content/output_dir/checkpoint-2412",<br>
"epoch": 10.0,<br>
"global_step": 2680,<br>
"is_hyper_param_search": false,<br>
"is_local_process_zero": true,<br>
"is_world_process_zero": true,<br>
"max_steps": 2680,<br>
"num_train_epochs": 10,<br>
"total_flos": 7189983634007040.0,<br>
"trial_name": null,<br>
"trial_params": null<br>
<table class="table table-bordered table-hover table-condensed">
<thead><tr><th title="Field #1">epoch</th>
<th title="Field #2">eval_loss</th>
<th title="Field #3">eval_matthews_correlation</th>
<th title="Field #4">eval_runtime</th>
<th title="Field #5">eval_samples_per_second</th>
<th title="Field #6">eval_steps_per_second</th>
<th title="Field #7">step</th>
<th title="Field #8">learning_rate</th>
<th title="Field #9">loss</th>
</tr></thead>
<tbody><tr>
<td align="right">1</td>
<td align="right">0.5115634202957153</td>
<td align="right">0.5385290213636863</td>
<td align="right">7.985</td>
<td align="right">130.62</td>
<td align="right">16.406</td>
<td align="right">268</td>
<td align="right">0.00009280492497114274</td>
<td align="right">0.4622</td>
</tr>
<tr>
<td align="right">2</td>
<td align="right">0.4201788902282715</td>
<td align="right">0.6035894895952164</td>
<td align="right">8.0283</td>
<td align="right">129.916</td>
<td align="right">16.317</td>
<td align="right">536</td>
<td align="right">0.00008249326664101577</td>
<td align="right">0.2823</td>
</tr>
<tr>
<td align="right">3</td>
<td align="right">0.580650806427002</td>
<td align="right">0.5574138665741355</td>
<td align="right">8.1314</td>
<td align="right">128.268</td>
<td align="right">16.11</td>
<td align="right">804</td>
<td align="right">0.00007218160831088881</td>
<td align="right">0.1804</td>
</tr>
<tr>
<td align="right">4</td>
<td align="right">0.4439031779766083</td>
<td align="right">0.6557697896854868</td>
<td align="right">8.1435</td>
<td align="right">128.078</td>
<td align="right">16.087</td>
<td align="right">1072</td>
<td align="right">0.00006186994998076183</td>
<td align="right">0.1357</td>
</tr>
<tr>
<td align="right">5</td>
<td align="right">0.5736830830574036</td>
<td align="right">0.6249925495853809</td>
<td align="right">8.0533</td>
<td align="right">129.512</td>
<td align="right">16.267</td>
<td align="right">1340</td>
<td align="right">0.00005155829165063486</td>
<td align="right">0.0913</td>
</tr>
<tr>
<td align="right">6</td>
<td align="right">0.7729296684265137</td>
<td align="right">0.6188970025554703</td>
<td align="right">8.081</td>
<td align="right">129.068</td>
<td align="right">16.211</td>
<td align="right">1608</td>
<td align="right">0.000041246633320507885</td>
<td align="right">0.065</td>
</tr>
<tr>
<td align="right">7</td>
<td align="right">0.7351673245429993</td>
<td align="right">0.6405767700619004</td>
<td align="right">8.1372</td>
<td align="right">128.176</td>
<td align="right">16.099</td>
<td align="right">1876</td>
<td align="right">0.00003093497499038092</td>
<td align="right">0.0433</td>
</tr>
<tr>
<td align="right">8</td>
<td align="right">0.7900031208992004</td>
<td align="right">0.6565021466238845</td>
<td align="right">8.1095</td>
<td align="right">128.615</td>
<td align="right">16.154</td>
<td align="right">2144</td>
<td align="right">0.000020623316660253942</td>
<td align="right">0.0199</td>
</tr>
<tr>
<td align="right">9</td>
<td align="right">0.8539554476737976</td>
<td align="right">0.667660908939119</td>
<td align="right">8.1204</td>
<td align="right">128.442</td>
<td align="right">16.132</td>
<td align="right">2412</td>
<td align="right">0.000010311658330126971</td>
<td align="right">0.0114</td>
</tr>
<tr>
<td align="right">10</td>
<td align="right">0.9261117577552795</td>
<td align="right">0.660301076782038</td>
<td align="right">8.0088</td>
<td align="right">130.231</td>
<td align="right">16.357</td>
<td align="right">2680</td>
<td align="right">0</td>
<td align="right">0.0066</td>
</tr>
</tbody></table> | {} | TehranNLP-org/electra-base-avg-cola | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
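The electra-base-avg-cola row above is the only row in this chunk whose text field carries real content (per-epoch CoLA evaluation metrics, best Matthews Correlation 66.77 at epoch 9). Below is a minimal usage sketch for that checkpoint, assuming it exposes the standard transformers sequence-classification head; the label mapping (0 = unacceptable, 1 = acceptable) is an assumption, since the card does not state an id2label mapping.

```python
# Hedged sketch: load the TehranNLP-org/electra-base-avg-cola checkpoint
# and score a single sentence for CoLA-style acceptability.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "TehranNLP-org/electra-base-avg-cola"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

sentence = "The book was written by the author last year."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Assumed label order: index 1 = "acceptable"; verify against the
# checkpoint's config.id2label before relying on it.
predicted_class = logits.argmax(dim=-1).item()
print(predicted_class)
```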
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-mnli-2e-5-21 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-mnli-2e-5-63 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-mnli-2e-5 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-mnli | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-qqp-2e-5-42 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-sst2-2e-5-21 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-sst2-2e-5-42 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-avg-sst2-2e-5-63 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-mrpc-2e-5-42 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-qqp-2e-5-42 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/electra-base-qqp-cls-2e-5-42 | null | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/roberta-base-mnli-2e-5-42 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/roberta-base-mrpc-2e-5-42 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/roberta-base-qqp-2e-5-42 | null | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/xlnet-base-cased-avg-cola-2e-5-21 | null | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/xlnet-base-cased-avg-cola-2e-5-42 | null | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/xlnet-base-cased-avg-cola-2e-5-63 | null | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/xlnet-base-cased-avg-mnli-2e-5-21 | null | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/xlnet-base-cased-avg-mnli-2e-5-63 | null | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |
|
text-classification | transformers | {} | TehranNLP-org/xlnet-base-cased-avg-mnli-2e-5 | null | [
"transformers",
"pytorch",
"xlnet",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| null | 2022-03-02T23:29:05+00:00 |