| Column | Type | Range / cardinality |
|:---|:---|:---|
| pipeline_tag | string (categorical) | 48 values |
| library_name | string (categorical) | 205 values |
| text | string | length 0-18.3M |
| metadata | string | length 2-1.07B |
| id | string | length 5-122 |
| last_modified | null | n/a |
| tags | list | length 1-1.84k |
| sha | null | n/a |
| created_at | string | length 25 |
text2text-generation
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test31
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test32
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test33
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test34
null
[ "transformers", "pytorch", "bart", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test35
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test36
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test4
null
[ "transformers", "pytorch", "bart", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test5
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test6
null
[ "transformers", "pytorch", "bart", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test7
null
[ "transformers", "pytorch", "bart", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test8
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Pyke/bart-finetuned-on-patent-Deepspeed-Test9
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Pyke/bart-finetuned-with-patent-test
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
This model was finetuned by Qichang Zheng (Pyke) on a patent abstract dataset (7 million records), with 'facebook/bart-base' as both the tokenizer and the base model. The input is identical to the output (the patent abstract), so the model is trained to reconstruct abstracts. It was finetuned to serve as a reference for the research Qichang is part of.
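A minimal usage sketch (the abstract text and generation settings are illustrative, and the standard `transformers` seq2seq API is assumed):

```python
# Minimal sketch: load the finetuned checkpoint with the standard
# transformers seq2seq API and reconstruct a patent abstract.
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")  # per the card
model = BartForConditionalGeneration.from_pretrained("Pyke/bart-finetuned-with-patent")

abstract = "A method and apparatus for coating a substrate are disclosed."  # illustrative input
inputs = tokenizer(abstract, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```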
{}
Pyke/bart-finetuned-with-patent
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pyroghy/DialoGPT-Rin-Tohsaka
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Pyroghy/DialoGPT-small-rin_tohsaka
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/Ab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/AbkPre
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/Abkh
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/Abkha
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/AbkhazPredict
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/AbkhazPrediction
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/Abkhi
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/abk-eng
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/opus-mt-ab-en
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QA/your-model-name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
Propaganda Techniques Analysis BERT
----

This is a BERT-based model that predicts propaganda techniques in English news articles. The model is described in [this paper](https://propaganda.qcri.org/papers/EMNLP_2019__Fine_Grained_Propaganda_Detection.pdf).

## Model description

Please find the propaganda definitions here: https://propaganda.qcri.org/annotations/definitions.html

You can also try the model in action here: https://www.tanbih.org/prta

### How to use

```python
>>> import torch
>>> from transformers import BertTokenizerFast
>>> from .model import BertForTokenAndSequenceJointClassification
>>>
>>> tokenizer = BertTokenizerFast.from_pretrained('bert-base-cased')
>>> model = BertForTokenAndSequenceJointClassification.from_pretrained(
>>>    "QCRI/PropagandaTechniquesAnalysis-en-BERT",
>>>    revision="v0.1.0",
>>> )
>>>
>>> inputs = tokenizer.encode_plus("Hello, my dog is cute", return_tensors="pt")
>>> outputs = model(**inputs)
>>> sequence_class_index = torch.argmax(outputs.sequence_logits, dim=-1)
>>> sequence_class = model.sequence_tags[sequence_class_index[0]]
>>> token_class_index = torch.argmax(outputs.token_logits, dim=-1)
>>> tokens = tokenizer.convert_ids_to_tokens(inputs.input_ids[0][1:-1])
>>> tags = [model.token_tags[i] for i in token_class_index[0].tolist()[1:-1]]
```

### BibTeX entry and citation info

```bibtex
@inproceedings{da-san-martino-etal-2019-fine,
    title = "Fine-Grained Analysis of Propaganda in News Article",
    author = "Da San Martino, Giovanni and Yu, Seunghak and Barr{\'o}n-Cede{\~n}o, Alberto and Petrov, Rostislav and Nakov, Preslav",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/D19-1565",
    doi = "10.18653/v1/D19-1565",
    pages = "5636--5646",
    abstract = "Propaganda aims at influencing people{'}s mindset with the purpose of advancing a specific agenda. Previous work has addressed propaganda detection at document level, typically labelling all articles from a propagandistic news outlet as propaganda. Such noisy gold labels inevitably affect the quality of any learning system trained on them. A further issue with most existing systems is the lack of explainability. To overcome these limitations, we propose a novel task: performing fine-grained analysis of texts by detecting all fragments that contain propaganda techniques as well as their type. In particular, we create a corpus of news articles manually annotated at fragment level with eighteen propaganda techniques and propose a suitable evaluation measure. We further design a novel multi-granularity neural network, and we show that it outperforms several strong BERT-based baselines.",
}
```
{"language": "en", "license": "MIT", "tags": ["propaganda", "bert"], "datasets": [], "metrics": [], "thumbnail": "https://pbs.twimg.com/profile_images/1092721745994440704/d6R-AHzj_400x400.jpg"}
QCRI/PropagandaTechniquesAnalysis-en-BERT
null
[ "transformers", "pytorch", "bert", "propaganda", "en", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QE/numerai_statistics
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QQ/scarlett
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Qasim/wav2vec2-large-xls-r-300m-turkish-colab
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
QianWeiTech/GPT2-News
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
{}
QianWeiTech/GPT2-Titles
null
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Qiaozhen/fake-news-detector
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
# Model Trained Using AutoNLP

- Problem type: Binary Classification
- Model ID: 36769078
- CO2 Emissions (in grams): 23.42719853096565

## Validation Metrics

- Loss: 0.15959647297859192
- Accuracy: 0.9817757009345794
- Precision: 0.980411361410382
- Recall: 0.9813725490196078
- AUC: 0.9982379201680672
- F1: 0.9808917197452229

## Usage

You can use cURL to access this model:

```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/Qinghui/autonlp-fake-covid-news-36769078
```

Or the Python API:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("Qinghui/autonlp-fake-covid-news-36769078", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Qinghui/autonlp-fake-covid-news-36769078", use_auth_token=True)

inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
{"language": "unk", "tags": "autonlp", "datasets": ["Qinghui/autonlp-data-fake-covid-news"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 23.42719853096565}
Qinghui/autonlp-fake-covid-news-36769078
null
[ "transformers", "pytorch", "roberta", "text-classification", "autonlp", "unk", "dataset:Qinghui/autonlp-data-fake-covid-news", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
# Punctuator for Uncased English

The model is fine-tuned from `DistilBertForTokenClassification` to add punctuation to plain (uncased English) text.

## Usage

```python
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast

model = DistilBertForTokenClassification.from_pretrained("Qishuai/distilbert_punctuator_en")
tokenizer = DistilBertTokenizerFast.from_pretrained("Qishuai/distilbert_punctuator_en")
```

## Model Overview

### Training data

A combination of the following three datasets:

- BBC news: stories in five topical areas from the BBC news website, 2004-2005. [Reference](https://www.kaggle.com/hgultekin/bbcnewsarchive)
- News articles: 20,000 short news articles scraped from the Hindu, the Indian Times and the Guardian between Feb 2017 and Aug 2017. [Reference](https://www.kaggle.com/sunnysai12345/news-summary?select=news_summary_more.csv)
- TED talks: transcripts of over 4,000 TED talks between 2004 and 2019. [Reference](https://www.kaggle.com/miguelcorraljr/ted-ultimate-dataset)

### Model Performance

Validation on 500 samples of a dataset scraped from the https://www.thenews.com.pk website. [Reference](https://www.kaggle.com/asad1m9a9h6mood/news-articles)

| | precision | recall | f1-score | support |
|:--------------:|:---------:|:------:|:--------:|:-------:|
| COMMA | 0.66 | 0.55 | 0.60 | 7064 |
| EXLAMATIONMARK | 1.00 | 0.00 | 0.00 | 5 |
| PERIOD | 0.73 | 0.63 | 0.68 | 6573 |
| QUESTIONMARK | 0.54 | 0.41 | 0.47 | 17 |
| micro avg | 0.69 | 0.59 | 0.64 | 13659 |
| macro avg | 0.73 | 0.40 | 0.44 | 13659 |
| weighted avg | 0.69 | 0.59 | 0.64 | 13659 |

Validation on 86 TED talks from 2020 that are not included in the training dataset. [Reference](https://www.kaggle.com/thegupta/ted-talk)

| | precision | recall | f1-score | support |
|:--------------:|:---------:|:------:|:--------:|:-------:|
| COMMA | 0.71 | 0.56 | 0.63 | 10712 |
| EXLAMATIONMARK | 0.45 | 0.07 | 0.12 | 75 |
| PERIOD | 0.75 | 0.65 | 0.70 | 7921 |
| QUESTIONMARK | 0.73 | 0.67 | 0.70 | 827 |
| micro avg | 0.73 | 0.60 | 0.66 | 19535 |
| macro avg | 0.66 | 0.49 | 0.53 | 19535 |
| weighted avg | 0.73 | 0.60 | 0.66 | 19535 |
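A minimal inference sketch (the lowercase example sentence is illustrative; the punctuation label names are assumed to come from the checkpoint's `id2label` config):

```python
# Minimal sketch: tag each token with a punctuation label.
import torch
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast

model = DistilBertForTokenClassification.from_pretrained("Qishuai/distilbert_punctuator_en")
tokenizer = DistilBertTokenizerFast.from_pretrained("Qishuai/distilbert_punctuator_en")

text = "my name is sarah and i live in london"  # illustrative uncased input
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predictions = torch.argmax(logits, dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, pred in zip(tokens, predictions):
    print(token, model.config.id2label[pred])
```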
{}
Qishuai/distilbert_punctuator_en
null
[ "transformers", "pytorch", "safetensors", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
# Punctuator for Simplified Chinese

The model is fine-tuned from `DistilBertForTokenClassification` to add punctuation to plain simplified-Chinese text, starting from a distilled version of `bert-base-chinese`.

## Usage

```python
from transformers import DistilBertForTokenClassification, DistilBertTokenizerFast

model = DistilBertForTokenClassification.from_pretrained("Qishuai/distilbert_punctuator_zh")
tokenizer = DistilBertTokenizerFast.from_pretrained("Qishuai/distilbert_punctuator_zh")
```

## Model Overview

### Training data

- News articles of People's Daily 2014. [Reference](https://github.com/InsaneLife/ChineseNLPCorpus)

### Model Performance

Validation with the MSRA training dataset. [Reference](https://github.com/InsaneLife/ChineseNLPCorpus/tree/master/NER/MSRA)

| | precision | recall | f1-score | support |
|:----------------:|:---------:|:------:|:--------:|:-------:|
| C_COMMA | 0.67 | 0.59 | 0.63 | 91566 |
| C_DUNHAO | 0.50 | 0.37 | 0.42 | 21013 |
| C_EXLAMATIONMARK | 0.23 | 0.06 | 0.09 | 399 |
| C_PERIOD | 0.84 | 0.99 | 0.91 | 44258 |
| C_QUESTIONMARK | 0.00 | 1.00 | 0.00 | 0 |
| micro avg | 0.71 | 0.67 | 0.69 | 157236 |
| macro avg | 0.45 | 0.60 | 0.41 | 157236 |
| weighted avg | 0.69 | 0.67 | 0.68 | 157236 |
{}
Qishuai/distilbert_punctuator_zh
null
[ "transformers", "pytorch", "safetensors", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QiunanLiu/model_name
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QueenIonna/Taeyong
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QuentinColdwater/DialoGPT-small-coldwater
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
QuentinColdwater/DialoGPT-small-quentincoldwater
null
[ "transformers", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QuentinColdwater/q_coldwater_chatbot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
QuentinColdwater/quentin_chatbot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Quick/mindall-e
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
Testing PPO-trainer
{}
QuickRead/PPO_training
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
{}
QuickRead/Reward_training_Pegasus_xsum
null
[ "transformers", "pytorch", "pegasus", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# fine-tune-Pegasus

This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the xsum dataset. It achieves the following results on the evaluation set:
- Loss: 2.3242
- Rouge1: 17.993
- Rouge2: 2.9392
- Rougel: 12.313
- Rougelsum: 13.3091
- Gen Len: 67.0552

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
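A minimal summarization sketch (the article text and generation settings are illustrative; it is assumed the checkpoint ships its own tokenizer):

```python
# Minimal sketch: summarize a short article with the xsum-finetuned Pegasus.
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

model_id = "QuickRead/fine-tune-Pegasus"
# If the repo lacks tokenizer files, google/pegasus-large's tokenizer should match.
tokenizer = PegasusTokenizer.from_pretrained(model_id)
model = PegasusForConditionalGeneration.from_pretrained(model_id)

article = "The tower is 324 metres tall, about the same height as an 81-storey building."
inputs = tokenizer(article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```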
{"tags": ["generated_from_trainer"], "datasets": ["xsum"], "metrics": ["rouge"], "model-index": [{"name": "fine-tune-Pegasus", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "xsum", "type": "xsum", "args": "default"}, "metrics": [{"type": "rouge", "value": 17.993, "name": "Rouge1"}]}]}]}
QuickRead/fine-tune-Pegasus
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:xsum", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
QuickRead/pegasus-reddit-full
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# pegasus-reddit

This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the reddit dataset. It achieves the following results on the evaluation set:
- Loss: 3.3329
- Rouge1: 23.967
- Rouge2: 5.0032
- Rougel: 15.3267
- Rougelsum: 18.5905
- Gen Len: 69.2193

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 6.35e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- num_epochs: 1.0
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.16.2
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "datasets": ["reddit"], "metrics": ["rouge"], "model-index": [{"name": "pegasus-reddit", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Language Modeling"}, "dataset": {"name": "reddit", "type": "reddit", "args": "default"}, "metrics": [{"type": "rouge", "value": 23.967, "name": "Rouge1"}]}]}]}
QuickRead/pegasus-reddit
null
[ "transformers", "pytorch", "pegasus", "text2text-generation", "generated_from_trainer", "dataset:reddit", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Quin/Kenneth
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Quindence/DialoGPT-small-LaytonBot
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Qwq/Qq
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
RAPIDS/distilbert-cyberlogs
null
[ "transformers", "pytorch", "distilbert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
RAPIDS/electra-cyberlogs
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RAQA/hshhdhdddd
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RASMUS/norwegian-roberta-base
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
RASMUS/wav2vec2-xlsr-1b-et-lm
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-et-lm-1B

This model was fine-tuned on the Estonian (et) data of mozilla-foundation/common_voice_8_0, using the train+other+validation splits. It achieves the following results on the test set (loss reported at the last eval step, step 2000/2040, during training):
- Loss: 0.2150
- Wer: 0.2012

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 1
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
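A minimal transcription sketch (the silent placeholder audio is illustrative; real use requires 16 kHz mono speech, and the repo is assumed to ship a matching processor):

```python
# Minimal sketch: greedy CTC decoding of an audio array.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "RASMUS/wav2vec2-xlsr-1b-et"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = np.zeros(16000, dtype=np.float32)  # placeholder: one second of silence at 16 kHz
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```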
{"language": "et", "tags": ["generated_from_trainer", "mozilla-foundation/common_voice_8_0", "audio", "automatic-speech-recognition", "speech", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLS-R 1B Wav2Vec2 Estonian by Rasmus Toivanen", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "et"}, "metrics": [{"type": "wer", "value": 20.12, "name": "Test WER"}, {"type": "cer", "value": 3.82, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "et"}, "metrics": [{"type": "wer", "value": 40.77, "name": "Test WER"}, {"type": "cer", "value": 12.32, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "et"}, "metrics": [{"type": "wer", "value": 41.97, "name": "Test WER"}]}]}]}
RASMUS/wav2vec2-xlsr-1b-et
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "mozilla-foundation/common_voice_8_0", "audio", "speech", "robust-speech-event", "hf-asr-leaderboard", "et", "dataset:mozilla-foundation/common_voice_8_0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-1b-ru

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.1352
- Wer: 0.0971

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5462 | 0.35 | 500 | 0.4027 | 0.3575 |
| 0.498 | 0.69 | 1000 | 0.2588 | 0.2513 |
| 0.4279 | 1.04 | 1500 | 0.2265 | 0.2204 |
| 0.4099 | 1.38 | 2000 | 0.2189 | 0.1979 |
| 0.4688 | 1.73 | 2500 | 0.2100 | 0.1920 |
| 0.2241 | 2.07 | 3000 | 0.1980 | 0.1767 |
| 0.2056 | 2.42 | 3500 | 0.2020 | 0.1683 |
| 0.3423 | 2.76 | 4000 | 0.1862 | 0.1606 |
| 0.2478 | 3.11 | 4500 | 0.1787 | 0.1563 |
| 0.3079 | 3.45 | 5000 | 0.1759 | 0.1555 |
| 0.2477 | 3.8 | 5500 | 0.1713 | 0.1423 |
| 0.1718 | 4.14 | 6000 | 0.1695 | 0.1391 |
| 0.1675 | 4.49 | 6500 | 0.1677 | 0.1372 |
| 0.1631 | 4.83 | 7000 | 0.1652 | 0.1333 |
| 0.1429 | 5.18 | 7500 | 0.1605 | 0.1308 |
| 0.1505 | 5.52 | 8000 | 0.1612 | 0.1245 |
| 0.1385 | 5.87 | 8500 | 0.1487 | 0.1225 |
| 0.1285 | 6.22 | 9000 | 0.1526 | 0.1201 |
| 0.1153 | 6.56 | 9500 | 0.1464 | 0.1172 |
| 0.1159 | 6.91 | 10000 | 0.1505 | 0.1143 |
| 0.1061 | 7.25 | 10500 | 0.1444 | 0.1106 |
| 0.1016 | 7.6 | 11000 | 0.1427 | 0.1075 |
| 0.1125 | 7.94 | 11500 | 0.1386 | 0.1045 |
| 0.0937 | 8.29 | 12000 | 0.1403 | 0.1022 |
| 0.1059 | 8.63 | 12500 | 0.1406 | 0.1022 |
| 0.0857 | 8.98 | 13000 | 0.1372 | 0.0992 |
| 0.0901 | 9.32 | 13500 | 0.1380 | 0.0977 |
| 0.0913 | 9.67 | 14000 | 0.1352 | 0.0971 |

### Framework versions

- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.3
- Tokenizers 0.11.0
{"language": "ru", "tags": ["audio", "automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "speech"], "datasets": ["mozilla-foundation/common_voice_8_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLS-R 1B Wav2Vec2 Russian by Rasmus Toivanen", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ru"}, "metrics": [{"type": "wer", "value": 10.83, "name": "Test WER"}, {"type": "cer", "value": 2.41, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ru"}, "metrics": [{"type": "wer", "value": 37.71, "name": "Test WER"}, {"type": "cer", "value": 12.98, "name": "Test CER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ru"}, "metrics": [{"type": "wer", "value": 31.89, "name": "Test WER"}]}]}]}
RASMUS/wav2vec2-xlsr-1b-ru
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "audio", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "speech", "ru", "dataset:mozilla-foundation/common_voice_8_0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
RASMUS/wav2vec2-xlsr-300-lm
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
RASMUS/wav2vec2-xlsr-300-versatile-test
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
RASMUS/wav2vec2-xlsr-300
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
RASMUS/wav2vec2-xlsr-300m-et
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-fi-lm-1B

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common voice train/dev/other datasets. It achieves the following results on the evaluation set without a language model:
- Loss: 0.1853
- Wer: 0.2205

With a language model:
- Wer: 0.1026

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.8158 | 0.67 | 400 | 0.4835 | 0.6310 |
| 0.5679 | 1.33 | 800 | 0.4806 | 0.5538 |
| 0.6055 | 2.0 | 1200 | 0.3888 | 0.5083 |
| 0.5353 | 2.67 | 1600 | 0.3258 | 0.4365 |
| 0.4883 | 3.33 | 2000 | 0.3313 | 0.4204 |
| 0.4513 | 4.0 | 2400 | 0.2924 | 0.3904 |
| 0.3753 | 4.67 | 2800 | 0.2593 | 0.3608 |
| 0.3478 | 5.33 | 3200 | 0.2832 | 0.3551 |
| 0.3796 | 6.0 | 3600 | 0.2495 | 0.3402 |
| 0.2556 | 6.67 | 4000 | 0.2342 | 0.3106 |
| 0.229 | 7.33 | 4400 | 0.2181 | 0.2812 |
| 0.205 | 8.0 | 4800 | 0.2041 | 0.2523 |
| 0.1654 | 8.67 | 5200 | 0.2015 | 0.2416 |
| 0.152 | 9.33 | 5600 | 0.1942 | 0.2294 |
| 0.1569 | 10.0 | 6000 | 0.1853 | 0.2205 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
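A sketch of LM-boosted decoding (it is assumed the repo ships decoder files compatible with `Wav2Vec2ProcessorWithLM`, which additionally requires `pyctcdecode` and `kenlm`; the placeholder audio is illustrative):

```python
# Sketch: decode with the shipped n-gram language model instead of greedy argmax.
import numpy as np
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

model_id = "RASMUS/wav2vec2-xlsr-fi-lm-1B"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech = np.zeros(16000, dtype=np.float32)  # placeholder audio, 16 kHz mono
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# With an LM, decoding takes the raw logits rather than argmax token ids.
print(processor.batch_decode(logits.numpy()).text)
```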
{"language": ["fi"], "license": "apache-2.0", "tags": ["generated_from_trainer", "automatic-speech-recognition", "robust-speech-event", "hf-asr-leaderboard"], "model-index": [{"name": "wav2vec2-xlsr-fi-lm-1B", "results": []}]}
RASMUS/wav2vec2-xlsr-fi-lm-1B
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard", "fi", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-fi-train-aug-lm-1B

This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1499
- Wer: 0.1955

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6473 | 0.29 | 400 | 0.2857 | 0.3825 |
| 0.6039 | 0.58 | 800 | 0.2459 | 0.3476 |
| 0.4757 | 0.87 | 1200 | 0.2338 | 0.3274 |
| 0.4473 | 1.15 | 1600 | 0.2246 | 0.3128 |
| 0.4322 | 1.44 | 2000 | 0.1962 | 0.2805 |
| 0.3961 | 1.73 | 2400 | 0.2070 | 0.2797 |
| 0.3642 | 2.02 | 2800 | 0.1790 | 0.2473 |
| 0.3561 | 2.31 | 3200 | 0.1769 | 0.2375 |
| 0.282 | 2.6 | 3600 | 0.1672 | 0.2263 |
| 0.2978 | 2.89 | 4000 | 0.1636 | 0.2192 |
| 0.2722 | 3.17 | 4400 | 0.1637 | 0.2102 |
| 0.2924 | 3.46 | 4800 | 0.1506 | 0.2021 |
| 0.2631 | 3.75 | 5200 | 0.1499 | 0.1955 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
{"language": "fi", "tags": ["generated_from_trainer", "mozilla-foundation/common_voice_7_0", "audio", "automatic-speech-recognition", "speech"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"]}
RASMUS/wav2vec2-xlsr-fi-train-aug-bigLM-1B
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "mozilla-foundation/common_voice_7_0", "audio", "speech", "fi", "dataset:mozilla-foundation/common_voice_7_0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
{}
RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B-lower-lr
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# wav2vec2-xlsr-fi-train-aug-lm-1B

This model was trained from scratch on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1499
- Wer: 0.1955

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 4
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6473 | 0.29 | 400 | 0.2857 | 0.3825 |
| 0.6039 | 0.58 | 800 | 0.2459 | 0.3476 |
| 0.4757 | 0.87 | 1200 | 0.2338 | 0.3274 |
| 0.4473 | 1.15 | 1600 | 0.2246 | 0.3128 |
| 0.4322 | 1.44 | 2000 | 0.1962 | 0.2805 |
| 0.3961 | 1.73 | 2400 | 0.2070 | 0.2797 |
| 0.3642 | 2.02 | 2800 | 0.1790 | 0.2473 |
| 0.3561 | 2.31 | 3200 | 0.1769 | 0.2375 |
| 0.282 | 2.6 | 3600 | 0.1672 | 0.2263 |
| 0.2978 | 2.89 | 4000 | 0.1636 | 0.2192 |
| 0.2722 | 3.17 | 4400 | 0.1637 | 0.2102 |
| 0.2924 | 3.46 | 4800 | 0.1506 | 0.2021 |
| 0.2631 | 3.75 | 5200 | 0.1499 | 0.1955 |

### Framework versions

- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
{"language": "fi", "tags": ["generated_from_trainer", "mozilla-foundation/common_voice_7_0", "audio", "automatic-speech-recognition", "speech", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_7_0"], "metrics": ["wer", "cer"], "model-index": [{"name": "XLS-R 1B Wav2Vec2 Finnish by Rasmus Toivanen", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 7", "type": "mozilla-foundation/common_voice_7_0", "args": "fi"}, "metrics": [{"type": "wer", "value": 10.96, "name": "Test WER"}, {"type": "cer", "value": 2.81, "name": "Test CER"}]}]}]}
RASMUS/wav2vec2-xlsr-fi-train-aug-lm-1B
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "mozilla-foundation/common_voice_7_0", "audio", "speech", "robust-speech-event", "hf-asr-leaderboard", "fi", "dataset:mozilla-foundation/common_voice_7_0", "model-index", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RASMUS/wav2vec2-xlsr-fi-train-aug-lm-aalto-10k-1B
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RASMUS/wav2vec2-xlsr-fi-train-aug-lm-aalto-full-1B
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
RAhul03/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# chatbot
{"tags": ["conversational"]}
REAP3R/Chat-bot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Saitama DialoGPT Model
{"tags": ["conversational"]}
REZERO/DialoGPT-medium-saitama
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
RICH twins
{}
RICH/rui-test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
This is a test by rui.
{}
RICH/test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
Try the test sentence: <i>The woman said "my name is Sarah [and] I live in London."</i> The model should tag the tokens in the sentence with information about whether or not they are contained within a compound clause. If you find the model useful, please cite my thesis which presents the dataset used for finetuning: Evans, R. (2020) Sentence Simplification for Text Processing. Doctoral thesis. University of Wolverhampton. Wolverhampton, UK. (http://rgcl.wlv.ac.uk/~richard/Evans2020_SentenceSimplificationForTextProcessing.pdf) There you will find more information about the tagging scheme. The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
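A minimal tagging sketch via the `token-classification` pipeline (the bracketed coordinator follows the card's advice; the output format is the pipeline's standard one):

```python
# Minimal sketch: tag each token of the card's test sentence.
from transformers import pipeline

tagger = pipeline("token-classification", model="RJ3vans/CCVspanTagger")
sentence = 'The woman said "my name is Sarah [and] I live in London."'
for item in tagger(sentence):
    print(item["word"], item["entity"])
```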
{}
RJ3vans/CCVspanTagger
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
This model identifies compound nouns in input sentences. Try the test sentence: I love apples [and] potatoes. Accuracy is best when you place square brackets around the coordinating conjunction. The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
{}
RJ3vans/CLNspanTagger
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
This model identifies compound noun phrases in an input sentence. Try the test sentence: The inquiry, which continues, will recall John Smith [and] Peter Montgomery next month for further questioning. Note that you need square brackets around the conjunction coordinating the NPs. The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
{}
RJ3vans/CMN1spanTagger
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
This model identifies compound verb phrases (including conjoins and coordinators) in an input sentence. Try the test sentence: John kicked the ball [and] chased after it. The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
{}
RJ3vans/CMV1spanTagger
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
Try the test sentences: <i>My name is Sarah and I live in London[, which] is the largest city in the UK.</i> <i>John thought that that was a strange idea.</i> <i>It was on Tuesdays when Peter took Tess for a walk.</i> <i>John was so large that he had to crouch to fit through the front door.</i> The model should tag the tokens in the sentence with information about whether or not they are contained within particular types of syntactic constituents. If you find the model useful, please cite my thesis which presents the dataset used for finetuning: Evans, R. (2020) Sentence Simplification for Text Processing. Doctoral thesis. University of Wolverhampton. Wolverhampton, UK. (http://rgcl.wlv.ac.uk/~richard/Evans2020_SentenceSimplificationForTextProcessing.pdf) There you will find more information about the tagging scheme.
{}
RJ3vans/13.05.2022.SSCCVspanTagger
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
This model identifies complex NPs modified by non-finite nominal clauses ("appositives") in the input sentence. Try the test sentence: My name is Sarah and I live in London[,] the capital of England. Note that accuracy is greatly improved if you place square brackets around the left boundary of the non-finite nominal clause. The model was derived using code adapted from an original program written by Dr. Le An Ha at the University of Wolverhampton.
{}
RJ3vans/SSMNspanTagger
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
token-classification
transformers
This model is used to tag the tokens in an input sequence with information about the different signs of syntactic complexity that they contain. For more details, please see Chapters 2 and 3 of my thesis (http://rgcl.wlv.ac.uk/~richard/Evans2020_SentenceSimplificationForTextProcessing.pdf). It was derived using code written by Dr. Le An Ha at the University of Wolverhampton.

To use this model, the following code snippet may help:

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

SignTaggingModel = AutoModelForTokenClassification.from_pretrained('RJ3vans/SignTagger')
SignTaggingTokenizer = AutoTokenizer.from_pretrained('RJ3vans/SignTagger')

label_list = ["M:N_CCV", "M:N_CIN", "M:N_CLA", "M:N_CLAdv", "M:N_CLN", "M:N_CLP",  # This could be obtained from the config file
              "M:N_CLQ", "M:N_CLV", "M:N_CMA1", "M:N_CMAdv", "M:N_CMN1", "M:N_CMN2",
              "M:N_CMN3", "M:N_CMN4", "M:N_CMP", "M:N_CMP2", "M:N_CMV1", "M:N_CMV2",
              "M:N_CMV3", "M:N_COMBINATORY", "M:N_CPA", "M:N_ESAdvP", "M:N_ESCCV",
              "M:N_ESCM", "M:N_ESMA", "M:N_ESMAdvP", "M:N_ESMI", "M:N_ESMN",
              "M:N_ESMP", "M:N_ESMV", "M:N_HELP", "M:N_SPECIAL", "M:N_SSCCV",
              "M:N_SSCM", "M:N_SSMA", "M:N_SSMAdvP", "M:N_SSMI", "M:N_SSMN",
              "M:N_SSMP", "M:N_SSMV", "M:N_STQ", "M:N_V", "M:N_nan", "M:Y_CCV",
              "M:Y_CIN", "M:Y_CLA", "M:Y_CLAdv", "M:Y_CLN", "M:Y_CLP", "M:Y_CLQ",
              "M:Y_CLV", "M:Y_CMA1", "M:Y_CMAdv", "M:Y_CMN1", "M:Y_CMN2", "M:Y_CMN4",
              "M:Y_CMP", "M:Y_CMP2", "M:Y_CMV1", "M:Y_CMV2", "M:Y_CMV3",
              "M:Y_COMBINATORY", "M:Y_CPA", "M:Y_ESAdvP", "M:Y_ESCCV", "M:Y_ESCM",
              "M:Y_ESMA", "M:Y_ESMAdvP", "M:Y_ESMI", "M:Y_ESMN", "M:Y_ESMP",
              "M:Y_ESMV", "M:Y_HELP", "M:Y_SPECIAL", "M:Y_SSCCV", "M:Y_SSCM",
              "M:Y_SSMA", "M:Y_SSMAdvP", "M:Y_SSMI", "M:Y_SSMN", "M:Y_SSMP",
              "M:Y_SSMV", "M:Y_STQ"]

sentence = 'The County Court in Nottingham heard that Roger Gedge, 30, had his leg amputated following the incident outside a rock festival in Wollaton Park, Nottingham, five years ago.'

tokens = SignTaggingTokenizer.tokenize(SignTaggingTokenizer.decode(SignTaggingTokenizer.encode(sentence)))
inputs = SignTaggingTokenizer.encode(sentence, return_tensors="pt")

outputs = SignTaggingModel(inputs)[0]
predictions = torch.argmax(outputs, dim=2)

print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])
```
{}
RJ3vans/SignTagger
null
[ "transformers", "pytorch", "bert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RMRS/roberta-base-bne-finetuned-amazon_reviews_multi
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RREXIONN/onetwo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RRob06/rob_data
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RTGuo/1st_model
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
null
# My Awesome Model
{"tags": ["conversational"]}
RTM/ChatBot
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
null
# Lucky
{"tags": ["conversational"]}
RTM/Lucky
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# TIMBOT DialoGPT model
{"tags": ["conversational"]}
RTurk/DialoGPT-small-TIMBOT
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
RTurk/DialoGPT-small-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
!!! At the moment only the distilled model is available; the downloadable version comes from one of the first checkpoints. We plan to post the full model in the next few days. !!!

This is a distilled HRBert model for an MLM task. Masked tokens can be filled as follows:

```python
# pip install transformers
from transformers import pipeline

fill_mask = pipeline(
   "fill-mask",
   model='RabotaRu/HRBert-mini',
   tokenizer='RabotaRu/HRBert-mini'
)

fill_mask('<mask> на склад')
```
{"language": ["ru", "en", "be", "bg", "uk", "ro", "kz", "tg", "tat", "sv", "sl", "sr", "uz", "es", "fi"], "license": "mit", "tags": ["russian", "fill-mask", "pretraining", "embeddings", "masked-lm"], "widget": [{"text": "<mask> \u043d\u0430 \u0441\u043a\u043b\u0430\u0434"}]}
RabotaRu/HRBert-mini
null
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "russian", "pretraining", "embeddings", "masked-lm", "ru", "en", "be", "bg", "uk", "ro", "kz", "tg", "tat", "sv", "sl", "sr", "uz", "es", "fi", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
### T5 for question-generation

This is a [t5-base](https://arxiv.org/abs/1910.10683) model trained for the answer-aware question generation task. The answer spans are highlighted within the text with special highlight tokens.

You can play with the model using the inference API: just highlight the answer spans with `<hl>` tokens and end the text with `</s>`. For example:

`<hl> 42 <hl> is the answer to life, the universe and everything. </s>`

For more details, see [this](https://github.com/patil-suraj/question_generation) repo.
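A minimal generation sketch using the highlight format above (generation settings are illustrative; the `<hl>` tokens are assumed to be part of the repo's tokenizer):

```python
# Minimal sketch: generate a question for a highlighted answer span.
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_id = "Rachneet/t5-base-qg-hl-squadv2"
tokenizer = T5Tokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

text = "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"
inputs = tokenizer(text, return_tensors="pt")
out = model.generate(**inputs, max_length=32, num_beams=4)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```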
{"license": "mit", "tags": ["question-generation"], "datasets": ["squad"], "widget": [{"text": "<hl> 42 <hl> is the answer to life, the universe and everything. </s>"}, {"text": "Python is a programming language. It is developed by <hl> Guido Van Rossum <hl>. </s>"}, {"text": "Although <hl> practicality <hl> beats purity </s>"}]}
Rachneet/t5-base-qg-hl-squadv2
null
[ "transformers", "pytorch", "jax", "t5", "text2text-generation", "question-generation", "dataset:squad", "arxiv:1910.10683", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Radella/quora_helpful_answer_classifier
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Radella/quora_helpful_answers_classifier
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Radella/quora_helpful_answers_detection
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Radhika0908/Yugasabot_blogs
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Radhika0908/blog
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Radhika0908/blogs
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
{}
RadhikaSatam/CovBert-radhika
null
[ "transformers", "pytorch", "jax", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# radical DialoGPT Model
{"tags": ["conversational"]}
Radicalkiddo/DialoGPT-small-Radical
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
# Model Trained Using AutoNLP

- Problem type: Summarization
- Model ID: 14502562

## Usage

You can use cURL to access this model:

```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP", "parameters":{"max_length":1000}}' https://api-inference.huggingface.co/models/Radvian/autonlp-indo_summarization-14502562
```
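A Python equivalent of the cURL call above can be sketched with `requests` (the endpoint and payload mirror the example; the library choice is an assumption):

```python
# Minimal sketch: call the hosted inference API with the same payload as the cURL example.
import requests

API_URL = "https://api-inference.huggingface.co/models/Radvian/autonlp-indo_summarization-14502562"
headers = {"Authorization": "Bearer YOUR_HUGGINGFACE_API_KEY"}
payload = {"inputs": "I love AutoNLP", "parameters": {"max_length": 1000}}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())
```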
{"language": "unk", "tags": "autonlp", "datasets": ["Radvian/autonlp-data-indo_summarization"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}]}
Radvian/t5_liputan6_finetuned_indonesia_summarization
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autonlp", "unk", "dataset:Radvian/autonlp-data-indo_summarization", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00