| Column | Type | Details |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0–18.3M |
| metadata | stringlengths | 2–1.07B |
| id | stringlengths | 5–122 |
| last_modified | null | |
| tags | listlengths | 1–1.84k |
| sha | null | |
| created_at | stringlengths | 25–25 |
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0626 - Precision: 0.9193 - Recall: 0.9311 - F1: 0.9251 - Accuracy: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2393 | 1.0 | 878 | 0.0732 | 0.9052 | 0.9207 | 0.9129 | 0.9801 | | 0.0569 | 2.0 | 1756 | 0.0626 | 0.9193 | 0.9311 | 0.9251 | 0.9824 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.0+cu111 - Datasets 1.18.0 - Tokenizers 0.10.3
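The card stops at "More information needed", so here is a minimal inference sketch (assuming the standard `transformers` pipeline API; CoNLL-2003 uses PER/ORG/LOC/MISC entity labels):

```python
from transformers import pipeline

# token-classification pipeline over the fine-tuned checkpoint
ner = pipeline(
    "token-classification",
    model="Vibharkchauhan/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```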
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert-base-uncased-finetuned-ner", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9192622045504749, "name": "Precision"}, {"type": "recall", "value": 0.9310884886452623, "name": "Recall"}, {"type": "f1", "value": 0.9251375534930251, "name": "F1"}, {"type": "accuracy", "value": 0.9823820039080496, "name": "Accuracy"}]}]}]}
Vibharkchauhan/distilbert-base-uncased-finetuned-ner
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vibharkchauhan/token-classification
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vicent/fasttext_model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vicent/model_text
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Victayria/test_repo
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
VictorSanh/bart-base-finetuned-xsum
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# RoBERTa-base-finetuned-yelp-polarity This is a [RoBERTa-base](https://huggingface.co/roberta-base) checkpoint fine-tuned on binary sentiment classification from [Yelp polarity](https://huggingface.co/nlp/viewer/?dataset=yelp_polarity). It gets **98.08%** accuracy on the test set. ## Hyper-parameters We used the following hyper-parameters to train the model on one GPU: ```python num_train_epochs = 2.0 learning_rate = 1e-05 weight_decay = 0.0 adam_epsilon = 1e-08 max_grad_norm = 1.0 per_device_train_batch_size = 32 gradient_accumulation_steps = 1 warmup_steps = 3500 seed = 42 ```
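A minimal inference sketch to complement the hyper-parameters above (the LABEL_0 = negative / LABEL_1 = positive head ordering is an assumption, not stated on the card):

```python
from transformers import pipeline

# binary Yelp-polarity sentiment classifier
classifier = pipeline(
    "text-classification",
    model="VictorSanh/roberta-base-finetuned-yelp-polarity",
)
print(classifier("The food was amazing and the staff was friendly."))
```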
{"language": "en", "datasets": ["yelp_polarity"]}
VictorSanh/roberta-base-finetuned-yelp-polarity
null
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "text-classification", "en", "dataset:yelp_polarity", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
VictoriaLoppyisamazing8787/Victoria
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-J 6B on Vietnamese News Details will be available soon. For more information, please contact [email protected] (Dương) / [email protected] (Thành) / [email protected] (Bình). ### How to use ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("VietAI/gpt-j-6B-vietnamese-news") model = AutoModelForCausalLM.from_pretrained("VietAI/gpt-j-6B-vietnamese-news", low_cpu_mem_usage=True) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) prompt = "Tiềm năng của trí tuệ nhân tạo" # your input sentence max_length = 100 # maximum number of tokens to generate input_ids = tokenizer(prompt, return_tensors="pt")['input_ids'].to(device) gen_tokens = model.generate( input_ids, max_length=max_length, do_sample=True, temperature=0.9, top_k=20, ) gen_text = tokenizer.batch_decode(gen_tokens)[0] print(gen_text) ```
{"language": ["vi"], "tags": ["pytorch", "causal-lm", "text-generation"]}
VietAI/gpt-j-6B-vietnamese-news
null
[ "transformers", "pytorch", "gptj", "text-generation", "causal-lm", "vi", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# GPT-Neo 1.3B on Vietnamese News Details will be available soon. For more information, please contact [email protected] (Dương) / [email protected] (Thành) / [email protected] (Bình). ### How to use ```python import torch from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("VietAI/gpt-neo-1.3B-vietnamese-news") model = AutoModelForCausalLM.from_pretrained("VietAI/gpt-neo-1.3B-vietnamese-news", low_cpu_mem_usage=True) device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) prompt = "Tiềm năng của trí tuệ nhân tạo" # your input sentence max_length = 100 # maximum number of tokens to generate input_ids = tokenizer(prompt, return_tensors="pt")['input_ids'].to(device) gen_tokens = model.generate( input_ids, max_length=max_length, do_sample=True, temperature=0.9, top_k=20, ) gen_text = tokenizer.batch_decode(gen_tokens)[0] print(gen_text) ```
{"language": ["vi"], "tags": ["pytorch", "causal-lm", "gpt"]}
VietAI/gpt-neo-1.3B-vietnamese-news
null
[ "transformers", "pytorch", "gpt_neo", "text-generation", "causal-lm", "gpt", "vi", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vigneswaran978999/adhu
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vikkyholla/Q
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vikkyholla/W
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vikram/test_transformers
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# Norwegian Electra ![Image of norwegian electra](https://i.imgur.com/QqSEC5I.png) Trained on OSCAR + Wikipedia + OpenSubtitles + some other data I had, with the awesome power of TPUs (v3-8). Use with caution. I have no downstream tasks in Norwegian to test on, so I have no idea of its performance yet. # Model ## Electra: Pre-training Text Encoders as Discriminators Rather Than Generators Kevin Clark and Minh-Thang Luong and Quoc V. Le and Christopher D. Manning - https://openreview.net/pdf?id=r1xMH1BtvB - https://github.com/google-research/electra # Acknowledgments ### TensorFlow Research Cloud Research supported with Cloud TPUs from Google's TensorFlow Research Cloud (TFRC). Thanks for providing access to the TFRC ❤️ - https://www.tensorflow.org/tfrc #### OSCAR corpus - https://oscar-corpus.com/ #### OPUS - http://opus.nlpl.eu/ - http://www.opensubtitles.org/
{"language": false, "thumbnail": "https://i.imgur.com/QqSEC5I.png"}
ViktorAlm/electra-base-norwegian-uncased-discriminator
null
[ "transformers", "pytorch", "tf", "electra", "pretraining", "no", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
# Albumin-15s ## Model description This is a version of [Albert-base-v2](https://huggingface.co/albert-base-v2) for comparing 15-nucleotide-long aptamers to determine which one is more affine to the target protein Albumin. The Albert model was pretrained on English; natural language has many similarities with proteins and aptamers, which is why we fine-tuned it to help the model learn embedded positioning for aptamers and distinguish sequences better. More information can be found in our [github]() and our iGEM [wiki](). ## Intended uses & limitations You can use the fine-tuned model to predict, from a masked aptamer-pair sequence, which aptamer is more affine to the target protein Albumin, but it is mostly intended to be fine-tuned again on aptamers of a different length or on expanded datasets. #### How to use This model can be used to predict compared affinity with a dataset preprocessing function which encodes the specific type of data (Sequence1, Sequence2, Label), where Label indicates in binary whether Sequence1 is more affine to the target protein Albumin. ```python from transformers import AutoTokenizer, BertModel mname = "Vilnius-Lithuania-iGEM/Albumin" tokenizer = AutoTokenizer.from_pretrained(mname) model = BertModel.from_pretrained(mname) ``` To predict batches of sequences you have to employ the custom functions shown in [git/prediction.ipynb]() #### Limitations and bias The fine-tuned Albert model seems limited to about 90% accuracy at predicting which aptamer is more suitable for the target protein; Albert-large or an immense dataset of 15-mer aptamers could add a few percent, but the extrapolation case has not been studied, and we cannot confirm this model is state-of-the-art when one of the aptamers is exceptionally good (has almost maximum entropy to the Albumin). ## Eval results - accuracy: 0.8601 - precision: 0.8515 - recall: 0.8725 - f1: 0.8618 - roc_auc: 0.9388 The scores were calculated using sklearn.metrics.
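As a short sketch of what a single forward pass looks like (the 15-mer pair below is hypothetical, and mean-pooling the hidden states is an assumption; the actual batch-prediction functions live in the repository's prediction notebook):

```python
import torch
from transformers import AutoTokenizer, BertModel

mname = "Vilnius-Lithuania-iGEM/Albumin"
tokenizer = AutoTokenizer.from_pretrained(mname)
model = BertModel.from_pretrained(mname)

# hypothetical aptamer pair in the card's (Sequence1, Sequence2) format
seq1, seq2 = "ACGTACGTACGTACG", "TGCATGCATGCATGC"
inputs = tokenizer(seq1, seq2, return_tensors="pt")
with torch.no_grad():
    embedding = model(**inputs).last_hidden_state.mean(dim=1)  # pooled pair embedding
print(embedding.shape)
```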
{}
Vilnius-Lithuania-iGEM/Albumin
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vin1412/abc
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
VincentButterfield/DialoGPT-small-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
pytorch
This model was developed for KARA. This model is: - A sentiment-analysis tool for HR survey comments - Trained to be used in ENGLISH (comments must be translated) - Specialized for comments between 10 and 512 characters This model is not: - Usable to detect hate speech or a suicide letter Labels: - Label_0 = Negative - Label_1 = Positive version 1.1.0 Performance on the HRM dataset: 91.5% accuracy
{"language": ["en"], "library_name": "pytorch", "tags": ["sentiment-analysis"], "metrics": ["negative", "positive"], "widget": [{"text": "Thank you for listening to the recommendations of the telephone team for teleworking. we have a strong expertise in this field and accurate listening to Our management!!!!", "example_title": "Exemple positif"}, {"text": "working conditions and wages are less than average more part of the time it is not a hierarchical system Our opinion counts", "example_title": "Exemple n\u00e9gatif"}]}
VincentC12/sentiment_analysis_kara
null
[ "pytorch", "distilbert", "sentiment-analysis", "en", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vinhngx/norwegian-roberta-base
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Violeta/ArmBERTa
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
Violeta/ArmBERTa_Model
null
[ "transformers", "pytorch", "jax", "roberta", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
Viona/agriculture-sentence-transformer
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.7809 - Matthews Correlation: 0.5286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5299 | 1.0 | 535 | 0.5040 | 0.4383 | | 0.3472 | 2.0 | 1070 | 0.5284 | 0.4911 | | 0.2333 | 3.0 | 1605 | 0.6633 | 0.5091 | | 0.1733 | 4.0 | 2140 | 0.7809 | 0.5286 | | 0.1255 | 5.0 | 2675 | 0.8894 | 0.5282 | ### Framework versions - Transformers 4.10.2 - Pytorch 1.9.0+cu102 - Datasets 1.12.1 - Tokenizers 0.10.3
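The card again ends at "More information needed"; a minimal inference sketch (assuming the usual CoLA head ordering, where LABEL_1 marks an acceptable sentence):

```python
from transformers import pipeline

# grammatical-acceptability classifier fine-tuned on GLUE CoLA
cola = pipeline("text-classification", model="VirenS13117/distilbert-base-uncased-finetuned-cola")
print(cola("The book was written by John."))  # expected: acceptable (LABEL_1, assumed)
print(cola("Book the John by was written."))  # expected: unacceptable (LABEL_0, assumed)
```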
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["glue"], "metrics": ["matthews_correlation"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "cola"}, "metrics": [{"type": "matthews_correlation", "value": 0.5286324175580216, "name": "Matthews Correlation"}]}]}]}
VirenS13117/distilbert-base-uncased-finetuned-cola
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
VirenS13117/distilgpt2-finetuned-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
VirenS13117/distilroberta-base-finetuned-wikitext2
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
VishalArun/DialoGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vishesh/Paimon-small
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{"license": "afl-3.0"}
Vishnu393831/VICTORY
null
[ "license:afl-3.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vishva/UNIBOT
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vishva/UNIFAQ-T5
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vishva/unibot-faq
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
null
# VAN-Base VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network). ## Description While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. ## Evaluation Results | Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download | | :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: | | VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) | | VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) | | VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) | | VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) | ### BibTeX entry and citation info ```bibtex @article{guo2022visual, title={Visual Attention Network}, author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min}, journal={arXiv preprint arXiv:2202.09741}, year={2022} } ```
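The large kernel attention module the description names is compact enough to sketch. The PyTorch module below follows the paper's decomposition (a 5×5 depthwise convolution for local context, a 7×7 depthwise convolution with dilation 3 for long range, and a 1×1 convolution for channel mixing, used to gate the input); it is a sketch, not the released checkpoint code:

```python
import torch
import torch.nn as nn

class LKA(nn.Module):
    """Large Kernel Attention sketch: decompose a large kernel into
    depthwise, depthwise-dilated, and pointwise convolutions, then use
    the result as an attention map over the input."""
    def __init__(self, dim: int):
        super().__init__()
        self.dw = nn.Conv2d(dim, dim, 5, padding=2, groups=dim)  # local spatial context
        self.dw_dilated = nn.Conv2d(dim, dim, 7, padding=9, groups=dim, dilation=3)  # long-range context
        self.pw = nn.Conv2d(dim, dim, 1)  # channel mixing (channel adaptability)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attn = self.pw(self.dw_dilated(self.dw(x)))
        return attn * x  # element-wise gating: spatial and channel attention

x = torch.randn(1, 64, 56, 56)
print(LKA(64)(x).shape)  # torch.Size([1, 64, 56, 56])
```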
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
Visual-Attention-Network/VAN-Base-original
null
[ "image-classification", "dataset:imagenet", "arxiv:2202.09741", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
null
# VAN-Large VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network). ## Description While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. ## Evaluation Results | Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download | | :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: | | VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) | | VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) | | VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) | | VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) | ### BibTeX entry and citation info ```bibtex @article{guo2022visual, title={Visual Attention Network}, author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min}, journal={arXiv preprint arXiv:2202.09741}, year={2022} } ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
Visual-Attention-Network/VAN-Large-original
null
[ "image-classification", "dataset:imagenet", "arxiv:2202.09741", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
null
# VAN-Small VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network). ## Description While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. ## Evaluation Results | Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download | | :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: | | VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) | | VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) | | VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) | | VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) | ### BibTeX entry and citation info ```bibtex @article{guo2022visual, title={Visual Attention Network}, author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min}, journal={arXiv preprint arXiv:2202.09741}, year={2022} } ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
Visual-Attention-Network/VAN-Small-original
null
[ "image-classification", "dataset:imagenet", "arxiv:2202.09741", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
image-classification
null
# VAN-Tiny VAN is trained on ImageNet-1k (1 million images, 1,000 classes) at resolution 224x224. It was first introduced in the paper [Visual Attention Network](https://arxiv.org/abs/2202.09741) and first released [here](https://github.com/Visual-Attention-Network). ## Description While originally designed for natural language processing (NLP) tasks, the self-attention mechanism has recently taken various computer vision areas by storm. However, the 2D nature of images brings three challenges for applying self-attention in computer vision. (1) Treating images as 1D sequences neglects their 2D structures. (2) The quadratic complexity is too expensive for high-resolution images. (3) It only captures spatial adaptability but ignores channel adaptability. In this paper, we propose a novel large kernel attention (LKA) module to enable self-adaptive and long-range correlations in self-attention while avoiding the above issues. We further introduce a novel neural network based on LKA, namely Visual Attention Network (VAN). While extremely simple and efficient, VAN outperforms the state-of-the-art vision transformers (ViTs) and convolutional neural networks (CNNs) by a large margin in extensive experiments, including image classification, object detection, semantic segmentation, instance segmentation, etc. ## Evaluation Results | Model | #Params(M) | GFLOPs | Top1 Acc(%) | Download | | :-------- | :--------: | :----: | :---------: | :----------------------------------------------------------: | | VAN-Tiny | 4.1 | 0.9 | 75.4 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Tiny) | | VAN-Small | 13.9 | 2.5 | 81.1 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Small) | | VAN-Base | 26.6 | 5.0 | 82.8 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Base) | | VAN-Large | 44.8 | 9.0 | 83.9 |[Hugging Face 🤗](https://huggingface.co/Visual-Attention-Network/VAN-Large) | ### BibTeX entry and citation info ```bibtex @article{guo2022visual, title={Visual Attention Network}, author={Guo, Meng-Hao and Lu, Cheng-Ze and Liu, Zheng-Ning and Cheng, Ming-Ming and Hu, Shi-Min}, journal={arXiv preprint arXiv:2202.09741}, year={2022} } ```
{"license": "apache-2.0", "tags": ["image-classification"], "datasets": ["imagenet"]}
Visual-Attention-Network/VAN-Tiny-original
null
[ "image-classification", "dataset:imagenet", "arxiv:2202.09741", "license:apache-2.0", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Rick Sanchez DialoGPT Model
{"tags": ["conversational"]}
Vitafeu/DialoGPT-medium-ricksanchez
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vitortrindader/DialogGPT-small-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
Vivek/GPT2_GSM8k
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
Vivek/checkpoints
null
[ "transformers", "jax", "gpt2", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
This is to test the common-sense reasoning of a GPT-2 model: to assess how intelligent it is and how well it adapts to these datasets, which require not only big models but also a little common sense.
{}
Vivek/flax-gpt2-common-sense-reasoning
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
This is to test the common-sense reasoning of a GPT-2 model: to assess how intelligent it is and how well it adapts to these datasets, which require not only big models but also a little common sense.
{}
Vivek/gpt2-common-sense-reasoning
null
[ "transformers", "jax", "tensorboard", "gpt2", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
Vivek/gptneo_cose
null
[ "transformers", "jax", "tensorboard", "gpt_neo", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
Vivek/gptneo_cosmos
null
[ "transformers", "jax", "tensorboard", "gpt_neo", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
Vivek/gptneo_hellaswag
null
[ "transformers", "jax", "tensorboard", "gpt_neo", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
Vivek/gptneo_piqa
null
[ "transformers", "jax", "tensorboard", "gpt_neo", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
Vivek/gptneo_storycloze
null
[ "transformers", "jax", "tensorboard", "gpt_neo", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
Vivek/gptneo_winogrande
null
[ "transformers", "jax", "tensorboard", "gpt_neo", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
{}
VlakoResker/wav2vec2-large-xls-r-300m-ru-en
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
Vlasta/CDNA_bert_6
null
[ "transformers", "pytorch", "bert", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
sentence-similarity
transformers
#### Table of contents 1. [Introduction](#introduction) 2. [Pretrained models](#models) 3. [Using SimeCSE_Vietnamese with `sentence-transformers`](#sentence-transformers) - [Installation](#install1) - [Example usage](#usage1) 4. [Using SimeCSE_Vietnamese with `transformers`](#transformers) - [Installation](#install2) - [Example usage](#usage2) # <a name="introduction"></a> SimeCSE_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese Pre-trained SimeCSE_Vietnamese models are the state of the art for Vietnamese sentence embeddings: - The SimeCSE_Vietnamese pre-training approach is based on [SimCSE](https://arxiv.org/abs/2104.08821), optimizing the pre-training procedure for more robust performance. - SimeCSE_Vietnamese encodes input sentences using a pre-trained language model such as [PhoBert](https://www.aclweb.org/anthology/2020.findings-emnlp.92/). - SimeCSE_Vietnamese works with both unlabeled and labeled data. ## Pre-trained models <a name="models"></a> Model | #params | Arch. ---|---|--- [`VoVanPhuc/sup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) | 135M | base [`VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base) | 135M | base ## <a name="sentence-transformers"></a> Using SimeCSE_Vietnamese with `sentence-transformers` ### Installation <a name="install1"></a> - Install `sentence-transformers`: - `pip install -U sentence-transformers` - Install `pyvi` for word segmentation: - `pip install pyvi` ### Example usage <a name="usage1"></a> ```python from sentence_transformers import SentenceTransformer from pyvi.ViTokenizer import tokenize model = SentenceTransformer('VoVanPhuc/sup-SimCSE-VietNamese-phobert-base') sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.', 'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.', 'Bắc Giang tăng khả năng điều trị và xét nghiệm.', 'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.', 'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.', '20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.', 'Thái Lan thua giao hữu trước vòng loại World Cup.', 'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam', 'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.', 'Bắn chết người trong cuộc rượt đuổi trên sông.'] sentences = [tokenize(sentence) for sentence in sentences] embeddings = model.encode(sentences) ``` ## <a name="transformers"></a> Using SimeCSE_Vietnamese with `transformers` ### Installation <a name="install2"></a> - Install `transformers`: - `pip install -U transformers` - Install `pyvi` for word segmentation: - `pip install pyvi` ### Example usage <a name="usage2"></a> ```python import torch from transformers import AutoModel, AutoTokenizer from pyvi.ViTokenizer import tokenize PhobertTokenizer = AutoTokenizer.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base") model = AutoModel.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base") sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.', 'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.', 'Bắc Giang tăng khả năng điều trị và xét nghiệm.', 'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.', 'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.', '20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.', 'Thái Lan thua giao hữu trước vòng loại World Cup.', 'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam', 'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.', 'Bắn chết người trong cuộc rượt đuổi trên sông.'] sentences = [tokenize(sentence) for sentence in sentences] inputs = PhobertTokenizer(sentences, padding=True, truncation=True, return_tensors="pt") with torch.no_grad(): embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output ``` ## Quick Start [Open In Colab](https://colab.research.google.com/drive/12__EXJoQYHe9nhi4aXLTf9idtXT8yr7H?usp=sharing) ## Citation ```bibtex @article{gao2021simcse, title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings}, author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi}, journal={arXiv preprint arXiv:2104.08821}, year={2021} } @inproceedings{phobert, title = {{PhoBERT: Pre-trained language models for Vietnamese}}, author = {Dat Quoc Nguyen and Anh Tuan Nguyen}, booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020}, year = {2020}, pages = {1037--1042} } ```
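The card shows how to obtain embeddings but not how to score them; here is a short follow-up sketch using `sentence_transformers.util` (the sentences are taken from the example list above, and the expectation that the futsal headline scores higher is an assumption):

```python
from sentence_transformers import SentenceTransformer, util
from pyvi.ViTokenizer import tokenize

model = SentenceTransformer('VoVanPhuc/sup-SimCSE-VietNamese-phobert-base')

query = model.encode(tokenize('HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.'))
corpus = model.encode([
    tokenize('Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam'),
    tokenize('Bắc Giang tăng khả năng điều trị và xét nghiệm.'),
])
print(util.cos_sim(query, corpus))  # the futsal headline should score higher
```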
{"language": ["vi"], "pipeline_tag": "sentence-similarity"}
VoVanPhuc/sup-SimCSE-VietNamese-phobert-base
null
[ "transformers", "pytorch", "roberta", "sentence-similarity", "vi", "arxiv:2104.08821", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
#### Table of contents 1. [Introduction](#introduction) 2. [Pretrained models](#models) 3. [Using SimeCSE_Vietnamese with `sentence-transformers`](#sentence-transformers) - [Installation](#install1) - [Example usage](#usage1) 4. [Using SimeCSE_Vietnamese with `transformers`](#transformers) - [Installation](#install2) - [Example usage](#usage2) # <a name="introduction"></a> SimeCSE_Vietnamese: Simple Contrastive Learning of Sentence Embeddings with Vietnamese Pre-trained SimeCSE_Vietnamese models are the state of the art for Vietnamese sentence embeddings: - The SimeCSE_Vietnamese pre-training approach is based on [SimCSE](https://arxiv.org/abs/2104.08821), optimizing the pre-training procedure for more robust performance. - SimeCSE_Vietnamese encodes input sentences using a pre-trained language model such as [PhoBert](https://www.aclweb.org/anthology/2020.findings-emnlp.92/). - SimeCSE_Vietnamese works with both unlabeled and labeled data. ## Pre-trained models <a name="models"></a> Model | #params | Arch. ---|---|--- [`VoVanPhuc/sup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/sup-SimCSE-VietNamese-phobert-base) | 135M | base [`VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base`](https://huggingface.co/VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base) | 135M | base ## <a name="sentence-transformers"></a> Using SimeCSE_Vietnamese with `sentence-transformers` ### Installation <a name="install1"></a> - Install `sentence-transformers`: - `pip install -U sentence-transformers` - Install `pyvi` for word segmentation: - `pip install pyvi` ### Example usage <a name="usage1"></a> ```python from sentence_transformers import SentenceTransformer from pyvi.ViTokenizer import tokenize model = SentenceTransformer('VoVanPhuc/sup-SimCSE-VietNamese-phobert-base') sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.', 'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.', 'Bắc Giang tăng khả năng điều trị và xét nghiệm.', 'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.', 'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.', '20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.', 'Thái Lan thua giao hữu trước vòng loại World Cup.', 'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam', 'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.', 'Bắn chết người trong cuộc rượt đuổi trên sông.'] sentences = [tokenize(sentence) for sentence in sentences] embeddings = model.encode(sentences) ``` ## <a name="transformers"></a> Using SimeCSE_Vietnamese with `transformers` ### Installation <a name="install2"></a> - Install `transformers`: - `pip install -U transformers` - Install `pyvi` for word segmentation: - `pip install pyvi` ### Example usage <a name="usage2"></a> ```python import torch from transformers import AutoModel, AutoTokenizer from pyvi.ViTokenizer import tokenize PhobertTokenizer = AutoTokenizer.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base") model = AutoModel.from_pretrained("VoVanPhuc/sup-SimCSE-VietNamese-phobert-base") sentences = ['Kẻ đánh bom đinh tồi tệ nhất nước Anh.', 'Nghệ sĩ làm thiện nguyện - minh bạch là việc cấp thiết.', 'Bắc Giang tăng khả năng điều trị và xét nghiệm.', 'HLV futsal Việt Nam tiết lộ lý do hạ Lebanon.', 'việc quan trọng khi kêu gọi quyên góp từ thiện là phải minh bạch, giải ngân kịp thời.', '20% bệnh nhân Covid-19 có thể nhanh chóng trở nặng.', 'Thái Lan thua giao hữu trước vòng loại World Cup.', 'Cựu tuyển thủ Nguyễn Bảo Quân: May mắn ủng hộ futsal Việt Nam', 'Chủ ki-ốt bị đâm chết trong chợ đầu mối lớn nhất Thanh Hoá.', 'Bắn chết người trong cuộc rượt đuổi trên sông.'] sentences = [tokenize(sentence) for sentence in sentences] inputs = PhobertTokenizer(sentences, padding=True, truncation=True, return_tensors="pt") with torch.no_grad(): embeddings = model(**inputs, output_hidden_states=True, return_dict=True).pooler_output ``` ## Quick Start [Open In Colab](https://colab.research.google.com/drive/12__EXJoQYHe9nhi4aXLTf9idtXT8yr7H?usp=sharing) ## Citation ```bibtex @article{gao2021simcse, title={{SimCSE}: Simple Contrastive Learning of Sentence Embeddings}, author={Gao, Tianyu and Yao, Xingcheng and Chen, Danqi}, journal={arXiv preprint arXiv:2104.08821}, year={2021} } @inproceedings{phobert, title = {{PhoBERT: Pre-trained language models for Vietnamese}}, author = {Dat Quoc Nguyen and Anh Tuan Nguyen}, booktitle = {Findings of the Association for Computational Linguistics: EMNLP 2020}, year = {2020}, pages = {1037--1042} } ```
{}
VoVanPhuc/unsup-SimCSE-VietNamese-phobert-base
null
[ "transformers", "pytorch", "roberta", "arxiv:2104.08821", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Voldemort/Sarcasm-Detection
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Cortana DialoGPT Model
{"tags": ["conversational"]}
VulcanBin/DialoGPT-small-cortana
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vuong/wav2vec2-large-xls-r-300m-turkish-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vuong/wav2vec2-large-xls-r-300m-vi-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vuong/wav2vec2-large-xls-r-300m-vi_vi-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vuong/wav2vec2-large-xls-r-300m-vii-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vuong/wav2vec2-large-xls-r-300m-vina-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Vuong/wav2vec2-large-xls-r-300m-vivi-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WE/HA
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# Deberta-Chinese This project pretrains Microsoft's open-source DeBERTa model on Chinese. We open-source this model to give others more choices of pretrained language models. The model was pretrained on the WuDaoCorpora corpus. WuDaoCorpora is a large-scale, high-quality dataset built by the Beijing Academy of Artificial Intelligence (BAAI) to support research for the "WuDao" large-model project. Pretraining used methods such as whole-word masking (WWM) and n-gram MLM. | Pretrained model | Learning rate | Batch size | Hardware | Corpus | Time | Optimizer | | --------------------- | ------ | --------- | ------ | ------ | ---- | ------ | | Deberta-Chinese-Large | 1e-5 | 512 | 2*3090 | 200G | 14 days | AdamW | ### Loading and usage Built on huggingface-transformers: ``` from transformers import BertTokenizer, AutoModel tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large") model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large") ``` #### Note: load the Chinese vocabulary with BertTokenizer
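A minimal usage sketch following the card's loading note (the example sentence and mean-pooling are assumptions, not from the card):

```python
import torch
from transformers import BertTokenizer, AutoModel

tokenizer = BertTokenizer.from_pretrained("WENGSYX/Deberta-Chinese-Large")
model = AutoModel.from_pretrained("WENGSYX/Deberta-Chinese-Large")

inputs = tokenizer("今天天气真好", return_tensors="pt")  # "The weather is great today"
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, hidden_size)
print(hidden.mean(dim=1).shape)  # mean-pooled sentence vector
```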
{}
WENGSYX/Deberta-Chinese-Large
null
[ "transformers", "pytorch", "deberta", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
# Multilingual SimCSE #### A contrastive learning model using parallel language pair training ##### By using parallel sentence pairs in different languages, the text is mapped to the same vector space for pre-training, similar to SimCSE ##### First, the [mDeBERTa](https://huggingface.co/microsoft/mdeberta-v3-base) model is used to load the pre-trained parameters; pre-training is then carried out on the [CCMatrix](https://github.com/facebookresearch/LASER/tree/main/tasks/CCMatrix) dataset. ##### Training data: 100 million parallel pairs ##### Training hardware: 4 * 3090 ## Pipeline Code ``` import torch.nn.functional as F from transformers import AutoModel,AutoTokenizer model = AutoModel.from_pretrained('WENGSYX/Multilingual_SimCSE') tokenizer = AutoTokenizer.from_pretrained('WENGSYX/Multilingual_SimCSE') word1 = tokenizer('Hello,world.',return_tensors='pt') word2 = tokenizer('你好,世界',return_tensors='pt') out1 = model(**word1).last_hidden_state.mean(1) out2 = model(**word2).last_hidden_state.mean(1) print(F.cosine_similarity(out1,out2)) ---------------------------------------------------- tensor([0.8758], grad_fn=<DivBackward0>) ``` ## Train Code ``` import torch import torch.nn.functional as F from transformers import AutoModel,AutoTokenizer,AdamW model = AutoModel.from_pretrained('WENGSYX/Multilingual_SimCSE') tokenizer = AutoTokenizer.from_pretrained('WENGSYX/Multilingual_SimCSE') device = 'cuda' model.to(device) optimizer = AdamW(model.parameters(),lr=1e-5) def compute_loss(y_pred, t=0.05, device="cuda"): idxs = torch.arange(0, y_pred.shape[0], device=device) y_true = idxs + 1 - idxs % 2 * 2 similarities = F.cosine_similarity(y_pred.unsqueeze(1), y_pred.unsqueeze(0), dim=2) similarities = similarities - torch.eye(y_pred.shape[0], device=device) * 1e12 similarities = similarities / t loss = F.cross_entropy(similarities, y_true) return torch.mean(loss) wordlist = [['Hello,world','你好,世界'],['Pensa che il bianco rappresenti la purezza.','Он думает, что белые символизируют чистоту.']] input_ids, attention_mask, token_type_ids = [], [], [] for x in wordlist: text1 = tokenizer(x[0], padding='max_length', truncation=True, max_length=512) input_ids.append(text1['input_ids']) attention_mask.append(text1['attention_mask']) text2 = tokenizer(x[1], padding='max_length', truncation=True, max_length=512) input_ids.append(text2['input_ids']) attention_mask.append(text2['attention_mask']) input_ids = torch.tensor(input_ids,device=device) attention_mask = torch.tensor(attention_mask,device=device) output = model(input_ids=input_ids,attention_mask=attention_mask) output = output.last_hidden_state.mean(1) loss = compute_loss(output) loss.backward() optimizer.step() optimizer.zero_grad() ```
{}
WENGSYX/Multilingual_SimCSE
null
[ "transformers", "pytorch", "safetensors", "deberta-v2", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WKQ/WKQ
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
"Hello"
{}
WSS/wav2vec2-large-xlsr-53-vietnamese
null
[ "transformers", "pytorch", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wafaa/arabic_mod
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wafaa/general_arabic_model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wang123/distilbert-base-uncased-finetuned-emotion
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
https://github.com/zejunwang1/bert4vec
{}
WangZeJun/roformer-sim-base-chinese
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
https://github.com/zejunwang1/bert4vec
{}
WangZeJun/roformer-sim-small-chinese
null
[ "transformers", "pytorch", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
https://github.com/zejunwang1/bert4vec
{}
WangZeJun/simbert-base-chinese
null
[ "transformers", "pytorch", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Warmcandy/DialoGPT-small-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Rick Sanchez DialoGPT Model
{"tags": ["conversational"]}
WarrenK-Design/DialoGPT-small-Rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
{}
Wasabi42/Joker_Model
null
[ "transformers", "pytorch", "gpt2", "text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wasabi42/my-new-shiny-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wasabi42/new-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
Wataru/T5-base-ja-open2ch-dialogue
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
Wataru/sentence-roberta-tiny
null
[ "transformers", "pytorch", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Watlo/DialoGPT-small-harrypotter
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
Testing a new model
{}
WayScriptDerrick/SampleModel
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WayScriptDerrick/TestingModel
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wayz/My
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
Weelz/Paraphraser
null
[ "transformers", "pytorch", "t5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Weihan/asd
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Weihan/electra
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
{}
Weihan/electra_tokenizer
null
[ "transformers", "electra", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
Weipeng/dummy-model
null
[ "transformers", "pytorch", "camembert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
multiple-choice
transformers
{}
Weiqin/roberta-large-finetuned-race-roberta
null
[ "transformers", "pytorch", "tensorboard", "roberta", "multiple-choice", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Weitung/distilbert-base-uncased-finetuned-cola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# WellcomeBertMesh WellcomeBertMesh was built by the data science team at the Wellcome Trust to tag biomedical grants with Medical Subject Headings ([MeSH](https://www.nlm.nih.gov/mesh/meshhome.html)). Although developed with research grants in mind, it should be applicable to any biomedical text close to the domain it was trained on, namely abstracts from biomedical publications. # Model description The model is inspired by [BertMesh](https://pubmed.ncbi.nlm.nih.gov/32976559/), which is trained on the full text of biomedical publications and uses BioBert as its pretrained model. WellcomeBertMesh uses the latest state-of-the-art model in the biomedical domain, [PubMedBert](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract) from Microsoft, and attaches a multilabel attention head, which essentially allows the model to pay attention to different tokens per label when deciding whether that label applies. We train the model using data from the [BioASQ](http://bioasq.org) competition, which consists of abstracts from PubMed publications. We use 2016-2019 data for training and 2020-2021 for testing, which gives us ~2.5M publications to train on and 220K to test on, out of a total of 14M publications. It takes 4 days to train WellcomeBertMesh on 8 Nvidia P100 GPUs. The model achieves 63% micro f1 with a 0.5 threshold for all labels. The code for developing the model is open source and can be found at https://github.com/wellcometrust/grants_tagger # How to use ⚠️ You need transformers 4.17+ for the example to work due to its recent support for custom models. You can use the model straight from the hub, but because it contains a custom forward function (due to the multilabel attention head) you have to pass `trust_remote_code=True`. You can access the probabilities for all labels by omitting `return_labels=True`. ```python from transformers import AutoModel, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( "Wellcome/WellcomeBertMesh" ) model = AutoModel.from_pretrained( "Wellcome/WellcomeBertMesh", trust_remote_code=True ) text = "This grant is about malaria and not about HIV." inputs = tokenizer([text], padding="max_length") labels = model(**inputs, return_labels=True) print(labels) ``` You can inspect the model code if you navigate to the files and see `model.py`.
{"license": "apache-2.0", "pipeline_tag": "text-classification"}
Wellcome/WellcomeBertMesh
null
[ "transformers", "pytorch", "bert", "feature-extraction", "text-classification", "custom_code", "license:apache-2.0", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0584 - Precision: 0.9286 - Recall: 0.9475 - F1: 0.9379 - Accuracy: 0.9859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2183 | 1.0 | 878 | 0.0753 | 0.9087 | 0.9291 | 0.9188 | 0.9800 | | 0.0462 | 2.0 | 1756 | 0.0614 | 0.9329 | 0.9470 | 0.9399 | 0.9858 | | 0.0244 | 3.0 | 2634 | 0.0584 | 0.9286 | 0.9475 | 0.9379 | 0.9859 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.8.2+cu111 - Datasets 1.15.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["conll2003"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-finetuned-ner1", "results": [{"task": {"type": "token-classification", "name": "Token Classification"}, "dataset": {"name": "conll2003", "type": "conll2003", "args": "conll2003"}, "metrics": [{"type": "precision", "value": 0.9285832096321953, "name": "Precision"}, {"type": "recall", "value": 0.9474924267923258, "name": "Recall"}, {"type": "f1", "value": 0.9379425239483548, "name": "F1"}, {"type": "accuracy", "value": 0.9859009831047272, "name": "Accuracy"}]}]}]}
Wende/bert-finetuned-ner1
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Harry Potter DialoGPT Model
{"tags": ["conversational"]}
Wessel/DiabloGPT-medium-harrypotter
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# White's Bot
{"tags": ["conversational"]}
White/white-bot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Whitez/Chickenbot
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Twety DialoGPT Model
{"tags": ["conversational"]}
Whitez/DialoGPT-small-twety
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xlsr-arabic-demo-colab This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu102 - Datasets 1.13.3 - Tokenizers 0.10.3
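The card lists training details but no usage; a minimal transcription sketch (the audio file name is hypothetical, and wav2vec2 checkpoints expect 16 kHz mono input):

```python
from transformers import pipeline

# CTC speech recognition with the fine-tuned XLSR checkpoint
asr = pipeline("automatic-speech-recognition", model="Wiam/wav2vec2-large-xlsr-arabic-demo-colab")
print(asr("arabic_sample.wav")["text"])  # hypothetical 16 kHz mono recording
```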
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-arabic-demo-colab", "results": []}]}
Wiam/wav2vec2-large-xlsr-arabic-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
WidadAwane/AraBert_Hate_Speech_Detecter
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Wiedy/wav2vec2-large-xls-r-300m-tr-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
Wiirin/BERT-finetuned-PubMed-FoodCancer
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
Wiirin/DistilBERT-finetuned-PubMed-FoodCancer
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00