| Column | Type | Values / lengths |
|:--|:--|:--|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | n/a |
| tags | listlengths | 1 to 1.84k |
| sha | null | n/a |
| created_at | stringlengths | 25 to 25 |
token-classification
transformers
# sahajBERT Named Entity Recognition ## Model description [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) fine-tuned for NER using the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann). Named Entities predicted by the model: | Label id | Label | |:--------:|:----:| |0 |O| |1 |B-PER| |2 |I-PER| |3 |B-ORG| |4 |I-ORG| |5 |B-LOC| |6 |I-LOC| ## Intended uses & limitations #### How to use You can use this model directly with a pipeline for token classification (NER): ```python from transformers import AlbertForTokenClassification, TokenClassificationPipeline, PreTrainedTokenizerFast # Initialize tokenizer tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER") # Initialize model model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER") # Initialize pipeline pipeline = TokenClassificationPipeline(tokenizer=tokenizer, model=model) raw_text = "এই ইউনিয়নে ৩ টি মৌজা ও ১০ টি গ্রাম আছে ।" # Change me output = pipeline(raw_text) ``` #### Limitations and bias <!-- Provide examples of latent issues and potential remediations. --> WIP ## Training data The model was initialized with pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) at step 19519 and trained on the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann). ## Training procedure Coming soon! <!-- ```bibtex @inproceedings{..., year={2020} } ``` --> ## Eval results loss: 0.11714419722557068 accuracy: 0.9772286821705426 precision: 0.9585365853658536 recall: 0.9651277013752456 f1: 0.9618208516886931 ### BibTeX entry and citation info Coming soon! <!-- ```bibtex @inproceedings{..., year={2020} } ``` -->
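The token-classification snippet in the card above returns one prediction per token. A minimal sketch of inspecting that output (the key names follow the usual `TokenClassificationPipeline` format and may vary slightly across `transformers` versions):

```python
# Continues from the snippet above: `output` is a list of per-token dicts.
for token_pred in output:
    print(f'{token_pred["word"]}\t{token_pred["entity"]}\t{token_pred["score"]:.3f}')
```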
{"language": "bn", "license": "apache-2.0", "tags": ["collaborative", "bengali", "NER"], "datasets": "xtreme", "metrics": ["Loss", "Accuracy", "Precision", "Recall"]}
SaulLu/recreate-history
null
[ "transformers", "pytorch", "albert", "token-classification", "collaborative", "bengali", "NER", "bn", "dataset:xtreme", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
feature-extraction
transformers
# HTLM Pretraining Dataset: 23TB of simplified HTML extracted from Common Crawl dumps Paper: [HTLM: Hyper-Text Pre-Training and Prompting of Language Models](https://arxiv.org/abs/2107.06955) Authors: Armen Aghajanyan, Dmytro Okhonko, Mike Lewis, Mandar Joshi, Hu Xu, Gargi Ghosh, Luke Zettlemoyer Disclaimer: The team releasing HTLM did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Abstract We introduce HTLM, a hyper-text language model trained on a large-scale web crawl. Modeling hyper-text has a number of advantages: (1) it is easily gathered at scale, (2) it provides rich document-level and end-task-adjacent supervision (e.g. class and id attributes often encode document category information), and (3) it allows for new structured prompting that follows the established semantics of HTML (e.g. to do zero-shot summarization by infilling title tags for a webpage that contains the input text). We show that pretraining with a BART-style denoising loss directly on simplified HTML provides highly effective transfer for a wide range of end tasks and supervision levels. HTLM matches or exceeds the performance of comparably sized text-only LMs for zero-shot prompting and fine-tuning for classification benchmarks, while also setting new state-of-the-art performance levels for zero-shot summarization. We also find that hyper-text prompts provide more value to HTLM, in terms of data efficiency, than plain text prompts do for existing LMs, and that HTLM is highly effective at auto-prompting itself, by simply generating the most likely hyper-text formatting for any available training data. We will release all code and models to support future HTLM research. ## Usage For the moment, you can use it as-is for a classic mask-filling task (see the snippet below) or fine-tune it on a downstream task. ```python from transformers import BartTokenizer, BartForConditionalGeneration TXT = "My friends are <mask> but they eat too many carbs." model_name = "SaulLu/test-add-new-model" tokenizer = BartTokenizer.from_pretrained(model_name) model = BartForConditionalGeneration.from_pretrained(model_name) input_ids = tokenizer([TXT], return_tensors='pt')['input_ids'] logits = model(input_ids).logits masked_index = (input_ids[0] == tokenizer.mask_token_id).nonzero().item() probs = logits[0, masked_index].softmax(dim=0) values, predictions = probs.topk(5) tokenizer.decode(predictions).split() ```
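The abstract describes structured prompting that follows HTML semantics, e.g. zero-shot summarization by infilling `<title>` tags. A hedged sketch of that idea with the checkpoint above (the exact hyper-text prompt format used in the paper may differ; the HTML template below is an illustrative assumption):

```python
from transformers import BartTokenizer, BartForConditionalGeneration

model_name = "SaulLu/test-add-new-model"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Illustrative hyper-text prompt: ask the model to infill the <title> tag of a
# small HTML document that wraps the text we want summarized.
document = "The new library opened downtown with over 50,000 books and free study rooms for students."
prompt = f"<html><title><mask></title><body>{document}</body></html>"

input_ids = tokenizer([prompt], return_tensors="pt")["input_ids"]
summary_ids = model.generate(input_ids, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```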
{}
SaulLu/test-add-new-model
null
[ "transformers", "pytorch", "bart", "feature-extraction", "arxiv:2107.06955", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/test-model-2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
# sahajBERT News Category Classification ## Model description You can embed local or remote images using `![](...)` ## Intended uses & limitations #### How to use ```python # You can include sample code which will be formatted ``` #### Limitations and bias Provide examples of latent issues and potential remediations. ## Training data Describe the data you used to train the model. If you initialized it with pre-trained weights, add a link to the pre-trained model card or repository with description of the pre-training data. ## Training procedure ### Collaborative training procedure [here](https://huggingface.co/albertvillanova) ### Preprocessing, hardware used, hyperparameters... ## Eval results ### BibTeX entry and citation info ```bibtex @inproceedings{..., year={2020} } ```
{"language": [], "tags": [], "datasets": [], "metrics": []}
SaulLu/test-model
null
[ "transformers", "pytorch", "albert", "pretraining", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
test readme test 2 test 3 test 4 test 5 test 6 test 7 test 8 test 9 test 10 test 11
{}
SaulLu/test-push-to-hub
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SaulLu/tr7b-350M-validation-alibi-tensorboard
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/bert-base-cased-chaii
null
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/bert-base-multilingual-cased-finetuned-chaii
null
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/bert-large-uncased-whole-word-masking-chaii
null
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/bert-large-uncased-whole-word-masking-finetuned-chaii
null
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/bert-multi-cased-finedtuned-xquad-chaii
null
[ "transformers", "pytorch", "safetensors", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/bert-multi-cased-finetuned-chaii
null
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/bert-multi-uncased-finetuned-chaii
null
[ "transformers", "pytorch", "bert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
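These checkpoints are regular `transformers` masked-language models, so (as a minimal sketch, assuming the Hub repos load through the standard auto classes) any row of the table can be tried with the fill-mask pipeline:

```python
from transformers import pipeline

# Substitute any repo id from the table above.
fill_mask = pipeline("fill-mask", model="SauravMaheshkar/clr-finetuned-roberta-base")

# RoBERTa checkpoints use <mask>; the BERT/ALBERT checkpoints expect [MASK] instead.
print(fill_mask("The passage was easy to <mask>."))
```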
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-albert-base
null
[ "transformers", "pytorch", "albert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-albert-large
null
[ "transformers", "pytorch", "albert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-bert-base-uncased
null
[ "transformers", "pytorch", "bert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-bert-large-uncased
null
[ "transformers", "pytorch", "bert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-roberta-base
null
[ "transformers", "pytorch", "roberta", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-roberta-large
null
[ "transformers", "pytorch", "roberta", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # FineTuning | **Architecture** | **Weights** | **Training Loss** | **Validation Loss** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-base) | **0.641** | **0.4728** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-base-uncased) | 0.6781 | 0.4977 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-base) | 0.7119 | 0.5155 | | xlm-roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-xlm-roberta-base) | 0.7225 | 0.525 | | bert-large-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-bert-large-uncased) | 0.7482 | 0.5161 | | albert-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-albert-large) | 1.075 | 0.9921 | | roberta-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-finetuned-roberta-large) | 2.749 | 1.075 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-finetuned-xlm-roberta-base
null
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-albert-base
null
[ "transformers", "pytorch", "albert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-bert-base-uncased
null
[ "transformers", "pytorch", "safetensors", "bert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-distilbert-base-uncased
null
[ "transformers", "pytorch", "distilbert", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-electra-base
null
[ "transformers", "pytorch", "electra", "pretraining", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-electra-large
null
[ "transformers", "pytorch", "electra", "pretraining", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-electra-small
null
[ "transformers", "pytorch", "electra", "pretraining", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
![](https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true) # PreTraining | **Architecture** | **Weights** | **PreTraining Loss** | **PreTraining Perplexity** | |:-----------------------:|:---------------:|:----------------:|:----------------------:| | roberta-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-roberta-base) | **0.3488** | **3.992** | | bert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-bert-base-uncased) | 0.3909 | 6.122 | | electra-large | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-large) | 0.723 | 6.394 | | albert-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-albert-base) | 0.7343 | 7.76 | | electra-small | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-small) | 0.9226 | 11.098 | | electra-base | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-electra-base) | 0.9468 | 8.783 | | distilbert-base-uncased | [huggingface/hub](https://huggingface.co/SauravMaheshkar/clr-pretrained-distilbert-base-uncased) | 1.082 | 7.963 |
{"license": "cc0-1.0", "tags": ["kaggle"], "datasets": ["Commonlit-Readibility"], "metrics": ["Perplexity"], "thumbnail": "https://github.com/SauravMaheshkar/CommonLit-Readibility/blob/main/assets/CommonLit%20-%20Big%20Banner.png?raw=true"}
SauravMaheshkar/clr-pretrained-roberta-base
null
[ "transformers", "pytorch", "safetensors", "roberta", "fill-mask", "kaggle", "dataset:Commonlit-Readibility", "license:cc0-1.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/distilbert-base-cased-distilled-chaii
null
[ "transformers", "pytorch", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/distilbert-base-uncased-distilled-chaii
null
[ "transformers", "pytorch", "safetensors", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/distilbert-multi-finetuned-for-xqua-on-chaii
null
[ "transformers", "pytorch", "safetensors", "distilbert", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/electra-base-chaii
null
[ "transformers", "pytorch", "safetensors", "electra", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
null
<div align = "center"> <img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true"> </div> This repository contains [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights from my team's experiments during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score: | Huggingface Hub Link | Public LB Score | | :---: | :---: | | [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 | | [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
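The repo names encode the max sequence length and document stride used in each experiment. A rough sketch of querying one of these checkpoints (assuming the repo holds a `transformers`-compatible RemBERT question-answering head; since the weights come from the team's own training code, loading them this way is an assumption):

```python
from transformers import pipeline

# Hypothetical usage; any checkpoint from the table above can be substituted.
qa = pipeline(
    "question-answering",
    model="SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii",
)
result = qa(
    question="Where is the Taj Mahal located?",
    context="The Taj Mahal is an ivory-white marble mausoleum on the bank of the Yamuna river in Agra.",
    max_seq_len=384,  # matches the "maxseq" value in the repo name
    doc_stride=128,   # matches the "docstride" value in the repo name
)
print(result["answer"], result["score"])
```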
{"language": "multilingual", "license": "cc0-1.0", "tags": ["kaggle", "rembert", "pytorch", "question-answering"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true", "inference": false}
SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii
null
[ "kaggle", "rembert", "pytorch", "question-answering", "multilingual", "dataset:Commonlit-Readibility", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
null
<div align = "center"> <img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true"> </div> This repository contains [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights from my team's experiments during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score: | Huggingface Hub Link | Public LB Score | | :---: | :---: | | [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 | | [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
{"language": "multilingual", "license": "cc0-1.0", "tags": ["kaggle", "rembert", "pytorch", "question-answering"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true", "inference": false}
SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii
null
[ "kaggle", "rembert", "pytorch", "question-answering", "multilingual", "dataset:Commonlit-Readibility", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
null
<div align = "center"> <img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true"> </div> This repository contains [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights from my team's experiments during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score: | Huggingface Hub Link | Public LB Score | | :---: | :---: | | [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 | | [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
{"language": "multilingual", "license": "cc0-1.0", "tags": ["kaggle", "rembert", "pytorch", "question-answering"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true", "inference": false}
SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii
null
[ "kaggle", "rembert", "pytorch", "question-answering", "multilingual", "dataset:Commonlit-Readibility", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
null
<div align = "center"> <img src = "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true"> </div> This repository contains [**google/rembert**](https://huggingface.co/transformers/model_doc/rembert.html) model weights from my team's experiments during the [**chaii - Hindi and Tamil Question Answering**](https://www.kaggle.com/c/chaii-hindi-and-tamil-question-answering) competition. They are listed below with their corresponding public LB score: | Huggingface Hub Link | Public LB Score | | :---: | :---: | | [**SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-128-chaii) | 0.724 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-135-chaii) | 0.723 | | [**SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii) | 0.737 | | [**SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii**](https://huggingface.co/SauravMaheshkar/rembert-maxseq-384-docstride-128-chaii) | 0.725 |
{"language": "multilingual", "license": "cc0-1.0", "tags": ["kaggle", "rembert", "pytorch", "question-answering"], "datasets": ["Commonlit-Readibility"], "thumbnail": "https://github.com/SauravMaheshkar/chaii-Hindi-Tamil-QA/blob/main/assets/Coffee%20Banner.png?raw=true", "inference": false}
SauravMaheshkar/rembert-maxseq-400-docstride-135-chaii
null
[ "kaggle", "rembert", "pytorch", "question-answering", "multilingual", "dataset:Commonlit-Readibility", "license:cc0-1.0", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/roberta-base-chaii
null
[ "transformers", "pytorch", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/roberta-large-chaii
null
[ "transformers", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
transformers
Practice/demo repository following the `run_image_classification_flax.py` tutorial script.
{}
SauravMaheshkar/vit-base-patch16-imagenette
null
[ "transformers", "jax", "tensorboard", "vit", "image-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/xlm-multi-roberta-large-chaii
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/xlm-roberta-base-chaii
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
{}
SauravMaheshkar/xlm-roberta-large-chaii
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SavageShug/Me
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
Saviour/ChandlerBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sawaki/gpt2-QD
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Saya/saya
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sayan0492/DialoGPT-small-Rotom
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Paimon DialoGPT Model
{"tags": ["conversational"]}
Saz/DialoGPT-small-paimon
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Saz DialoGPT Model
{"tags": ["conversational"]}
Saz/DialoGPT-small-saz
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# 13th Doctor DialoGPT model
{"tags": ["conversational"]}
Science-geek32/DialoGPT-small-doctor
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
13th Doctor model, based on DialoGPT-small
{"tags": ["conversational"]}
Science-geek32/DialoGPT-small-doctor2.0
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Scientist-ANkit/NLP_test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Sandal Bot Quick and dumb model for a Discord chat bot, based on DialoGPT-medium.
{"tags": ["conversational"]}
Scoops/SandalBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot) Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-medium-Scott") model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-medium-Scott") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in PyTorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
ScottaStrong/DialogGPT-medium-Scott
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot) Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-medium-joshua") model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-medium-joshua") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in PyTorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
ScottaStrong/DialogGPT-medium-joshua
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-small](https://huggingface.co/microsoft/DialoGPT-small) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot) Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-small-Scott") model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-small-Scott") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in PyTorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
ScottaStrong/DialogGPT-small-Scott
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# DialoGPT Trained on the Speech of a Game Character This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script). I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot) Chat with the model: ```python import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-small-joshua") model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-small-joshua") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in PyTorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True))) ```
{"license": "mit", "tags": ["conversational"], "thumbnail": "https://huggingface.co/front/thumbnails/dialogpt.png"}
ScottaStrong/DialogGPT-small-joshua
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "license:mit", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SeanFitt/Bert
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SebastianS/code-search-net-tokenizer
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
# dummy this is only a dummy model, originally based on the RoBERTa model ## intended uses and limitations not intended to be used, same limitations as the camembert-base model ## how to use it can't be used (lol) ## training data French subcorpus of the newly available multilingual corpus OSCAR ## training procedure evaluated on multiple downstream tasks ## variables and metrics not explicitly stated ## evaluation metrics maybe OSCAR ## evaluation results not explicitly stated
{"language": "fr", "license": "mit", "datasets": ["oscar"]}
SebastianS/dummy-model
null
[ "transformers", "pytorch", "camembert", "fill-mask", "fr", "dataset:oscar", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SebastianS/dummy
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Melchior DialoGPT Model
{"tags": ["conversational"]}
Sebastianthecrab/DialoGPT-small-melchior
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Sebb/german-nli-base-thesis
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
{}
Sebb/german-nli-large-thesis
null
[ "transformers", "pytorch", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
fill-mask
transformers
{}
Sebu/dummy-model
null
[ "transformers", "pytorch", "camembert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
# Sedged DialoGPT Model
{"tags": ["conversational"]}
Sedge/DialoGPT-small-Sedge
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Selderey/s
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Selwyn/Lamo
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
automatic-speech-recognition
transformers
# wav2vec2-irish-lite Speech to Text ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "ga-IE", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("Semih/wav2vec2_Irish_Large") model = Wav2Vec2ForCTC.from_pretrained("Semih/wav2vec2_Irish_Large") resampler = torchaudio.transforms.Resample(48_000, 16_000) ``` Test Result (WER): 55.11 %
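The snippet above stops after loading the model; a hedged sketch of the usual wav2vec2 evaluation loop that such a figure typically comes from (reading 55.11 as a word error rate is an assumption based on the `wer` metric declared in the card metadata; the code continues from the variables defined above):

```python
import torch
import torchaudio
from datasets import load_metric

wer = load_metric("wer")

# Resample each Common Voice clip to 16 kHz and keep the raw waveform.
def speech_file_to_array(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array)

inputs = processor(test_dataset["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
predictions = processor.batch_decode(predicted_ids)

print("WER: {:.2f}".format(100 * wer.compute(predictions=predictions, references=test_dataset["sentence"])))
```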
{"language": "ga-IE", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["common_voice"], "metrics": ["wer"]}
Semih/wav2vec2_Irish_Large
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "audio", "speech", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
transformers
# dog Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### buldog ![buldog](images/buldog.jpg) #### golden ![golden](images/golden.jpg) #### pug ![pug](images/pug.jpg)
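Since the repo ships a standard ViT image-classification head, a minimal usage sketch (the image path is just an illustrative placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Sena/dog")

# Any local path or URL to a dog photo works here; this one is a placeholder.
print(classifier("path/to/your_dog_photo.jpg"))
```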
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
Sena/dog
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
transformers
# flowers Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### karanfil ![karanfil](images/karanfil.jpg) #### leylak ![leylak](images/leylak.jpg) #### menekse ![menekse](images/menekse.jpg) #### nergis ![nergis](images/nergis.jpg) #### zambak ![zambak](images/zambak.jpg)
{"tags": ["image-classification", "pytorch", "huggingpics"], "metrics": ["accuracy"]}
Sena/flowers
null
[ "transformers", "pytorch", "tensorboard", "vit", "image-classification", "huggingpics", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
image-classification
null
# UniFormer (image model) UniFormer models are trained on ImageNet at resolution 224x224. It was introduced in the paper [UniFormer: Unifying Convolution and Self-attention for Visual Recognition](https://arxiv.org/abs/2201.09450) by Li et al. and first released in [this repository](https://github.com/Sense-X/UniFormer). ## Model description The UniFormer is a type of Vision Transformer, which can seamlessly integrate the merits of convolution and self-attention in a concise transformer format. It adopts local MHRA in shallow layers to largely reduce the computation burden and global MHRA in deep layers to learn global token relations. Without any extra training data, UniFormer achieves **86.3** top-1 accuracy on ImageNet-1K classification. With only ImageNet-1K pre-training, it can simply achieve state-of-the-art performance in a broad range of downstream tasks. UniFormer obtains **82.9/84.8** top-1 accuracy on Kinetics-400/600, and **60.9/71.2** top-1 accuracy on Something-Something V1/V2 video classification tasks. It also achieves **53.8** box AP and **46.4** mask AP on the COCO object detection task, **50.8** mIoU on the ADE20K semantic segmentation task, and **77.4** AP on the COCO pose estimation task. ![teaser](framework.png) [Source](https://paperswithcode.com/paper/uniformer-unifying-convolution-and-self) ## Intended uses & limitations You can use the raw model for image classification. We now only upload the models trained without Token Labeling and Layer Scale. More powerful models can be found in [the model hub](https://github.com/Sense-X/UniFormer/tree/main/image_classification). ### ImageNet | Model | Pretrain | Resolution | Top-1 | #Param. | FLOPs | | --------------- | ----------- | ---------- | ----- | ------- | ----- | | UniFormer-S | ImageNet-1K | 224x224 | 82.9 | 22M | 3.6G | | UniFormer-S† | ImageNet-1K | 224x224 | 83.4 | 24M | 4.2G | | UniFormer-B | ImageNet-1K | 224x224 | 83.8 | 50M | 8.3G | ### How to use You can follow our [demo](https://huggingface.co/spaces/Sense-X/uniformer_image_demo/tree/main) to use our models. ```python import torch import torchvision.transforms as T from huggingface_hub import hf_hub_download from uniformer import uniformer_small from imagenet_class_index import imagenet_classnames device = "cuda" if torch.cuda.is_available() else "cpu" model = uniformer_small() # load state model_path = hf_hub_download(repo_id="Sense-X/uniformer_image", filename="uniformer_small_in1k.pth") state_dict = torch.load(model_path, map_location='cpu') model.load_state_dict(state_dict) # set to eval mode model = model.to(device) model = model.eval() # process image (`img` is a PIL image loaded beforehand, e.g. with PIL.Image.open) image = img image_transform = T.Compose( [ T.Resize(224), T.CenterCrop(224), T.ToTensor(), T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), ] ) image = image_transform(image) image = image.unsqueeze(0).to(device) # model predicts one of the 1000 ImageNet classes prediction = model(image) predicted_class_idx = prediction.flatten().argmax(-1).item() print("Predicted class:", imagenet_classnames[str(predicted_class_idx)][1]) ``` ### BibTeX entry and citation info ```bibtex @misc{li2022uniformer, title={UniFormer: Unifying Convolution and Self-attention for Visual Recognition}, author={Kunchang Li and Yali Wang and Junhao Zhang and Peng Gao and Guanglu Song and Yu Liu and Hongsheng Li and Yu Qiao}, year={2022}, eprint={2201.09450}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
{"license": "mit", "tags": ["vision", "image-classification"], "datasets": ["imagenet"]}
Sense-X/uniformer_image
null
[ "vision", "image-classification", "dataset:imagenet", "arxiv:2201.09450", "license:mit", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
video-classification
null
# UniFormer (video model)

UniFormer models are trained on [Kinetics](https://deepmind.com/research/open-source/kinetics) and [Something-Something](https://20bn.com/datasets/something-something) at resolution 224x224. It was introduced in the paper [UniFormer: Unified Transformer for Efficient Spatial-Temporal Representation Learning](https://arxiv.org/abs/2201.04676) by Li et al, and first released in [this repository](https://github.com/Sense-X/UniFormer).

## Model description

The UniFormer is a type of Vision Transformer, which can seamlessly integrate the merits of convolution and self-attention in a concise transformer format. It adopts local MHRA in shallow layers to largely reduce the computation burden and global MHRA in deep layers to learn global token relations.

Without any extra training data, UniFormer achieves **86.3** top-1 accuracy on ImageNet-1K classification. With only ImageNet-1K pre-training, it can simply achieve state-of-the-art performance in a broad range of downstream tasks. UniFormer obtains **82.9/84.8** top-1 accuracy on Kinetics-400/600, and **60.9/71.2** top-1 accuracy on Something-Something V1/V2 video classification tasks. It also achieves **53.8** box AP and **46.4** mask AP on the COCO object detection task, **50.8** mIoU on the ADE20K semantic segmentation task, and **77.4** AP on the COCO pose estimation task.

![teaser](framework.png)

[Source](https://paperswithcode.com/paper/uniformer-unified-transformer-for-efficient)

## Intended uses & limitations

You can use the raw model for video classification. We now only upload the powerful models with **single clip**. More models can be found in [the model hub](https://github.com/Sense-X/UniFormer/tree/main/video_classification).

### Kinetics

| Model       | #Frame | Sampling Stride | FLOPs | K400 Top-1 | K600 Top-1 |
| ----------- | ------ | --------------- | ----- | ---------- | ---------- |
| UniFormer-S | 16x1x1 | 8               | 41.8G | 78.4       | 80.8       |
| UniFormer-B | 16x1x1 | 8               | 96.7G | 79.3       | 81.7       |
| UniFormer-B | 32x1x1 | 4               | 259G  | 80.9       | 82.4       |

### Something-Something

| Model       | #Frame | FLOPs | SSV1 Top-1 | SSV2 Top-1 |
| ----------- | ------ | ----- | ---------- | ---------- |
| UniFormer-S | 16x1x1 | 41.8G | 54.4       | 65.0       |
| UniFormer-B | 32x1x1 | 259G  | 58.0       | 67.5       |

### How to use

You can follow our [demo](https://huggingface.co/spaces/Sense-X/uniformer_video_demo/tree/main) to use our models.

```python
import torch
from huggingface_hub import hf_hub_download

from uniformer import uniformer_small
from kinetics_class_index import kinetics_classnames

device = "cuda" if torch.cuda.is_available() else "cpu"

model = uniformer_small()

# load state
model_path = hf_hub_download(repo_id="Sense-X/uniformer_video", filename="uniformer_small_k400_16x8.pth")
state_dict = torch.load(model_path, map_location='cpu')
model.load_state_dict(state_dict)

# set to eval mode
model = model.to(device)
model = model.eval()

# please refer to the following url to process a video of Kinetics into a clip tensor:
# https://huggingface.co/spaces/Sense-X/uniformer_video_demo/blob/main/app.py
vid = load_video(video)

# model predicts one of the 400 Kinetics classes
prediction = model(vid)
predicted_class_idx = prediction.flatten().argmax(-1).item()
print("Predicted class:", kinetics_classnames[str(predicted_class_idx)])
```

### BibTeX entry and citation info

```bibtex
@misc{li2022uniformer,
      title={UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning},
      author={Kunchang Li and Yali Wang and Peng Gao and Guanglu Song and Yu Liu and Hongsheng Li and Yu Qiao},
      year={2022},
      eprint={2201.04676},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
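The card above delegates video preprocessing to the demo's `app.py`. As a hedged sketch only, here is one plausible `load_video` implementation using `decord`; the 16-frame uniform sampling, the 224x224 center crop, the ImageNet normalization, and the `(batch, channels, frames, height, width)` input layout are all assumptions, so the linked `app.py` remains the authoritative reference.

```python
import numpy as np
import torch
from decord import VideoReader, cpu
from torchvision import transforms as T

def load_video(path, num_frames=16):
    # Hypothetical helper, not the demo's actual code: sample frames
    # uniformly, crop to 224x224 and normalize with ImageNet statistics.
    vr = VideoReader(path, ctx=cpu(0))
    indices = np.linspace(0, len(vr) - 1, num_frames).astype(int)
    frames = vr.get_batch(indices).asnumpy()  # (T, H, W, C), uint8

    transform = T.Compose([
        T.ToPILImage(),
        T.Resize(256),
        T.CenterCrop(224),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    clip = torch.stack([transform(frame) for frame in frames])  # (T, C, 224, 224)
    # Assumed input layout: (batch, channels, frames, height, width)
    return clip.permute(1, 0, 2, 3).unsqueeze(0)

vid = load_video("video.mp4")  # placeholder path
```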
{"license": "mit", "tags": ["vision", "video-classification"], "datasets": ["kinetics-400", "kinetics-600", "something-something-v1", "something-something-v2"]}
Sense-X/uniformer_video
null
[ "vision", "video-classification", "dataset:kinetics-400", "dataset:kinetics-600", "dataset:something-something-v1", "dataset:something-something-v2", "arxiv:2201.04676", "license:mit", "has_space", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-generation
transformers
GPyT is a GPT2 model trained from scratch (not fine-tuned) on Python code from GitHub.

Overall, it was ~80GB of pure Python code, and the current GPyT model is a mere 2 epochs through this data, so it may benefit greatly from continued training and/or fine-tuning.

Newlines are replaced by `<N>`.

Input to the model is code, up to the context length of 1024, with newlines replaced by `<N>`.

Here's a quick example of using this model:

```py
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Sentdex/GPyT")
model = AutoModelWithLMHead.from_pretrained("Sentdex/GPyT")

# copy and paste some code in here
inp = """import"""
newlinechar = "<N>"
converted = inp.replace("\n", newlinechar)
tokenized = tokenizer.encode(converted, return_tensors='pt')
resp = model.generate(tokenized)
decoded = tokenizer.decode(resp[0])
reformatted = decoded.replace("<N>", "\n")
print(reformatted)
```

Should produce:

```
import numpy as np
import pytest
import pandas as pd<N
```

This model does a ton more than just imports, however. For a bunch of examples and a better understanding of the model's capabilities: https://pythonprogramming.net/GPT-python-code-transformer-model-GPyT/

Considerations:

1. This model is intended for educational and research use only. Do not trust model outputs.
2. The model is highly likely to regurgitate code almost exactly as it saw it. It's up to you to determine licensing if you intend to actually use the generated code.
3. All Python code was blindly pulled from GitHub. This means the included code is both Python 2 and 3, among other more subtle differences, such as tabs being 2 spaces in some cases and 4 in others... and more non-homologous things.
4. Along with the above, this means the code generated could wind up doing or suggesting just about anything. Run the generated code at your own risk... it could be *anything*.
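Beyond the one-liner above, a small convenience wrapper can make the `<N>` round-trip less error-prone. This is an illustrative sketch rather than part of the original card; in particular, the `max_length` value and the use of the EOS token as the pad token are assumptions.

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("Sentdex/GPyT")
model = AutoModelWithLMHead.from_pretrained("Sentdex/GPyT")

def complete_code(code: str, max_length: int = 100) -> str:
    # GPyT was trained with newlines encoded as "<N>", so encode before
    # generating and decode afterwards. max_length and pad_token_id here
    # are illustrative choices, not settings documented by the card.
    encoded = tokenizer.encode(code.replace("\n", "<N>"), return_tensors="pt")
    generated = model.generate(
        encoded,
        max_length=max_length,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(generated[0]).replace("<N>", "\n")

print(complete_code("import numpy as np\ndef mean(values):"))
```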
{"language": "code", "license": "mit", "tags": ["Code", "GPyT", "code generator"]}
Sentdex/GPyT
null
[ "transformers", "pytorch", "tf", "gpt2", "text-generation", "Code", "GPyT", "code generator", "code", "license:mit", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Senthil/Test
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SentralKL/DialoGPT-medium-ricksanchez
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
question-answering
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-cased-finetuned-squad

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0458

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.0179        | 1.0   | 6194  | 0.9548          |
| 0.7277        | 2.0   | 12388 | 0.9717          |
| 0.507         | 3.0   | 18582 | 1.0458          |

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
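For readers who want to reproduce a comparable run, the hyperparameter list above maps roughly onto Hugging Face `TrainingArguments` as sketched below. This is a reconstruction for illustration (the `output_dir` and any unlisted defaults are assumptions), not the exact script that produced the checkpoint.

```python
from transformers import TrainingArguments

# Rough mapping of the hyperparameters listed in the card; output_dir is a
# placeholder and unlisted options fall back to library defaults.
training_args = TrainingArguments(
    output_dir="bert-base-cased-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```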
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-cased-finetuned-squad", "results": []}]}
Seongkyu/bert-base-cased-finetuned-squad
null
[ "transformers", "pytorch", "tensorboard", "bert", "question-answering", "generated_from_trainer", "dataset:squad", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text2text-generation
transformers
{}
Seonguk/textSummarization
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SergeT/ManWithBeard
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
Sergey222/Aaa
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SeruWulf/Me
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# MiniLM-L12-H384-uncased__sst2__all-train

This model is a fine-tuned version of [microsoft/MiniLM-L12-H384-uncased](https://huggingface.co/microsoft/MiniLM-L12-H384-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2632
- Accuracy: 0.9055

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4183        | 1.0   | 433  | 0.3456          | 0.8720   |
| 0.2714        | 2.0   | 866  | 0.2632          | 0.9055   |
| 0.2016        | 3.0   | 1299 | 0.3357          | 0.8990   |
| 0.1501        | 4.0   | 1732 | 0.4474          | 0.8863   |
| 0.1119        | 5.0   | 2165 | 0.3998          | 0.8979   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu102
- Datasets 1.17.0
- Tokenizers 0.10.3
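The card above leaves the usage section empty; a minimal inference sketch with the `text-classification` pipeline is shown below. It assumes the checkpoint loads by its Hub ID, and the label names come from the uploaded config, so they may be the generic `LABEL_0`/`LABEL_1`.

```python
from transformers import pipeline

# Minimal usage sketch, not part of the auto-generated card.
classifier = pipeline(
    "text-classification",
    model="SetFit/MiniLM-L12-H384-uncased__sst2__all-train",
)

print(classifier("a charming and often affecting journey"))
print(classifier("the plot is paper-thin and the acting is wooden"))
```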
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "MiniLM-L12-H384-uncased__sst2__all-train", "results": []}]}
SetFit/MiniLM-L12-H384-uncased__sst2__all-train
null
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# deberta-v3-base__sst2__all-train

This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6964
- Accuracy: 0.49

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 7    | 0.6964          | 0.49     |
| No log        | 2.0   | 14   | 0.7010          | 0.49     |
| No log        | 3.0   | 21   | 0.7031          | 0.49     |
| No log        | 4.0   | 28   | 0.7054          | 0.49     |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-base__sst2__all-train", "results": []}]}
SetFit/deberta-v3-base__sst2__all-train
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-0 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9917 - Accuracy: 0.7705 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7001 | 1.0 | 7 | 0.7327 | 0.2857 | | 0.6326 | 2.0 | 14 | 0.6479 | 0.5714 | | 0.5232 | 3.0 | 21 | 0.5714 | 0.5714 | | 0.3313 | 4.0 | 28 | 0.6340 | 0.7143 | | 0.3161 | 5.0 | 35 | 0.6304 | 0.7143 | | 0.0943 | 6.0 | 42 | 0.4719 | 0.8571 | | 0.0593 | 7.0 | 49 | 0.5000 | 0.7143 | | 0.0402 | 8.0 | 56 | 0.3530 | 0.8571 | | 0.0307 | 9.0 | 63 | 0.3499 | 0.8571 | | 0.0033 | 10.0 | 70 | 0.3258 | 0.8571 | | 0.0021 | 11.0 | 77 | 0.3362 | 0.8571 | | 0.0012 | 12.0 | 84 | 0.4591 | 0.8571 | | 0.0036 | 13.0 | 91 | 0.4661 | 0.8571 | | 0.001 | 14.0 | 98 | 0.5084 | 0.8571 | | 0.0017 | 15.0 | 105 | 0.5844 | 0.8571 | | 0.0005 | 16.0 | 112 | 0.6645 | 0.8571 | | 0.002 | 17.0 | 119 | 0.7422 | 0.8571 | | 0.0006 | 18.0 | 126 | 0.7354 | 0.8571 | | 0.0005 | 19.0 | 133 | 0.7265 | 0.8571 | | 0.0005 | 20.0 | 140 | 0.7207 | 0.8571 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-16-0", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-0
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-1 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6804 - Accuracy: 0.5497 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7086 | 1.0 | 7 | 0.7176 | 0.2857 | | 0.6897 | 2.0 | 14 | 0.7057 | 0.2857 | | 0.6491 | 3.0 | 21 | 0.6582 | 0.8571 | | 0.567 | 4.0 | 28 | 0.4480 | 0.8571 | | 0.4304 | 5.0 | 35 | 0.5465 | 0.7143 | | 0.0684 | 6.0 | 42 | 0.5408 | 0.8571 | | 0.0339 | 7.0 | 49 | 0.6501 | 0.8571 | | 0.0082 | 8.0 | 56 | 0.9152 | 0.8571 | | 0.0067 | 9.0 | 63 | 2.5162 | 0.5714 | | 0.0045 | 10.0 | 70 | 1.1136 | 0.8571 | | 0.0012 | 11.0 | 77 | 1.1668 | 0.8571 | | 0.0007 | 12.0 | 84 | 1.2071 | 0.8571 | | 0.0005 | 13.0 | 91 | 1.2310 | 0.8571 | | 0.0006 | 14.0 | 98 | 1.2476 | 0.8571 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-16-1", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-1
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-2 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6959 - Accuracy: 0.5008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7079 | 1.0 | 7 | 0.7361 | 0.2857 | | 0.6815 | 2.0 | 14 | 0.7659 | 0.2857 | | 0.6938 | 3.0 | 21 | 0.7944 | 0.2857 | | 0.4584 | 4.0 | 28 | 1.2441 | 0.2857 | | 0.4949 | 5.0 | 35 | 1.2285 | 0.5714 | | 0.0574 | 6.0 | 42 | 1.7796 | 0.5714 | | 0.0156 | 7.0 | 49 | 2.6027 | 0.5714 | | 0.0051 | 8.0 | 56 | 2.8717 | 0.5714 | | 0.0017 | 9.0 | 63 | 2.8491 | 0.5714 | | 0.0023 | 10.0 | 70 | 1.7149 | 0.7143 | | 0.001 | 11.0 | 77 | 1.1101 | 0.7143 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-16-2", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-2
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-3 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6286 - Accuracy: 0.7068 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6955 | 1.0 | 7 | 0.7370 | 0.2857 | | 0.6919 | 2.0 | 14 | 0.6855 | 0.4286 | | 0.6347 | 3.0 | 21 | 0.5872 | 0.7143 | | 0.4016 | 4.0 | 28 | 0.6644 | 0.7143 | | 0.3097 | 5.0 | 35 | 0.5120 | 0.7143 | | 0.0785 | 6.0 | 42 | 0.5845 | 0.7143 | | 0.024 | 7.0 | 49 | 0.6951 | 0.7143 | | 0.0132 | 8.0 | 56 | 0.8972 | 0.7143 | | 0.0037 | 9.0 | 63 | 1.5798 | 0.7143 | | 0.0034 | 10.0 | 70 | 1.5178 | 0.7143 | | 0.003 | 11.0 | 77 | 1.3511 | 0.7143 | | 0.0012 | 12.0 | 84 | 1.1346 | 0.7143 | | 0.0007 | 13.0 | 91 | 0.9752 | 0.7143 | | 0.0008 | 14.0 | 98 | 0.8531 | 0.7143 | | 0.0007 | 15.0 | 105 | 0.8149 | 0.7143 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-16-3", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-3
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-4 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6329 - Accuracy: 0.6392 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6945 | 1.0 | 7 | 0.7381 | 0.2857 | | 0.7072 | 2.0 | 14 | 0.7465 | 0.2857 | | 0.6548 | 3.0 | 21 | 0.7277 | 0.4286 | | 0.5695 | 4.0 | 28 | 0.6738 | 0.5714 | | 0.4615 | 5.0 | 35 | 0.8559 | 0.5714 | | 0.0823 | 6.0 | 42 | 1.0983 | 0.5714 | | 0.0274 | 7.0 | 49 | 1.9937 | 0.5714 | | 0.0106 | 8.0 | 56 | 2.2209 | 0.5714 | | 0.0039 | 9.0 | 63 | 2.2114 | 0.5714 | | 0.0031 | 10.0 | 70 | 2.2808 | 0.5714 | | 0.0013 | 11.0 | 77 | 2.3707 | 0.5714 | | 0.0008 | 12.0 | 84 | 2.4902 | 0.5714 | | 0.0005 | 13.0 | 91 | 2.5208 | 0.5714 | | 0.0007 | 14.0 | 98 | 2.5683 | 0.5714 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-16-4", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-4
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-5 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5433 - Accuracy: 0.7924 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6774 | 1.0 | 7 | 0.7450 | 0.2857 | | 0.7017 | 2.0 | 14 | 0.7552 | 0.2857 | | 0.6438 | 3.0 | 21 | 0.7140 | 0.4286 | | 0.3525 | 4.0 | 28 | 0.5570 | 0.7143 | | 0.2061 | 5.0 | 35 | 0.5303 | 0.8571 | | 0.0205 | 6.0 | 42 | 0.6706 | 0.8571 | | 0.0068 | 7.0 | 49 | 0.8284 | 0.8571 | | 0.0029 | 8.0 | 56 | 0.9281 | 0.8571 | | 0.0015 | 9.0 | 63 | 0.9871 | 0.8571 | | 0.0013 | 10.0 | 70 | 1.0208 | 0.8571 | | 0.0008 | 11.0 | 77 | 1.0329 | 0.8571 | | 0.0005 | 12.0 | 84 | 1.0348 | 0.8571 | | 0.0004 | 13.0 | 91 | 1.0437 | 0.8571 | | 0.0005 | 14.0 | 98 | 1.0512 | 0.8571 | | 0.0004 | 15.0 | 105 | 1.0639 | 0.8571 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-16-5", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-5
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-6 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6846 - Accuracy: 0.5058 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6673 | 1.0 | 7 | 0.7580 | 0.2857 | | 0.5896 | 2.0 | 14 | 0.7885 | 0.5714 | | 0.5294 | 3.0 | 21 | 1.0040 | 0.4286 | | 0.3163 | 4.0 | 28 | 1.1761 | 0.5714 | | 0.1315 | 5.0 | 35 | 1.4315 | 0.4286 | | 0.0312 | 6.0 | 42 | 2.6115 | 0.2857 | | 0.1774 | 7.0 | 49 | 2.1631 | 0.5714 | | 0.0052 | 8.0 | 56 | 2.3838 | 0.4286 | | 0.0043 | 9.0 | 63 | 2.6553 | 0.4286 | | 0.0032 | 10.0 | 70 | 2.2774 | 0.4286 | | 0.0015 | 11.0 | 77 | 1.9467 | 0.7143 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-16-6", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-6
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-7 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6953 - Accuracy: 0.5063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6911 | 1.0 | 7 | 0.7455 | 0.2857 | | 0.6844 | 2.0 | 14 | 0.7242 | 0.2857 | | 0.6137 | 3.0 | 21 | 0.7341 | 0.4286 | | 0.3805 | 4.0 | 28 | 1.0217 | 0.4286 | | 0.2201 | 5.0 | 35 | 1.1437 | 0.2857 | | 0.0296 | 6.0 | 42 | 1.5997 | 0.4286 | | 0.0103 | 7.0 | 49 | 2.6835 | 0.4286 | | 0.0046 | 8.0 | 56 | 3.3521 | 0.4286 | | 0.002 | 9.0 | 63 | 3.7846 | 0.4286 | | 0.0017 | 10.0 | 70 | 4.0088 | 0.4286 | | 0.0018 | 11.0 | 77 | 4.1483 | 0.4286 | | 0.0006 | 12.0 | 84 | 4.2235 | 0.4286 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-16-7", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-7
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-8 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6915 - Accuracy: 0.6579 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7129 | 1.0 | 7 | 0.7309 | 0.2857 | | 0.6549 | 2.0 | 14 | 0.7316 | 0.4286 | | 0.621 | 3.0 | 21 | 0.7131 | 0.5714 | | 0.3472 | 4.0 | 28 | 0.5703 | 0.4286 | | 0.2041 | 5.0 | 35 | 0.6675 | 0.5714 | | 0.031 | 6.0 | 42 | 1.6750 | 0.5714 | | 0.0141 | 7.0 | 49 | 1.8743 | 0.5714 | | 0.0055 | 8.0 | 56 | 1.1778 | 0.5714 | | 0.0024 | 9.0 | 63 | 1.0699 | 0.5714 | | 0.0019 | 10.0 | 70 | 1.0933 | 0.5714 | | 0.0012 | 11.0 | 77 | 1.1218 | 0.7143 | | 0.0007 | 12.0 | 84 | 1.1468 | 0.7143 | | 0.0006 | 13.0 | 91 | 1.1584 | 0.7143 | | 0.0006 | 14.0 | 98 | 1.3092 | 0.7143 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-16-8", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-8
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-16-9 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2598 - Accuracy: 0.7809 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6887 | 1.0 | 7 | 0.7452 | 0.2857 | | 0.6889 | 2.0 | 14 | 0.7988 | 0.2857 | | 0.6501 | 3.0 | 21 | 0.8987 | 0.2857 | | 0.4286 | 4.0 | 28 | 0.9186 | 0.4286 | | 0.3591 | 5.0 | 35 | 0.5566 | 0.7143 | | 0.0339 | 6.0 | 42 | 1.1130 | 0.5714 | | 0.013 | 7.0 | 49 | 1.8296 | 0.7143 | | 0.0041 | 8.0 | 56 | 1.7069 | 0.7143 | | 0.0023 | 9.0 | 63 | 1.1942 | 0.7143 | | 0.0022 | 10.0 | 70 | 0.6054 | 0.7143 | | 0.0011 | 11.0 | 77 | 0.3872 | 0.7143 | | 0.0006 | 12.0 | 84 | 0.3217 | 0.7143 | | 0.0005 | 13.0 | 91 | 0.2879 | 0.8571 | | 0.0005 | 14.0 | 98 | 0.2640 | 0.8571 | | 0.0004 | 15.0 | 105 | 0.2531 | 0.8571 | | 0.0003 | 16.0 | 112 | 0.2384 | 0.8571 | | 0.0004 | 17.0 | 119 | 0.2338 | 0.8571 | | 0.0003 | 18.0 | 126 | 0.2314 | 0.8571 | | 0.0003 | 19.0 | 133 | 0.2276 | 0.8571 | | 0.0003 | 20.0 | 140 | 0.2172 | 0.8571 | | 0.0003 | 21.0 | 147 | 0.2069 | 0.8571 | | 0.0002 | 22.0 | 154 | 0.2018 | 0.8571 | | 0.0002 | 23.0 | 161 | 0.2005 | 0.8571 | | 0.0002 | 24.0 | 168 | 0.1985 | 0.8571 | | 0.0002 | 25.0 | 175 | 0.1985 | 1.0 | | 0.0002 | 26.0 | 182 | 0.1955 | 1.0 | | 0.0002 | 27.0 | 189 | 0.1967 | 1.0 | | 0.0002 | 28.0 | 196 | 0.1918 | 1.0 | | 0.0002 | 29.0 | 203 | 0.1888 | 1.0 | | 0.0002 | 30.0 | 210 | 0.1864 | 1.0 | | 0.0002 | 31.0 | 217 | 0.1870 | 1.0 | | 0.0002 | 32.0 | 224 | 0.1892 | 1.0 | | 0.0002 | 33.0 | 231 | 0.1917 | 1.0 | | 0.0002 | 34.0 | 238 | 0.1869 | 1.0 | | 0.0002 | 35.0 | 245 | 0.1812 | 1.0 | | 0.0001 | 36.0 | 252 | 0.1777 | 1.0 | | 0.0002 | 37.0 | 259 | 0.1798 | 1.0 | | 0.0002 | 38.0 | 266 | 0.1824 | 0.8571 | | 0.0002 | 39.0 | 273 | 0.1846 | 0.8571 | | 0.0002 | 40.0 | 280 | 0.1839 | 0.8571 | | 0.0001 | 41.0 | 287 | 0.1826 | 0.8571 | | 0.0001 | 42.0 | 294 | 0.1779 | 0.8571 | | 0.0002 | 43.0 | 301 | 0.1762 | 0.8571 | | 0.0001 | 44.0 | 308 | 0.1742 | 1.0 | | 0.0002 | 45.0 | 315 | 0.1708 | 1.0 | | 0.0001 | 46.0 | 322 | 0.1702 | 1.0 | | 0.0001 | 47.0 | 329 | 0.1699 | 1.0 | | 0.0001 | 48.0 | 336 | 0.1695 | 1.0 | | 0.0001 | 49.0 | 343 | 0.1683 | 1.0 | | 0.0001 | 50.0 | 350 | 0.1681 | 1.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-16-9", "results": []}]}
SetFit/deberta-v3-large__sst2__train-16-9
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-32-0 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4849 - Accuracy: 0.7716 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7059 | 1.0 | 13 | 0.6840 | 0.5385 | | 0.6595 | 2.0 | 26 | 0.6214 | 0.6923 | | 0.4153 | 3.0 | 39 | 0.1981 | 0.9231 | | 0.0733 | 4.0 | 52 | 0.5068 | 0.9231 | | 0.2092 | 5.0 | 65 | 1.3114 | 0.6923 | | 0.003 | 6.0 | 78 | 1.1062 | 0.8462 | | 0.0012 | 7.0 | 91 | 1.5948 | 0.7692 | | 0.0008 | 8.0 | 104 | 1.6913 | 0.7692 | | 0.0006 | 9.0 | 117 | 1.7191 | 0.7692 | | 0.0005 | 10.0 | 130 | 1.6527 | 0.7692 | | 0.0003 | 11.0 | 143 | 1.4840 | 0.7692 | | 0.0002 | 12.0 | 156 | 1.3076 | 0.8462 | | 0.0002 | 13.0 | 169 | 1.3130 | 0.8462 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-32-0", "results": []}]}
SetFit/deberta-v3-large__sst2__train-32-0
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-32-1 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4201 - Accuracy: 0.8759 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7162 | 1.0 | 13 | 0.6832 | 0.5385 | | 0.6561 | 2.0 | 26 | 0.7270 | 0.4615 | | 0.4685 | 3.0 | 39 | 1.0674 | 0.5385 | | 0.2837 | 4.0 | 52 | 1.0841 | 0.5385 | | 0.1129 | 5.0 | 65 | 0.3502 | 0.9231 | | 0.0118 | 6.0 | 78 | 0.4829 | 0.9231 | | 0.0022 | 7.0 | 91 | 0.7430 | 0.8462 | | 0.0007 | 8.0 | 104 | 0.8219 | 0.8462 | | 0.0005 | 9.0 | 117 | 0.8787 | 0.8462 | | 0.0003 | 10.0 | 130 | 0.8713 | 0.8462 | | 0.0003 | 11.0 | 143 | 0.8473 | 0.8462 | | 0.0002 | 12.0 | 156 | 0.8482 | 0.8462 | | 0.0002 | 13.0 | 169 | 0.8494 | 0.8462 | | 0.0002 | 14.0 | 182 | 0.8638 | 0.8462 | | 0.0002 | 15.0 | 195 | 0.8492 | 0.8462 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-32-1", "results": []}]}
SetFit/deberta-v3-large__sst2__train-32-1
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
null
null
{}
SetFit/deberta-v3-large__sst2__train-32-2
null
[ "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-0 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7088 - Accuracy: 0.5008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6705 | 1.0 | 3 | 0.7961 | 0.25 | | 0.6571 | 2.0 | 6 | 0.8092 | 0.25 | | 0.7043 | 3.0 | 9 | 0.7977 | 0.25 | | 0.6207 | 4.0 | 12 | 0.8478 | 0.25 | | 0.5181 | 5.0 | 15 | 0.9782 | 0.25 | | 0.4136 | 6.0 | 18 | 1.3151 | 0.25 | | 0.3702 | 7.0 | 21 | 1.8633 | 0.25 | | 0.338 | 8.0 | 24 | 2.2119 | 0.25 | | 0.2812 | 9.0 | 27 | 2.3058 | 0.25 | | 0.2563 | 10.0 | 30 | 2.3353 | 0.25 | | 0.2132 | 11.0 | 33 | 2.5921 | 0.25 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "base_model": "microsoft/deberta-v3-large", "model-index": [{"name": "deberta-v3-large__sst2__train-8-0", "results": []}]}
SetFit/deberta-v3-large__sst2__train-8-0
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "base_model:microsoft/deberta-v3-large", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-1 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7020 - Accuracy: 0.5008 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6773 | 1.0 | 3 | 0.7822 | 0.25 | | 0.6587 | 2.0 | 6 | 0.8033 | 0.25 | | 0.693 | 3.0 | 9 | 0.8101 | 0.25 | | 0.5979 | 4.0 | 12 | 1.1235 | 0.25 | | 0.4095 | 5.0 | 15 | 1.3563 | 0.25 | | 0.2836 | 6.0 | 18 | 1.5325 | 0.5 | | 0.1627 | 7.0 | 21 | 1.7786 | 0.25 | | 0.0956 | 8.0 | 24 | 2.0067 | 0.5 | | 0.0535 | 9.0 | 27 | 2.3351 | 0.5 | | 0.0315 | 10.0 | 30 | 2.6204 | 0.5 | | 0.0182 | 11.0 | 33 | 2.8483 | 0.5 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-8-1", "results": []}]}
SetFit/deberta-v3-large__sst2__train-8-1
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-2 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6794 - Accuracy: 0.6063 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6942 | 1.0 | 3 | 0.7940 | 0.25 | | 0.6068 | 2.0 | 6 | 0.9326 | 0.25 | | 0.6553 | 3.0 | 9 | 0.7979 | 0.25 | | 0.475 | 4.0 | 12 | 0.7775 | 0.25 | | 0.377 | 5.0 | 15 | 0.7477 | 0.25 | | 0.3176 | 6.0 | 18 | 0.6856 | 0.75 | | 0.2708 | 7.0 | 21 | 0.6554 | 0.75 | | 0.2855 | 8.0 | 24 | 0.8129 | 0.5 | | 0.148 | 9.0 | 27 | 0.7074 | 0.75 | | 0.0947 | 10.0 | 30 | 0.7090 | 0.75 | | 0.049 | 11.0 | 33 | 0.7885 | 0.75 | | 0.0252 | 12.0 | 36 | 0.9203 | 0.75 | | 0.0165 | 13.0 | 39 | 1.0937 | 0.75 | | 0.0084 | 14.0 | 42 | 1.2502 | 0.75 | | 0.0059 | 15.0 | 45 | 1.3726 | 0.75 | | 0.0037 | 16.0 | 48 | 1.4784 | 0.75 | | 0.003 | 17.0 | 51 | 1.5615 | 0.75 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-8-2", "results": []}]}
SetFit/deberta-v3-large__sst2__train-8-2
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-3 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6421 - Accuracy: 0.6310 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6696 | 1.0 | 3 | 0.7917 | 0.25 | | 0.6436 | 2.0 | 6 | 0.8107 | 0.25 | | 0.6923 | 3.0 | 9 | 0.8302 | 0.25 | | 0.5051 | 4.0 | 12 | 0.9828 | 0.25 | | 0.3688 | 5.0 | 15 | 0.7402 | 0.25 | | 0.2671 | 6.0 | 18 | 0.5820 | 0.75 | | 0.1935 | 7.0 | 21 | 0.8356 | 0.5 | | 0.0815 | 8.0 | 24 | 1.0431 | 0.25 | | 0.0591 | 9.0 | 27 | 0.9679 | 0.75 | | 0.0276 | 10.0 | 30 | 1.0659 | 0.75 | | 0.0175 | 11.0 | 33 | 0.9689 | 0.75 | | 0.0152 | 12.0 | 36 | 0.8820 | 0.75 | | 0.006 | 13.0 | 39 | 0.8337 | 0.75 | | 0.0041 | 14.0 | 42 | 0.7650 | 0.75 | | 0.0036 | 15.0 | 45 | 0.6960 | 0.75 | | 0.0034 | 16.0 | 48 | 0.6548 | 0.75 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-8-3", "results": []}]}
SetFit/deberta-v3-large__sst2__train-8-3
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-v3-large__sst2__train-8-4 This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3023 - Accuracy: 0.7057 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6816 | 1.0 | 3 | 0.8072 | 0.25 | | 0.6672 | 2.0 | 6 | 0.8740 | 0.25 | | 0.6667 | 3.0 | 9 | 0.8578 | 0.25 | | 0.5346 | 4.0 | 12 | 1.0353 | 0.25 | | 0.4517 | 5.0 | 15 | 1.1030 | 0.25 | | 0.3095 | 6.0 | 18 | 0.9986 | 0.25 | | 0.2464 | 7.0 | 21 | 0.9286 | 0.5 | | 0.1342 | 8.0 | 24 | 0.4063 | 1.0 | | 0.0851 | 9.0 | 27 | 0.2210 | 1.0 | | 0.0491 | 10.0 | 30 | 0.2302 | 1.0 | | 0.0211 | 11.0 | 33 | 0.4020 | 0.75 | | 0.017 | 12.0 | 36 | 0.2382 | 1.0 | | 0.0084 | 13.0 | 39 | 0.0852 | 1.0 | | 0.0051 | 14.0 | 42 | 0.0354 | 1.0 | | 0.0047 | 15.0 | 45 | 0.0208 | 1.0 | | 0.0029 | 16.0 | 48 | 0.0155 | 1.0 | | 0.0022 | 17.0 | 51 | 0.0139 | 1.0 | | 0.0019 | 18.0 | 54 | 0.0144 | 1.0 | | 0.0016 | 19.0 | 57 | 0.0168 | 1.0 | | 0.0013 | 20.0 | 60 | 0.0231 | 1.0 | | 0.0011 | 21.0 | 63 | 0.0369 | 1.0 | | 0.0009 | 22.0 | 66 | 0.0528 | 1.0 | | 0.001 | 23.0 | 69 | 0.0639 | 1.0 | | 0.0009 | 24.0 | 72 | 0.0670 | 1.0 | | 0.0009 | 25.0 | 75 | 0.0526 | 1.0 | | 0.0008 | 26.0 | 78 | 0.0425 | 1.0 | | 0.0011 | 27.0 | 81 | 0.0135 | 1.0 | | 0.0007 | 28.0 | 84 | 0.0076 | 1.0 | | 0.0007 | 29.0 | 87 | 0.0057 | 1.0 | | 0.0007 | 30.0 | 90 | 0.0049 | 1.0 | | 0.0008 | 31.0 | 93 | 0.0045 | 1.0 | | 0.0007 | 32.0 | 96 | 0.0044 | 1.0 | | 0.0008 | 33.0 | 99 | 0.0043 | 1.0 | | 0.0005 | 34.0 | 102 | 0.0044 | 1.0 | | 0.0006 | 35.0 | 105 | 0.0045 | 1.0 | | 0.0006 | 36.0 | 108 | 0.0046 | 1.0 | | 0.0007 | 37.0 | 111 | 0.0048 | 1.0 | | 0.0006 | 38.0 | 114 | 0.0049 | 1.0 | | 0.0005 | 39.0 | 117 | 0.0050 | 1.0 | | 0.0005 | 40.0 | 120 | 0.0050 | 1.0 | | 0.0004 | 41.0 | 123 | 0.0051 | 1.0 | | 0.0005 | 42.0 | 126 | 0.0051 | 1.0 | | 0.0004 | 43.0 | 129 | 0.0051 | 1.0 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.10.3
{"license": "mit", "tags": ["generated_from_trainer"], "metrics": ["accuracy"], "model-index": [{"name": "deberta-v3-large__sst2__train-8-4", "results": []}]}
SetFit/deberta-v3-large__sst2__train-8-4
null
[ "transformers", "pytorch", "deberta-v2", "text-classification", "generated_from_trainer", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:04+00:00