Column schema of the dataset (one row per model card):

| column | dtype | values / lengths |
|:---|:---|:---|
| pipeline_tag | stringclasses | 48 values |
| library_name | stringclasses | 205 values |
| text | stringlengths | 0 to 18.3M |
| metadata | stringlengths | 2 to 1.07B |
| id | stringlengths | 5 to 122 |
| last_modified | null | - |
| tags | listlengths | 1 to 1.84k |
| sha | null | - |
| created_at | stringlengths | 25 to 25 |
fill-mask
transformers
# RuBio for paper: dsdfsfsdf
{}
alexyalunin/my-awesome-model
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
<img src="https://raw.githubusercontent.com/alger-ia/dziribert/main/dziribert_drawing.png" alt="drawing" width="25%" height="25%" align="right"/>

# DziriBERT

DziriBERT is the first Transformer-based language model pre-trained specifically for the Algerian dialect. It handles Algerian text written in both Arabic and Latin characters, and it sets new state-of-the-art results on Algerian text classification datasets even though it was pre-trained on much less data (~1 million tweets). For more information, please see our paper: https://arxiv.org/pdf/2109.12346.pdf.

## How to use

```python
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("alger-ia/dziribert")
model = BertForMaskedLM.from_pretrained("alger-ia/dziribert")
```

You can find a fine-tuning script in our GitHub repo: https://github.com/alger-ia/dziribert

## Limitations

The pre-training data used in this project comes from social media (Twitter). Therefore, the masked language modeling objective may predict offensive words in some situations. Modeling such words may be either an advantage (e.g., when training a hate speech model) or a disadvantage (e.g., when generating answers that are sent directly to the end user). Depending on your downstream task, you may need to filter out such words, especially when returning automatically generated text to the end user.

### How to cite

```bibtex
@article{dziribert,
  title   = {DziriBERT: a Pre-trained Language Model for the Algerian Dialect},
  author  = {Abdaoui, Amine and Berrimi, Mohamed and Oussalah, Mourad and Moussaoui, Abdelouahab},
  journal = {arXiv preprint arXiv:2109.12346},
  year    = {2021}
}
```

## Contact

Please contact [email protected] for any questions, feedback, or requests.
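For a quick end-to-end check, the checkpoint can also be queried through the fill-mask pipeline; a minimal sketch, assuming the checkpoint above and reusing one of this card's widget prompts:

```python
from transformers import pipeline

# Minimal sketch: query DziriBERT via the fill-mask pipeline.
fill_mask = pipeline("fill-mask", model="alger-ia/dziribert")

# "rabi [MASK] khouya sami" is one of the widget prompts in this card's metadata.
for pred in fill_mask("rabi [MASK] khouya sami"):
    print(pred["token_str"], round(pred["score"], 3))
```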
{"language": ["ar", "dz"], "license": "apache-2.0", "tags": ["pytorch", "bert", "multilingual", "ar", "dz"], "widget": [{"text": " \u0623\u0646\u0627 \u0645\u0646 \u0627\u0644\u062c\u0632\u0627\u0626\u0631 \u0645\u0646 \u0648\u0644\u0627\u064a\u0629 [MASK] "}, {"text": "rabi [MASK] khouya sami"}, {"text": " \u0631\u0628\u064a [MASK] \u062e\u0648\u064a\u0627 \u0644\u0639\u0632\u064a\u0632"}, {"text": "tahya el [MASK]."}, {"text": "rouhi ya dzayer [MASK]"}], "inference": true}
alger-ia/dziribert
null
[ "transformers", "pytorch", "tf", "safetensors", "bert", "fill-mask", "multilingual", "ar", "dz", "arxiv:2109.12346", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
<p>Chinese BERT Large Model</p>
<p>A Chinese pre-trained BERT-large model</p>

#### Training corpus

Chinese Wikipedia plus a large volume of news text from 2018-2020.
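The card itself ships no usage snippet; a minimal loading sketch, assuming the standard BERT fill-mask classes implied by this repo's tags:

```python
from transformers import BertTokenizer, BertForMaskedLM

# Assumption: standard BERT classes, inferred from the repo's "bert" and
# "fill-mask" tags; the card does not document a usage API itself.
tokenizer = BertTokenizer.from_pretrained("algolet/bert-large-chinese")
model = BertForMaskedLM.from_pretrained("algolet/bert-large-chinese")
```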
{}
algolet/bert-large-chinese
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
<h3 align="center"> <p>MT5 Base Model for Chinese Question Generation</p> </h3>
<h3 align="center"> <p>Chinese question generation based on mT5</p> </h3>

#### Getting started with the question-generation package

```
pip install question-generation
```

For usage instructions, see the GitHub project: https://github.com/algolet/question_generation

#### Online use

You can use our model directly online: https://www.algolet.com/applications/qg

#### Calling it through transformers

```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("algolet/mt5-base-chinese-qg")
model = AutoModelForSeq2SeqLM.from_pretrained("algolet/mt5-base-chinese-qg")
model.eval()

# Input passage: the Chinese fable "The Farmer and the Snake".
text = "在一个寒冷的冬天,赶集完回家的农夫在路边发现了一条冻僵了的蛇。他很可怜蛇,就把它放在怀里。当他身上的热气把蛇温暖以后,蛇很快苏醒了,露出了残忍的本性,给了农夫致命的伤害——咬了农夫一口。农夫临死之前说:“我竟然救了一条可怜的毒蛇,就应该受到这种报应啊!”"
text = "question generation: " + text

inputs = tokenizer(text, return_tensors='pt', truncation=True, max_length=512)
with torch.no_grad():
    outs = model.generate(input_ids=inputs["input_ids"],
                          attention_mask=inputs["attention_mask"],
                          max_length=128,
                          no_repeat_ngram_size=4,
                          num_beams=4)
question = tokenizer.decode(outs[0], skip_special_tokens=True)
questions = [q.strip() for q in question.split("<sep>") if len(q.strip()) > 0]
print(questions)
# Expected output:
# ['在寒冷的冬天,农夫在哪里发现了一条可怜的蛇?', '农夫是如何看待蛇的?', '当农夫遇到蛇时,他做了什么?']
```

#### Metrics

- rouge-1: 0.4041
- rouge-2: 0.2104
- rouge-l: 0.3843

---
language:
- zh
tags:
- mt5
- question generation
metrics:
- rouge
---
{}
algolet/mt5-base-chinese-qg
null
[ "transformers", "pytorch", "mt5", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
algomuffin/disney
null
[ "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
algomuffin/dummy
null
[ "transformers", "pytorch", "camembert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
algomuffin/my_model
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
algoprog/mimics-bart-base
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
{}
algoprog/mimics-query-bart-base
null
[ "transformers", "pytorch", "bart", "text2text-generation", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
feature-extraction
transformers
{}
algoprog/mimics-query-facet-encoder-mpnet-base
null
[ "transformers", "pytorch", "mpnet", "feature-extraction", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
algoprog/mimics-tagging-roberta-base
null
[ "transformers", "pytorch", "roberta", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2899
- Precision: 0.3170
- Recall: 0.5261
- F1: 0.3956
- Accuracy: 0.8799

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 30 | 0.2912 | 0.2752 | 0.4444 | 0.3400 | 0.8730 |
| No log | 2.0 | 60 | 0.2772 | 0.4005 | 0.4589 | 0.4277 | 0.8911 |
| No log | 3.0 | 90 | 0.2267 | 0.3642 | 0.5281 | 0.4311 | 0.9043 |
| No log | 4.0 | 120 | 0.2129 | 0.3617 | 0.5455 | 0.4350 | 0.9140 |
| No log | 5.0 | 150 | 0.2399 | 0.3797 | 0.5556 | 0.4511 | 0.9114 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
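The hyperparameters listed above map directly onto transformers `TrainingArguments`; a minimal sketch of the equivalent configuration (`output_dir` is a placeholder, and the Adam betas/epsilon shown in the card are the library defaults):

```python
from transformers import TrainingArguments

# Sketch of the card's hyperparameters as TrainingArguments; output_dir is a
# placeholder. Adam betas=(0.9, 0.999) and epsilon=1e-08 are the defaults.
args = TrainingArguments(
    output_dir="token-classification-out",
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```

The same mapping applies to the other Trainer-generated cards below, which vary only the learning rate, base checkpoint, and data split.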
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27", "results": []}]}
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-04_48_27
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2698
- Precision: 0.3321
- Recall: 0.5265
- F1: 0.4073
- Accuracy: 0.8942

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 30 | 0.3314 | 0.1627 | 0.3746 | 0.2269 | 0.8419 |
| No log | 2.0 | 60 | 0.2957 | 0.2887 | 0.4841 | 0.3617 | 0.8592 |
| No log | 3.0 | 90 | 0.2905 | 0.2429 | 0.5141 | 0.3299 | 0.8651 |
| No log | 4.0 | 120 | 0.2759 | 0.3137 | 0.5565 | 0.4013 | 0.8787 |
| No log | 5.0 | 150 | 0.2977 | 0.3116 | 0.5565 | 0.3995 | 0.8796 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25", "results": []}]}
ali2066/bert-base-uncased_token_itr0_0.0001_all_01_03_2022-14_21_25
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2741
- Precision: 0.1936
- Recall: 0.3243
- F1: 0.2424
- Accuracy: 0.8764

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 30 | 0.3235 | 0.1062 | 0.2076 | 0.1405 | 0.8556 |
| No log | 2.0 | 60 | 0.2713 | 0.1710 | 0.3080 | 0.2199 | 0.8872 |
| No log | 3.0 | 90 | 0.3246 | 0.2010 | 0.3391 | 0.2524 | 0.8334 |
| No log | 4.0 | 120 | 0.3008 | 0.2011 | 0.3685 | 0.2602 | 0.8459 |
| No log | 5.0 | 150 | 0.2714 | 0.1780 | 0.3772 | 0.2418 | 0.8661 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10", "results": []}]}
ali2066/bert-base-uncased_token_itr0_2e-05_all_01_03_2022-04_40_10
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.7632
- Accuracy: 0.8263
- F1: 0.8871
- Precision: 0.8551
- Recall: 0.9215

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 390 | 0.3986 | 0.8305 | 0.8903 | 0.8868 | 0.8938 |
| 0.4561 | 2.0 | 780 | 0.4018 | 0.8439 | 0.9009 | 0.8805 | 0.9223 |
| 0.3111 | 3.0 | 1170 | 0.4306 | 0.8354 | 0.8924 | 0.8974 | 0.8875 |
| 0.1739 | 4.0 | 1560 | 0.5499 | 0.8378 | 0.9002 | 0.8547 | 0.9509 |
| 0.1739 | 5.0 | 1950 | 0.6223 | 0.85 | 0.9052 | 0.8814 | 0.9303 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
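Unlike the token-level models around it, this checkpoint is a sequence classifier, so inference goes through the text-classification pipeline; a minimal sketch (the label names depend on the undocumented training data, so inspect the output rather than assuming them):

```python
from transformers import pipeline

# Minimal sketch: the card does not name its labels, so print what comes back.
classifier = pipeline(
    "text-classification",
    model="ali2066/bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15",
)
print(classifier("This is an example sentence."))
```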
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15", "results": []}]}
ali2066/bert_base_uncased_itr0_0.0001_all_01_03_2022-14_08_15
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
ali2066/bert_base_uncased_itr0_0.0001_webDiscourse_01_03_2022-16_08_12
null
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2711
- Precision: 0.3373
- Recall: 0.5670
- F1: 0.4230
- Accuracy: 0.8943

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 30 | 0.3783 | 0.1833 | 0.3975 | 0.2509 | 0.8413 |
| No log | 2.0 | 60 | 0.3021 | 0.3280 | 0.4820 | 0.3904 | 0.8876 |
| No log | 3.0 | 90 | 0.3196 | 0.3504 | 0.5036 | 0.4133 | 0.8918 |
| No log | 4.0 | 120 | 0.3645 | 0.3434 | 0.5306 | 0.4170 | 0.8759 |
| No log | 5.0 | 150 | 0.4027 | 0.3217 | 0.5486 | 0.4056 | 0.8797 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
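For the token-classification checkpoints in this series, a minimal inference sketch using the token-classification pipeline (the example sentence is illustrative, and the tag set is not documented in the card):

```python
from transformers import pipeline

# Minimal sketch: word-level predictions from one of the token-classification
# checkpoints; the tag set is undocumented, so inspect the entity labels.
tagger = pipeline(
    "token-classification",
    model="ali2066/correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19",
    aggregation_strategy="simple",
)
print(tagger("The author claims that this policy would reduce costs."))
```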
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19", "results": []}]}
ali2066/correct_BERT_token_itr0_0.0001_all_01_03_2022-15_52_19
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1059
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 15 | 0.1103 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 2.0 | 30 | 0.0842 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 3.0 | 45 | 0.0767 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 4.0 | 60 | 0.0754 | 0.12 | 0.0135 | 0.0243 | 0.9772 |
| No log | 5.0 | 75 | 0.0735 | 0.12 | 0.0135 | 0.0243 | 0.9772 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21", "results": []}]}
ali2066/correct_BERT_token_itr0_0.0001_editorials_01_03_2022-15_50_21
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1801
- Precision: 0.6153
- Recall: 0.7301
- F1: 0.6678
- Accuracy: 0.9346

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 11 | 0.2746 | 0.4586 | 0.5922 | 0.5169 | 0.9031 |
| No log | 2.0 | 22 | 0.2223 | 0.5233 | 0.6181 | 0.5668 | 0.9148 |
| No log | 3.0 | 33 | 0.2162 | 0.5335 | 0.6699 | 0.5940 | 0.9274 |
| No log | 4.0 | 44 | 0.2053 | 0.5989 | 0.7055 | 0.6478 | 0.9237 |
| No log | 5.0 | 55 | 0.2123 | 0.5671 | 0.7249 | 0.6364 | 0.9267 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47", "results": []}]}
ali2066/correct_BERT_token_itr0_0.0001_essays_01_03_2022-15_48_47
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6542
- Precision: 0.0092
- Recall: 0.0403
- F1: 0.0150
- Accuracy: 0.7291

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 10 | 0.5856 | 0.0012 | 0.0125 | 0.0022 | 0.6950 |
| No log | 2.0 | 20 | 0.5933 | 0.0 | 0.0 | 0.0 | 0.7282 |
| No log | 3.0 | 30 | 0.5729 | 0.0051 | 0.025 | 0.0085 | 0.7155 |
| No log | 4.0 | 40 | 0.6178 | 0.0029 | 0.0125 | 0.0047 | 0.7143 |
| No log | 5.0 | 50 | 0.6707 | 0.0110 | 0.0375 | 0.0170 | 0.7178 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14", "results": []}]}
ali2066/correct_BERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_47_14
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3343
- Precision: 0.1651
- Recall: 0.3039
- F1: 0.2140
- Accuracy: 0.8493

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 30 | 0.4801 | 0.0352 | 0.0591 | 0.0441 | 0.7521 |
| No log | 2.0 | 60 | 0.3795 | 0.0355 | 0.0795 | 0.0491 | 0.8020 |
| No log | 3.0 | 90 | 0.3359 | 0.0591 | 0.1294 | 0.0812 | 0.8334 |
| No log | 4.0 | 120 | 0.3205 | 0.0785 | 0.1534 | 0.1039 | 0.8486 |
| No log | 5.0 | 150 | 0.3144 | 0.0853 | 0.1571 | 0.1105 | 0.8516 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47", "results": []}]}
ali2066/correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1206
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 15 | 0.1222 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 2.0 | 30 | 0.1159 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 3.0 | 45 | 0.1082 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 4.0 | 60 | 0.1042 | 0.12 | 0.0139 | 0.0249 | 0.9736 |
| No log | 5.0 | 75 | 0.1029 | 0.12 | 0.0139 | 0.0249 | 0.9736 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32", "results": []}]}
ali2066/correct_distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_42_32
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3097
- Precision: 0.2769
- Recall: 0.4391
- F1: 0.3396
- Accuracy: 0.8878

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 11 | 0.4573 | 0.0094 | 0.0027 | 0.0042 | 0.7702 |
| No log | 2.0 | 22 | 0.3660 | 0.1706 | 0.3253 | 0.2239 | 0.8516 |
| No log | 3.0 | 33 | 0.3096 | 0.2339 | 0.408 | 0.2974 | 0.8827 |
| No log | 4.0 | 44 | 0.2868 | 0.2963 | 0.4693 | 0.3633 | 0.8928 |
| No log | 5.0 | 55 | 0.2798 | 0.3141 | 0.48 | 0.3797 | 0.8960 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29", "results": []}]}
ali2066/correct_distilBERT_token_itr0_1e-05_essays_01_03_2022-15_41_29
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5794
- Precision: 0.0094
- Recall: 0.0147
- F1: 0.0115
- Accuracy: 0.7156

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 10 | 0.6319 | 0.08 | 0.0312 | 0.0449 | 0.6753 |
| No log | 2.0 | 20 | 0.6265 | 0.0364 | 0.0312 | 0.0336 | 0.6764 |
| No log | 3.0 | 30 | 0.6216 | 0.0351 | 0.0312 | 0.0331 | 0.6762 |
| No log | 4.0 | 40 | 0.6193 | 0.0274 | 0.0312 | 0.0292 | 0.6759 |
| No log | 5.0 | 50 | 0.6183 | 0.0222 | 0.0312 | 0.0260 | 0.6773 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24", "results": []}]}
ali2066/correct_distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_40_24
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2876
- Precision: 0.2345
- Recall: 0.4281
- F1: 0.3030
- Accuracy: 0.8728

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 30 | 0.3907 | 0.0433 | 0.0824 | 0.0568 | 0.7626 |
| No log | 2.0 | 60 | 0.3046 | 0.2302 | 0.4095 | 0.2947 | 0.8598 |
| No log | 3.0 | 90 | 0.2945 | 0.2084 | 0.4095 | 0.2762 | 0.8668 |
| No log | 4.0 | 120 | 0.2687 | 0.2847 | 0.4607 | 0.3519 | 0.8761 |
| No log | 5.0 | 150 | 0.2643 | 0.2779 | 0.4444 | 0.3420 | 0.8788 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04", "results": []}]}
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_all_01_03_2022-15_36_04
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_33_51

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1138
- Precision: 0.5788
- Recall: 0.4712
- F1: 0.5195
- Accuracy: 0.9688

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 15 | 0.1316 | 0.04 | 0.0021 | 0.0040 | 0.9624 |
| No log | 2.0 | 30 | 0.1016 | 0.6466 | 0.4688 | 0.5435 | 0.9767 |
| No log | 3.0 | 45 | 0.0899 | 0.5873 | 0.4625 | 0.5175 | 0.9757 |
| No log | 4.0 | 60 | 0.0849 | 0.5984 | 0.4813 | 0.5335 | 0.9761 |
| No log | 5.0 | 75 | 0.0835 | 0.5984 | 0.4813 | 0.5335 | 0.9761 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_33_51", "results": []}]}
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_editorials_01_03_2022-15_33_51
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2663
- Precision: 0.3644
- Recall: 0.4985
- F1: 0.4210
- Accuracy: 0.8997

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 11 | 0.5174 | 0.0120 | 0.0061 | 0.0081 | 0.6997 |
| No log | 2.0 | 22 | 0.4029 | 0.1145 | 0.3098 | 0.1672 | 0.8265 |
| No log | 3.0 | 33 | 0.3604 | 0.2539 | 0.4448 | 0.3233 | 0.8632 |
| No log | 4.0 | 44 | 0.3449 | 0.2992 | 0.4755 | 0.3673 | 0.8704 |
| No log | 5.0 | 55 | 0.3403 | 0.3340 | 0.4816 | 0.3945 | 0.8760 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16", "results": []}]}
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_essays_01_03_2022-15_32_16
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39

This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base](https://huggingface.co/cardiffnlp/twitter-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6169
- Precision: 0.0031
- Recall: 0.0357
- F1: 0.0057
- Accuracy: 0.6464

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 10 | 0.6339 | 0.0116 | 0.0120 | 0.0118 | 0.6662 |
| No log | 2.0 | 20 | 0.6182 | 0.0064 | 0.0120 | 0.0084 | 0.6688 |
| No log | 3.0 | 30 | 0.6139 | 0.0029 | 0.0241 | 0.0052 | 0.6659 |
| No log | 4.0 | 40 | 0.6172 | 0.0020 | 0.0241 | 0.0037 | 0.6622 |
| No log | 5.0 | 50 | 0.6165 | 0.0019 | 0.0241 | 0.0036 | 0.6599 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39", "results": []}]}
ali2066/correct_twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-15_30_39
null
[ "transformers", "pytorch", "tensorboard", "roberta", "token-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.2811
- Precision: 0.3231
- Recall: 0.5151
- F1: 0.3971
- Accuracy: 0.8913

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 30 | 0.2881 | 0.2089 | 0.3621 | 0.2650 | 0.8715 |
| No log | 2.0 | 60 | 0.2500 | 0.2619 | 0.3842 | 0.3115 | 0.8845 |
| No log | 3.0 | 90 | 0.2571 | 0.2327 | 0.4338 | 0.3030 | 0.8809 |
| No log | 4.0 | 120 | 0.2479 | 0.3051 | 0.4761 | 0.3719 | 0.8949 |
| No log | 5.0 | 150 | 0.2783 | 0.3287 | 0.4761 | 0.3889 | 0.8936 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12", "results": []}]}
ali2066/distilBERT_token_itr0_0.0001_all_01_03_2022-15_22_12
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1290
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 15 | 0.0733 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 2.0 | 30 | 0.0732 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 3.0 | 45 | 0.0731 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 4.0 | 60 | 0.0716 | 0.04 | 0.0055 | 0.0097 | 0.9861 |
| No log | 5.0 | 75 | 0.0635 | 0.04 | 0.0055 | 0.0097 | 0.9861 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12", "results": []}]}
ali2066/distilBERT_token_itr0_0.0001_editorials_01_03_2022-15_20_12
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1832
- Precision: 0.6138
- Recall: 0.7169
- F1: 0.6613
- Accuracy: 0.9332

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 11 | 0.2740 | 0.4554 | 0.5460 | 0.4966 | 0.8943 |
| No log | 2.0 | 22 | 0.2189 | 0.5470 | 0.6558 | 0.5965 | 0.9193 |
| No log | 3.0 | 33 | 0.2039 | 0.5256 | 0.6706 | 0.5893 | 0.9198 |
| No log | 4.0 | 44 | 0.2097 | 0.5401 | 0.6795 | 0.6018 | 0.9237 |
| No log | 5.0 | 55 | 0.2255 | 0.6117 | 0.6825 | 0.6452 | 0.9223 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35", "results": []}]}
ali2066/distilBERT_token_itr0_0.0001_essays_01_03_2022-15_18_35
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5923
- Precision: 0.0039
- Recall: 0.0212
- F1: 0.0066
- Accuracy: 0.7084

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 10 | 0.6673 | 0.0476 | 0.0128 | 0.0202 | 0.6652 |
| No log | 2.0 | 20 | 0.6211 | 0.0 | 0.0 | 0.0 | 0.6707 |
| No log | 3.0 | 30 | 0.6880 | 0.0038 | 0.0128 | 0.0058 | 0.6703 |
| No log | 4.0 | 40 | 0.6566 | 0.0030 | 0.0128 | 0.0049 | 0.6690 |
| No log | 5.0 | 50 | 0.6036 | 0.0 | 0.0 | 0.0 | 0.6868 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57", "results": []}]}
ali2066/distilBERT_token_itr0_0.0001_webDiscourse_01_03_2022-15_16_57
null
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilBERT_token_itr0_1e-05_all_01_03_2022-15_14_04

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3121
- Precision: 0.1204
- Recall: 0.2430
- F1: 0.1611
- Accuracy: 0.8538

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 30 | 0.4480 | 0.0209 | 0.0223 | 0.0216 | 0.7794 |
| No log | 2.0 | 60 | 0.3521 | 0.0559 | 0.1218 | 0.0767 | 0.8267 |
| No log | 3.0 | 90 | 0.3177 | 0.1208 | 0.2504 | 0.1629 | 0.8487 |
| No log | 4.0 | 120 | 0.3009 | 0.1296 | 0.2607 | 0.1731 | 0.8602 |
| No log | 5.0 | 150 | 0.2988 | 0.1393 | 0.2693 | 0.1836 | 0.8599 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_1e-05_all_01_03_2022-15_14_04", "results": []}]}
ali2066/distilBERT_token_itr0_1e-05_all_01_03_2022-15_14_04
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_12_47

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.1194
- Precision: 0.0637
- Recall: 0.0080
- F1: 0.0141
- Accuracy: 0.9707

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 15 | 0.0877 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 2.0 | 30 | 0.0806 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 3.0 | 45 | 0.0758 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 4.0 | 60 | 0.0741 | 0.12 | 0.0194 | 0.0333 | 0.9830 |
| No log | 5.0 | 75 | 0.0741 | 0.12 | 0.0194 | 0.0333 | 0.9830 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_12_47", "results": []}]}
ali2066/distilBERT_token_itr0_1e-05_editorials_01_03_2022-15_12_47
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3082
- Precision: 0.2796
- Recall: 0.4373
- F1: 0.3411
- Accuracy: 0.8887

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 11 | 0.5018 | 0.0192 | 0.0060 | 0.0091 | 0.7370 |
| No log | 2.0 | 22 | 0.4066 | 0.1541 | 0.2814 | 0.1992 | 0.8340 |
| No log | 3.0 | 33 | 0.3525 | 0.1768 | 0.3234 | 0.2286 | 0.8612 |
| No log | 4.0 | 44 | 0.3250 | 0.2171 | 0.3503 | 0.2680 | 0.8766 |
| No log | 5.0 | 55 | 0.3160 | 0.2353 | 0.3713 | 0.2880 | 0.8801 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44", "results": []}]}
ali2066/distilBERT_token_itr0_1e-05_essays_01_03_2022-15_11_44
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.5867
- Precision: 0.0119
- Recall: 0.0116
- F1: 0.0118
- Accuracy: 0.6976

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| No log | 1.0 | 10 | 0.5730 | 0.0952 | 0.0270 | 0.0421 | 0.7381 |
| No log | 2.0 | 20 | 0.5755 | 0.0213 | 0.0135 | 0.0165 | 0.7388 |
| No log | 3.0 | 30 | 0.5635 | 0.0196 | 0.0135 | 0.016 | 0.7416 |
| No log | 4.0 | 40 | 0.5549 | 0.0392 | 0.0270 | 0.032 | 0.7429 |
| No log | 5.0 | 50 | 0.5530 | 0.0357 | 0.0270 | 0.0308 | 0.7438 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39", "results": []}]}
ali2066/distilBERT_token_itr0_1e-05_webDiscourse_01_03_2022-15_10_39
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ali2066/distilbert-base-uncased-finetuned-argumentative
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ali2066/distilbert-base-uncased-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
ali2066/distilbert-base-uncased-finetuned-sst-2-english-finetuned-argmining
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
{}
ali2066/distilbert-base-uncased-finetuned-sst-2-english-finetuned-argumentative
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ali2066/distilbert-base-uncased-finetuned-sst-2-english_token_itr0_2e-05_all_01_03_2022-04_11_31
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2572
- Precision: 0.3363
- Recall: 0.5110
- F1: 0.4057
- Accuracy: 0.8931

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.3976 | 0.1405 | 0.3058 | 0.1925 | 0.7921 |
| No log | 2.0 | 60 | 0.3511 | 0.2360 | 0.4038 | 0.2979 | 0.8260 |
| No log | 3.0 | 90 | 0.3595 | 0.1863 | 0.3827 | 0.2506 | 0.8211 |
| No log | 4.0 | 120 | 0.3591 | 0.2144 | 0.4288 | 0.2859 | 0.8299 |
| No log | 5.0 | 150 | 0.3605 | 0.1989 | 0.4212 | 0.2702 | 0.8343 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58", "results": []}]}
ali2066/distilbert_token_itr0_0.0001_all_01_03_2022-14_30_58
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3255
- Precision: 0.1412
- Recall: 0.25
- F1: 0.1805
- Accuracy: 0.8491

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 30 | 0.4549 | 0.0228 | 0.0351 | 0.0276 | 0.7734 |
| No log | 2.0 | 60 | 0.3577 | 0.0814 | 0.1260 | 0.0989 | 0.8355 |
| No log | 3.0 | 90 | 0.3116 | 0.1534 | 0.2648 | 0.1943 | 0.8611 |
| No log | 4.0 | 120 | 0.2975 | 0.1792 | 0.2967 | 0.2234 | 0.8690 |
| No log | 5.0 | 150 | 0.2935 | 0.1873 | 0.2998 | 0.2305 | 0.8715 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33", "results": []}]}
ali2066/distilbert_token_itr0_1e-05_all_01_03_2022-14_33_33
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned-token-argumentative

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1573
- Precision: 0.3777
- Recall: 0.3919
- F1: 0.3847
- Accuracy: 0.9497

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 75 | 0.3241 | 0.1109 | 0.2178 | 0.1470 | 0.8488 |
| No log | 2.0 | 150 | 0.3145 | 0.1615 | 0.2462 | 0.1950 | 0.8606 |
| No log | 3.0 | 225 | 0.3035 | 0.1913 | 0.3258 | 0.2411 | 0.8590 |
| No log | 4.0 | 300 | 0.3080 | 0.2199 | 0.3220 | 0.2613 | 0.8612 |
| No log | 5.0 | 375 | 0.3038 | 0.2209 | 0.3277 | 0.2639 | 0.8630 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "finetuned-token-argumentative", "results": []}]}
ali2066/finetuned-token-argumentative
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3514 | 0.8427 | 0.8979 |
| No log | 2.0 | 390 | 0.3853 | 0.8293 | 0.8936 |
| 0.3147 | 3.0 | 585 | 0.5494 | 0.8268 | 0.8868 |
| 0.3147 | 4.0 | 780 | 0.6235 | 0.8427 | 0.8995 |
| 0.3147 | 5.0 | 975 | 0.8302 | 0.8378 | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
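## How to use (sketch)

As a usage illustration (not part of the generated card), the checkpoint loads through the standard pipeline API. The input text is an assumption, and the label set is whatever the fine-tuned config defines, which the card leaves undocumented.

```python
from transformers import pipeline

# Minimal inference sketch; the input is illustrative, and the returned
# label names are defined by the fine-tuned config, not by this card.
classifier = pipeline(
    "text-classification",
    model="ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43",
)

print(classifier("The proposal is well argued and supported by evidence."))
```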
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43", "results": []}]}
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-17_55_43
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4064
- Accuracy: 0.8289
- F1: 0.8901

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4163 | 0.8085 | 0.8780 |
| No log | 2.0 | 390 | 0.4098 | 0.8268 | 0.8878 |
| 0.312 | 3.0 | 585 | 0.5892 | 0.8244 | 0.8861 |
| 0.312 | 4.0 | 780 | 0.7580 | 0.8232 | 0.8845 |
| 0.312 | 5.0 | 975 | 0.9028 | 0.8183 | 0.8824 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17", "results": []}]}
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-19_11_17
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3825
- Accuracy: 0.8144
- F1: 0.8833

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3975 | 0.8122 | 0.8795 |
| No log | 2.0 | 390 | 0.4376 | 0.8085 | 0.8673 |
| 0.3169 | 3.0 | 585 | 0.5736 | 0.8171 | 0.8790 |
| 0.3169 | 4.0 | 780 | 0.8178 | 0.8098 | 0.8754 |
| 0.3169 | 5.0 | 975 | 0.9244 | 0.8073 | 0.8738 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53", "results": []}]}
ali2066/finetuned_sentence_itr0_0.0002_all_27_02_2022-22_30_53
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0926
- Accuracy: 0.9772
- F1: 0.9883

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0539 | 0.9885 | 0.9942 |
| No log | 2.0 | 208 | 0.0282 | 0.9885 | 0.9942 |
| No log | 3.0 | 312 | 0.0317 | 0.9914 | 0.9956 |
| No log | 4.0 | 416 | 0.0462 | 0.9885 | 0.9942 |
| 0.0409 | 5.0 | 520 | 0.0517 | 0.9885 | 0.9942 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36", "results": []}]}
ali2066/finetuned_sentence_itr0_0.0002_editorials_27_02_2022-19_42_36
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3358
- Accuracy: 0.8688
- F1: 0.9225

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4116 | 0.8382 | 0.9027 |
| No log | 2.0 | 162 | 0.4360 | 0.8382 | 0.8952 |
| No log | 3.0 | 243 | 0.5719 | 0.8382 | 0.8995 |
| No log | 4.0 | 324 | 0.7251 | 0.8493 | 0.9021 |
| No log | 5.0 | 405 | 0.8384 | 0.8456 | 0.9019 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10", "results": []}]}
ali2066/finetuned_sentence_itr0_0.0002_essays_27_02_2022-19_33_10
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5777
- Accuracy: 0.6794
- F1: 0.5010

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.6059 | 0.63 | 0.4932 |
| No log | 2.0 | 96 | 0.6327 | 0.705 | 0.5630 |
| No log | 3.0 | 144 | 0.7003 | 0.695 | 0.5197 |
| No log | 4.0 | 192 | 0.9368 | 0.69 | 0.4655 |
| No log | 5.0 | 240 | 1.1935 | 0.685 | 0.4425 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06", "results": []}]}
ali2066/finetuned_sentence_itr0_0.0002_webDiscourse_27_02_2022-19_25_06
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4787
- Accuracy: 0.8138
- F1: 0.8785
- Precision: 0.8489
- Recall: 0.9101

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4335 | 0.7732 | 0.8533 | 0.8209 | 0.8883 |
| 0.5141 | 2.0 | 780 | 0.4196 | 0.8037 | 0.8721 | 0.8446 | 0.9015 |
| 0.3368 | 3.0 | 1170 | 0.4519 | 0.8098 | 0.8779 | 0.8386 | 0.9212 |
| 0.2677 | 4.0 | 1560 | 0.4787 | 0.8122 | 0.8785 | 0.8452 | 0.9146 |
| 0.2677 | 5.0 | 1950 | 0.4912 | 0.8146 | 0.8794 | 0.8510 | 0.9097 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
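## Computing the reported metrics (sketch)

This card reports Accuracy, F1, Precision and Recall together. For readers who want to reproduce such figures, the sketch below shows the usual way to derive all four from binary predictions; the labels are dummy values, since the card's evaluation data is not published.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Dummy gold labels and predictions for a binary task, only to illustrate
# how Accuracy / F1 / Precision / Recall are typically computed.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]

precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary"
)
print(f"accuracy={accuracy_score(y_true, y_pred):.4f} "
      f"precision={precision:.4f} recall={recall:.4f} f1={f1:.4f}")
```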
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32", "results": []}]}
ali2066/finetuned_sentence_itr0_1e-05_all_01_03_2022-13_25_32
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51

This model is a fine-tuned version of [siebert/sentiment-roberta-large-english](https://huggingface.co/siebert/sentiment-roberta-large-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4563
- Accuracy: 0.8440
- F1: 0.8954

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4302 | 0.8073 | 0.8754 |
| No log | 2.0 | 390 | 0.3970 | 0.8220 | 0.8875 |
| 0.3703 | 3.0 | 585 | 0.3972 | 0.8402 | 0.8934 |
| 0.3703 | 4.0 | 780 | 0.4945 | 0.8390 | 0.8935 |
| 0.3703 | 5.0 | 975 | 0.5354 | 0.8305 | 0.8898 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-02_53_51
null
[ "transformers", "pytorch", "tensorboard", "roberta", "text-classification", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_27_05
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4208
- Accuracy: 0.8283
- F1: 0.8915
- Precision: 0.8487
- Recall: 0.9389

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.4443 | 0.7768 | 0.8589 | 0.8072 | 0.9176 |
| 0.4532 | 2.0 | 780 | 0.4603 | 0.8098 | 0.8791 | 0.8302 | 0.9341 |
| 0.2608 | 3.0 | 1170 | 0.5284 | 0.8061 | 0.8713 | 0.8567 | 0.8863 |
| 0.1577 | 4.0 | 1560 | 0.6398 | 0.8085 | 0.8749 | 0.8472 | 0.9044 |
| 0.1577 | 5.0 | 1950 | 0.7089 | 0.8085 | 0.8741 | 0.8516 | 0.8979 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-05_32_03
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6168
- Accuracy: 0.8286
- F1: 0.8887
- Precision: 0.8628
- Recall: 0.9162

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 390 | 0.3890 | 0.8110 | 0.8749 | 0.8631 | 0.8871 |
| 0.4535 | 2.0 | 780 | 0.3921 | 0.8439 | 0.8984 | 0.8721 | 0.9264 |
| 0.266 | 3.0 | 1170 | 0.4454 | 0.8415 | 0.8947 | 0.8860 | 0.9034 |
| 0.16 | 4.0 | 1560 | 0.5610 | 0.8427 | 0.8957 | 0.8850 | 0.9067 |
| 0.16 | 5.0 | 1950 | 0.6180 | 0.8488 | 0.9010 | 0.8799 | 0.9231 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_all_01_03_2022-13_11_55
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ali2066/finetuned_sentence_itr0_2e-05_all_26_02_2022-03_54_19
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4345
- Accuracy: 0.8321
- F1: 0.8904

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3922 | 0.8061 | 0.8747 |
| No log | 2.0 | 390 | 0.3764 | 0.8171 | 0.8837 |
| 0.4074 | 3.0 | 585 | 0.3873 | 0.8220 | 0.8843 |
| 0.4074 | 4.0 | 780 | 0.4361 | 0.8232 | 0.8854 |
| 0.4074 | 5.0 | 975 | 0.4555 | 0.8159 | 0.8793 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
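## Reproducing the training setup (sketch)

The hyperparameters listed above map directly onto `TrainingArguments`. The sketch below shows that mapping under stated assumptions: the card only says "the None dataset", so the two-row inline dataset is a placeholder that exists purely to make the script runnable.

```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base)

# Placeholder data: the card does not name its training set, so this dummy
# dataset stands in only to make the sketch executable end to end.
data = Dataset.from_dict(
    {"text": ["a well supported argument", "an unsupported claim"], "label": [1, 0]}
).map(lambda batch: tokenizer(batch["text"], truncation=True, padding=True),
      batched=True)

args = TrainingArguments(
    output_dir="finetuned_sentence_sketch",
    learning_rate=2e-05,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    num_train_epochs=5,
    seed=42,
    lr_scheduler_type="linear",  # Adam betas=(0.9,0.999) and epsilon=1e-08 are the defaults
)

Trainer(model=model, args=args, train_dataset=data, eval_dataset=data).train()
```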
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_all_26_02_2022-03_57_45
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5002
- Accuracy: 0.8103
- F1: 0.8764

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4178 | 0.7963 | 0.8630 |
| No log | 2.0 | 390 | 0.3935 | 0.8061 | 0.8770 |
| 0.4116 | 3.0 | 585 | 0.4037 | 0.8085 | 0.8735 |
| 0.4116 | 4.0 | 780 | 0.4696 | 0.8146 | 0.8796 |
| 0.4116 | 5.0 | 975 | 0.4849 | 0.8207 | 0.8823 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-17_27_47
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4917
- Accuracy: 0.8231
- F1: 0.8833

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3883 | 0.8146 | 0.8833 |
| No log | 2.0 | 390 | 0.3607 | 0.8390 | 0.8964 |
| 0.4085 | 3.0 | 585 | 0.3812 | 0.8488 | 0.9042 |
| 0.4085 | 4.0 | 780 | 0.3977 | 0.8549 | 0.9077 |
| 0.4085 | 5.0 | 975 | 0.4233 | 0.8573 | 0.9092 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-19_05_42
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4638
- Accuracy: 0.8247
- F1: 0.8867

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.4069 | 0.7976 | 0.875 |
| No log | 2.0 | 390 | 0.4061 | 0.8134 | 0.8838 |
| 0.4074 | 3.0 | 585 | 0.4075 | 0.8134 | 0.8798 |
| 0.4074 | 4.0 | 780 | 0.4746 | 0.8256 | 0.8885 |
| 0.4074 | 5.0 | 975 | 0.4881 | 0.8220 | 0.8845 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_all_27_02_2022-22_25_09
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0914
- Accuracy: 0.9746
- F1: 0.9870

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0501 | 0.9828 | 0.9913 |
| No log | 2.0 | 208 | 0.0435 | 0.9828 | 0.9913 |
| No log | 3.0 | 312 | 0.0414 | 0.9828 | 0.9913 |
| No log | 4.0 | 416 | 0.0424 | 0.9799 | 0.9898 |
| 0.0547 | 5.0 | 520 | 0.0482 | 0.9828 | 0.9913 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
ali2066/finetuned_sentence_itr0_2e-05_essays_01_03_2022-13_20_40
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3455
- Accuracy: 0.8609
- F1: 0.9156

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4468 | 0.8235 | 0.8929 |
| No log | 2.0 | 162 | 0.4497 | 0.8382 | 0.9 |
| No log | 3.0 | 243 | 0.4861 | 0.8309 | 0.8940 |
| No log | 4.0 | 324 | 0.5087 | 0.8235 | 0.8879 |
| No log | 5.0 | 405 | 0.5228 | 0.8199 | 0.8858 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_essays_27_02_2022-19_30_22
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7224
- Accuracy: 0.6979
- F1: 0.4736
- Precision: 0.5074
- Recall: 0.4440

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 95 | 0.6009 | 0.65 | 0.2222 | 0.625 | 0.1351 |
| No log | 2.0 | 190 | 0.6140 | 0.675 | 0.3689 | 0.6552 | 0.2568 |
| No log | 3.0 | 285 | 0.6580 | 0.67 | 0.4590 | 0.5833 | 0.3784 |
| No log | 4.0 | 380 | 0.7560 | 0.665 | 0.4806 | 0.5636 | 0.4189 |
| No log | 5.0 | 475 | 0.8226 | 0.665 | 0.464 | 0.5686 | 0.3919 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1", "precision", "recall"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_01_03_2022-13_17_55
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.5835 | 0.71 | 0.0333 |
| No log | 2.0 | 96 | 0.5718 | 0.715 | 0.3871 |
| No log | 3.0 | 144 | 0.5731 | 0.715 | 0.4 |
| No log | 4.0 | 192 | 0.6009 | 0.705 | 0.3516 |
| No log | 5.0 | 240 | 0.6122 | 0.7 | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-18_51_55
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5819
- Accuracy: 0.7058
- F1: 0.4267

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 48 | 0.6110 | 0.665 | 0.0 |
| No log | 2.0 | 96 | 0.5706 | 0.685 | 0.2588 |
| No log | 3.0 | 144 | 0.5484 | 0.725 | 0.5299 |
| No log | 4.0 | 192 | 0.5585 | 0.71 | 0.4727 |
| No log | 5.0 | 240 | 0.5616 | 0.725 | 0.5133 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29", "results": []}]}
ali2066/finetuned_sentence_itr0_2e-05_webDiscourse_27_02_2022-19_22_29
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3591 | 0.8366 | 0.8950 |
| No log | 2.0 | 390 | 0.3558 | 0.8415 | 0.9012 |
| 0.3647 | 3.0 | 585 | 0.4049 | 0.8427 | 0.8983 |
| 0.3647 | 4.0 | 780 | 0.5030 | 0.8378 | 0.8949 |
| 0.3647 | 5.0 | 975 | 0.5719 | 0.8354 | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48", "results": []}]}
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-18_23_48
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3944
- Accuracy: 0.8279
- F1: 0.8901

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3946 | 0.8012 | 0.8743 |
| No log | 2.0 | 390 | 0.3746 | 0.8329 | 0.8929 |
| 0.3644 | 3.0 | 585 | 0.4288 | 0.8268 | 0.8849 |
| 0.3644 | 4.0 | 780 | 0.5352 | 0.8232 | 0.8841 |
| 0.3644 | 5.0 | 975 | 0.5768 | 0.8268 | 0.8864 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53", "results": []}]}
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-19_16_53
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6071
- Accuracy: 0.8337
- F1: 0.8922

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 195 | 0.3920 | 0.7988 | 0.8624 |
| No log | 2.0 | 390 | 0.3873 | 0.8171 | 0.8739 |
| 0.3673 | 3.0 | 585 | 0.4354 | 0.8256 | 0.8835 |
| 0.3673 | 4.0 | 780 | 0.5358 | 0.8293 | 0.8887 |
| 0.3673 | 5.0 | 975 | 0.5616 | 0.8366 | 0.8923 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26", "results": []}]}
ali2066/finetuned_sentence_itr0_3e-05_all_27_02_2022-22_36_26
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0890
- Accuracy: 0.9750
- F1: 0.9873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0485 | 0.9885 | 0.9942 |
| No log | 2.0 | 208 | 0.0558 | 0.9857 | 0.9927 |
| No log | 3.0 | 312 | 0.0501 | 0.9828 | 0.9913 |
| No log | 4.0 | 416 | 0.0593 | 0.9828 | 0.9913 |
| 0.04 | 5.0 | 520 | 0.0653 | 0.9828 | 0.9913 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22", "results": []}]}
ali2066/finetuned_sentence_itr0_3e-05_editorials_27_02_2022-19_46_22
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3767
- Accuracy: 0.8638
- F1: 0.9165

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 81 | 0.4489 | 0.8309 | 0.8969 |
| No log | 2.0 | 162 | 0.4429 | 0.8272 | 0.8915 |
| No log | 3.0 | 243 | 0.5154 | 0.8529 | 0.9083 |
| No log | 4.0 | 324 | 0.5552 | 0.8309 | 0.8925 |
| No log | 5.0 | 405 | 0.5896 | 0.8309 | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56", "results": []}]}
ali2066/finetuned_sentence_itr0_3e-05_essays_27_02_2022-19_35_56
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6020
- Accuracy: 0.7032
- F1: 0.4851

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 48   | 0.5914          | 0.67     | 0.0294 |
| No log        | 2.0   | 96   | 0.5616          | 0.695    | 0.2824 |
| No log        | 3.0   | 144  | 0.5596          | 0.73     | 0.5909 |
| No log        | 4.0   | 192  | 0.6273          | 0.73     | 0.5    |
| No log        | 5.0   | 240  | 0.6370          | 0.71     | 0.5    |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41", "results": []}]}
ali2066/finetuned_sentence_itr0_3e-05_webDiscourse_27_02_2022-19_27_41
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3514          | 0.8427   | 0.8979 |
| No log        | 2.0   | 390  | 0.3853          | 0.8293   | 0.8936 |
| 0.3147        | 3.0   | 585  | 0.5494          | 0.8268   | 0.8868 |
| 0.3147        | 4.0   | 780  | 0.6235          | 0.8427   | 0.8995 |
| 0.3147        | 5.0   | 975  | 0.8302          | 0.8378   | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
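### Reconstructing the arguments (sketch)

A hedged sketch of `TrainingArguments` matching the hyperparameters listed above; `output_dir` and anything not listed on the card are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="finetuned_sentence",   # hypothetical path, not from the card
    learning_rate=2e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    adam_beta1=0.9,                    # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```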
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22", "results": []}]}
ali2066/finetuned_sentence_itr1_0.0002_all_27_02_2022-18_01_22
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.4087          | 0.8073   | 0.8754 |
| No log        | 2.0   | 390  | 0.3952          | 0.8159   | 0.8803 |
| 0.4084        | 3.0   | 585  | 0.4183          | 0.8195   | 0.8831 |
| 0.4084        | 4.0   | 780  | 0.4596          | 0.8280   | 0.8867 |
| 0.4084        | 5.0   | 975  | 0.4919          | 0.8280   | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
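### Computing the reported metrics (sketch)

A sketch of a `compute_metrics` function that would produce the accuracy and F1 columns above; the choice of `average="weighted"` for F1 is an assumption, not stated on the card.

```python
import numpy as np
from datasets import load_metric

accuracy = load_metric("accuracy")
f1 = load_metric("f1")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels); argmax turns logits into class ids.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy.compute(predictions=predictions, references=labels)["accuracy"],
        "f1": f1.compute(predictions=predictions, references=labels, average="weighted")["f1"],
    }
```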
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26", "results": []}]}
ali2066/finetuned_sentence_itr1_2e-05_all_26_02_2022-04_03_26
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3685          | 0.8293   | 0.8911 |
| No log        | 2.0   | 390  | 0.3495          | 0.8415   | 0.8992 |
| 0.4065        | 3.0   | 585  | 0.3744          | 0.8463   | 0.9014 |
| 0.4065        | 4.0   | 780  | 0.4260          | 0.8427   | 0.8980 |
| 0.4065        | 5.0   | 975  | 0.4548          | 0.8366   | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22", "results": []}]}
ali2066/finetuned_sentence_itr1_2e-05_all_27_02_2022-17_33_22
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 48   | 0.5835          | 0.71     | 0.0333 |
| No log        | 2.0   | 96   | 0.5718          | 0.715    | 0.3871 |
| No log        | 3.0   | 144  | 0.5731          | 0.715    | 0.4    |
| No log        | 4.0   | 192  | 0.6009          | 0.705    | 0.3516 |
| No log        | 5.0   | 240  | 0.6122          | 0.7      | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09", "results": []}]}
ali2066/finetuned_sentence_itr1_2e-05_webDiscourse_27_02_2022-18_54_09
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3591          | 0.8366   | 0.8950 |
| No log        | 2.0   | 390  | 0.3558          | 0.8415   | 0.9012 |
| 0.3647        | 3.0   | 585  | 0.4049          | 0.8427   | 0.8983 |
| 0.3647        | 4.0   | 780  | 0.5030          | 0.8378   | 0.8949 |
| 0.3647        | 5.0   | 975  | 0.5719          | 0.8354   | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
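## How to use (sketch)

A manual-inference sketch without the `pipeline` helper, not part of the original card; the input sentence is made up.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "ali2066/finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

# Tokenize one sentence and convert logits to class probabilities.
inputs = tokenizer("An example sentence to classify.", return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```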
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24", "results": []}]}
ali2066/finetuned_sentence_itr1_3e-05_all_27_02_2022-18_29_24
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3514          | 0.8427   | 0.8979 |
| No log        | 2.0   | 390  | 0.3853          | 0.8293   | 0.8936 |
| 0.3147        | 3.0   | 585  | 0.5494          | 0.8268   | 0.8868 |
| 0.3147        | 4.0   | 780  | 0.6235          | 0.8427   | 0.8995 |
| 0.3147        | 5.0   | 975  | 0.8302          | 0.8378   | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59", "results": []}]}
ali2066/finetuned_sentence_itr2_0.0002_all_27_02_2022-18_06_59
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.4087          | 0.8073   | 0.8754 |
| No log        | 2.0   | 390  | 0.3952          | 0.8159   | 0.8803 |
| 0.4084        | 3.0   | 585  | 0.4183          | 0.8195   | 0.8831 |
| 0.4084        | 4.0   | 780  | 0.4596          | 0.8280   | 0.8867 |
| 0.4084        | 5.0   | 975  | 0.4919          | 0.8280   | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01", "results": []}]}
ali2066/finetuned_sentence_itr2_2e-05_all_26_02_2022-04_09_01
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3685          | 0.8293   | 0.8911 |
| No log        | 2.0   | 390  | 0.3495          | 0.8415   | 0.8992 |
| 0.4065        | 3.0   | 585  | 0.3744          | 0.8463   | 0.9014 |
| 0.4065        | 4.0   | 780  | 0.4260          | 0.8427   | 0.8980 |
| 0.4065        | 5.0   | 975  | 0.4548          | 0.8366   | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58", "results": []}]}
ali2066/finetuned_sentence_itr2_2e-05_all_27_02_2022-17_38_58
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 48   | 0.5835          | 0.71     | 0.0333 |
| No log        | 2.0   | 96   | 0.5718          | 0.715    | 0.3871 |
| No log        | 3.0   | 144  | 0.5731          | 0.715    | 0.4    |
| No log        | 4.0   | 192  | 0.6009          | 0.705    | 0.3516 |
| No log        | 5.0   | 240  | 0.6122          | 0.7      | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32", "results": []}]}
ali2066/finetuned_sentence_itr2_2e-05_webDiscourse_27_02_2022-18_56_32
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3591          | 0.8366   | 0.8950 |
| No log        | 2.0   | 390  | 0.3558          | 0.8415   | 0.9012 |
| 0.3647        | 3.0   | 585  | 0.4049          | 0.8427   | 0.8983 |
| 0.3647        | 4.0   | 780  | 0.5030          | 0.8378   | 0.8949 |
| 0.3647        | 5.0   | 975  | 0.5719          | 0.8354   | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02", "results": []}]}
ali2066/finetuned_sentence_itr2_3e-05_all_27_02_2022-18_35_02
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3514          | 0.8427   | 0.8979 |
| No log        | 2.0   | 390  | 0.3853          | 0.8293   | 0.8936 |
| 0.3147        | 3.0   | 585  | 0.5494          | 0.8268   | 0.8868 |
| 0.3147        | 4.0   | 780  | 0.6235          | 0.8427   | 0.8995 |
| 0.3147        | 5.0   | 975  | 0.8302          | 0.8378   | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34", "results": []}]}
ali2066/finetuned_sentence_itr3_0.0002_all_27_02_2022-18_12_34
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.4087          | 0.8073   | 0.8754 |
| No log        | 2.0   | 390  | 0.3952          | 0.8159   | 0.8803 |
| 0.4084        | 3.0   | 585  | 0.4183          | 0.8195   | 0.8831 |
| 0.4084        | 4.0   | 780  | 0.4596          | 0.8280   | 0.8867 |
| 0.4084        | 5.0   | 975  | 0.4919          | 0.8280   | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37", "results": []}]}
ali2066/finetuned_sentence_itr3_2e-05_all_26_02_2022-04_14_37
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3685          | 0.8293   | 0.8911 |
| No log        | 2.0   | 390  | 0.3495          | 0.8415   | 0.8992 |
| 0.4065        | 3.0   | 585  | 0.3744          | 0.8463   | 0.9014 |
| 0.4065        | 4.0   | 780  | 0.4260          | 0.8427   | 0.8980 |
| 0.4065        | 5.0   | 975  | 0.4548          | 0.8366   | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32", "results": []}]}
ali2066/finetuned_sentence_itr3_2e-05_all_27_02_2022-17_44_32
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6049
- Accuracy: 0.6926
- F1: 0.4160

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 48   | 0.5835          | 0.71     | 0.0333 |
| No log        | 2.0   | 96   | 0.5718          | 0.715    | 0.3871 |
| No log        | 3.0   | 144  | 0.5731          | 0.715    | 0.4    |
| No log        | 4.0   | 192  | 0.6009          | 0.705    | 0.3516 |
| No log        | 5.0   | 240  | 0.6122          | 0.7      | 0.4000 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05", "results": []}]}
ali2066/finetuned_sentence_itr3_2e-05_webDiscourse_27_02_2022-18_59_05
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3591          | 0.8366   | 0.8950 |
| No log        | 2.0   | 390  | 0.3558          | 0.8415   | 0.9012 |
| 0.3647        | 3.0   | 585  | 0.4049          | 0.8427   | 0.8983 |
| 0.3647        | 4.0   | 780  | 0.5030          | 0.8378   | 0.8949 |
| 0.3647        | 5.0   | 975  | 0.5719          | 0.8354   | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40", "results": []}]}
ali2066/finetuned_sentence_itr3_3e-05_all_27_02_2022-18_40_40
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7600
- Accuracy: 0.8144
- F1: 0.8788

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3514          | 0.8427   | 0.8979 |
| No log        | 2.0   | 390  | 0.3853          | 0.8293   | 0.8936 |
| 0.3147        | 3.0   | 585  | 0.5494          | 0.8268   | 0.8868 |
| 0.3147        | 4.0   | 780  | 0.6235          | 0.8427   | 0.8995 |
| 0.3147        | 5.0   | 975  | 0.8302          | 0.8378   | 0.8965 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11", "results": []}]}
ali2066/finetuned_sentence_itr4_0.0002_all_27_02_2022-18_18_11
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.4087          | 0.8073   | 0.8754 |
| No log        | 2.0   | 390  | 0.3952          | 0.8159   | 0.8803 |
| 0.4084        | 3.0   | 585  | 0.4183          | 0.8195   | 0.8831 |
| 0.4084        | 4.0   | 780  | 0.4596          | 0.8280   | 0.8867 |
| 0.4084        | 5.0   | 975  | 0.4919          | 0.8280   | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09", "results": []}]}
ali2066/finetuned_sentence_itr4_2e-05_all_26_02_2022-04_20_09
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4095
- Accuracy: 0.8263
- F1: 0.8865

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3685          | 0.8293   | 0.8911 |
| No log        | 2.0   | 390  | 0.3495          | 0.8415   | 0.8992 |
| 0.4065        | 3.0   | 585  | 0.3744          | 0.8463   | 0.9014 |
| 0.4065        | 4.0   | 780  | 0.4260          | 0.8427   | 0.8980 |
| 0.4065        | 5.0   | 975  | 0.4548          | 0.8366   | 0.8940 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05", "results": []}]}
ali2066/finetuned_sentence_itr4_2e-05_all_27_02_2022-17_50_05
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
ali2066/finetuned_sentence_itr4_2e-05_webDiscourse_27_02_2022-19_01_41
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3962
- Accuracy: 0.8231
- F1: 0.8873

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.3591          | 0.8366   | 0.8950 |
| No log        | 2.0   | 390  | 0.3558          | 0.8415   | 0.9012 |
| 0.3647        | 3.0   | 585  | 0.4049          | 0.8427   | 0.8983 |
| 0.3647        | 4.0   | 780  | 0.5030          | 0.8378   | 0.8949 |
| 0.3647        | 5.0   | 975  | 0.5719          | 0.8354   | 0.8943 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19", "results": []}]}
ali2066/finetuned_sentence_itr4_3e-05_all_27_02_2022-18_46_19
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.4087          | 0.8073   | 0.8754 |
| No log        | 2.0   | 390  | 0.3952          | 0.8159   | 0.8803 |
| 0.4084        | 3.0   | 585  | 0.4183          | 0.8195   | 0.8831 |
| 0.4084        | 4.0   | 780  | 0.4596          | 0.8280   | 0.8867 |
| 0.4084        | 5.0   | 975  | 0.4919          | 0.8280   | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
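### Naming scheme (sketch)

The run names in this series encode an iteration index, a learning rate (2e-05, 3e-05, or 0.0002), a corpus tag, and a timestamp. A hypothetical reconstruction of that naming scheme; `train_once` is a stand-in, not an actual entry point from the cards.

```python
from datetime import datetime

def run_name(itr, lr, corpus):
    # e.g. finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39
    stamp = datetime.now().strftime("%d_%m_%Y-%H_%M_%S")
    return f"finetuned_sentence_itr{itr}_{lr}_{corpus}_{stamp}"

for itr in range(8):
    for lr in (2e-05, 3e-05, 0.0002):
        name = run_name(itr, lr, "all")
        # train_once(name, learning_rate=lr)  # assumed training entry point
```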
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39", "results": []}]}
ali2066/finetuned_sentence_itr5_2e-05_all_26_02_2022-04_25_39
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4676
- Accuracy: 0.8299
- F1: 0.8892

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log        | 1.0   | 195  | 0.4087          | 0.8073   | 0.8754 |
| No log        | 2.0   | 390  | 0.3952          | 0.8159   | 0.8803 |
| 0.4084        | 3.0   | 585  | 0.4183          | 0.8195   | 0.8831 |
| 0.4084        | 4.0   | 780  | 0.4596          | 0.8280   | 0.8867 |
| 0.4084        | 5.0   | 975  | 0.4919          | 0.8280   | 0.8873 |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["accuracy", "f1"], "model-index": [{"name": "finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13", "results": []}]}
ali2066/finetuned_sentence_itr6_2e-05_all_26_02_2022-04_31_13
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
ali2066/finetuned_sentence_itr7_2e-05_all_26_02_2022-04_36_45
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ali2066/finetuned_token_2e-05_15_02_2022-23_42_20
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ali2066/finetuned_token_2e-05_16_02_2022-00_58_25
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ali2066/finetuned_token_2e-05_16_02_2022-01_05_29
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
token-classification
transformers
# finetuned_token_2e-05_16_02_2022-01_30_30

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- Precision: 0.3384
- Recall: 0.3492
- F1: 0.3437
- Accuracy: 0.9442

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 38   | 0.3180          | 0.0985    | 0.1648 | 0.1233 | 0.8643   |
| No log        | 2.0   | 76   | 0.2667          | 0.1962    | 0.2698 | 0.2272 | 0.8926   |
| No log        | 3.0   | 114  | 0.2374          | 0.2268    | 0.3005 | 0.2585 | 0.9062   |
| No log        | 4.0   | 152  | 0.2305          | 0.2248    | 0.3247 | 0.2657 | 0.9099   |
| No log        | 5.0   | 190  | 0.2289          | 0.2322    | 0.3166 | 0.2679 | 0.9102   |

### Framework versions

- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
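## How to use (sketch)

A token-classification inference sketch, not part of the original card; `aggregation_strategy="simple"` groups word pieces into entity spans, and the input sentence is made up.

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ali2066/finetuned_token_2e-05_16_02_2022-01_30_30",
    aggregation_strategy="simple",
)
print(ner("The committee approved the proposal on Tuesday."))
```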
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "metrics": ["precision", "recall", "f1", "accuracy"], "model-index": [{"name": "finetuned_token_2e-05_16_02_2022-01_30_30", "results": []}]}
ali2066/finetuned_token_2e-05_16_02_2022-01_30_30
null
[ "transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ali2066/finetuned_token_2e-05_16_02_2022-01_53_40
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00