pipeline_tag: stringclasses (48 values)
library_name: stringclasses (205 values)
text: stringlengths (0 to 18.3M)
metadata: stringlengths (2 to 1.07B)
id: stringlengths (5 to 122)
last_modified: null
tags: listlengths (1 to 1.84k)
sha: null
created_at: stringlengths (25 to 25)
null
null
{}
Zazik/t5-small-finetuned-xsum
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# ZerO DialoGPT Model
{"tags": ["conversational"]}
Zeer0/DialoGPT-small-ZerO
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zelalem-Getahun/wav2vec2-large-xlsr-turkish-demo-colab
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
null
{"tags": ["conversational"]}
Zen1/Derekbot
null
[ "conversational", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# My Awesome Model
{"tags": ["conversational"]}
Zen1/test1
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Rick DialoGPT Model
{"tags": ["conversational"]}
Zeph/DialoGPT-small-rick
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zephaus/Chrombot
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Chrombot
{"tags": ["conversational"]}
Zephaus/Chromrepo
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zer4/Arcane
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zerv/bert-finetuned-ner
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zeus/DialoGPT-small-root
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zeyang/first
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# T5-Base Fine-Tuned on SQuAD for Question Generation ### Model in Action: ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration trained_model_path = 'ZhangCheng/T5-Base-Fine-Tuned-for-Question-Generation' trained_tokenizer_path = 'ZhangCheng/T5-Base-Fine-Tuned-for-Question-Generation' class QuestionGeneration: def __init__(self, model_dir=None): self.model = T5ForConditionalGeneration.from_pretrained(trained_model_path) self.tokenizer = T5Tokenizer.from_pretrained(trained_tokenizer_path) self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') self.model = self.model.to(self.device) self.model.eval() def generate(self, answer: str, context: str): input_text = '<answer> %s <context> %s ' % (answer, context) encoding = self.tokenizer.encode_plus( input_text, return_tensors='pt' ) input_ids = encoding['input_ids'].to(self.device) attention_mask = encoding['attention_mask'].to(self.device) outputs = self.model.generate( input_ids=input_ids, attention_mask=attention_mask ) question = self.tokenizer.decode( outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True ) return {'question': question, 'answer': answer, 'context': context} if __name__ == "__main__": context = 'ZhangCheng fine-tuned T5 on SQuAD dataset for question generation.' answer = 'ZhangCheng' QG = QuestionGeneration() qa = QG.generate(answer, context) print(qa['question']) # Output: # Who fine-tuned T5 on SQuAD dataset for question generation? ```
{"language": "en", "tags": ["Question Generation"], "datasets": ["squad"], "widget": [{"text": "<answer> T5 <context> Cheng fine-tuned T5 on SQuAD for question generation.", "example_title": "Example 1"}, {"text": "<answer> SQuAD <context> Cheng fine-tuned T5 on SQuAD dataset for question generation.", "example_title": "Example 2"}, {"text": "<answer> thousands <context> Transformers provides thousands of pre-trained models to perform tasks on different modalities such as text, vision, and audio.", "example_title": "Example 3"}]}
ZhangCheng/T5-Base-finetuned-for-Question-Generation
null
[ "transformers", "pytorch", "tf", "safetensors", "t5", "text2text-generation", "Question Generation", "en", "dataset:squad", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text2text-generation
transformers
# T5v1.1-Base Fine-Tuned on SQuAD for Question Generation ### Model in Action: ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration trained_model_path = 'ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation' trained_tokenizer_path = 'ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation' class QuestionGeneration: def __init__(self): self.model = T5ForConditionalGeneration.from_pretrained(trained_model_path) self.tokenizer = T5Tokenizer.from_pretrained(trained_tokenizer_path) self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') self.model = self.model.to(self.device) self.model.eval() def generate(self, answer:str, context:str): input_text = '<answer> %s <context> %s ' % (answer, context) encoding = self.tokenizer.encode_plus( input_text, return_tensors='pt' ) input_ids = encoding['input_ids'].to(self.device) attention_mask = encoding['attention_mask'].to(self.device) outputs = self.model.generate( input_ids = input_ids, attention_mask = attention_mask ) question = self.tokenizer.decode( outputs[0], skip_special_tokens = True, clean_up_tokenization_spaces = True ) return {'question': question, 'answer': answer} if __name__ == "__main__": context = 'ZhangCheng fine-tuned T5v1.1 on SQuAD dataset for question generation.' answer = 'ZhangCheng' QG = QuestionGeneration() qa = QG.generate(answer, context) print(qa['question']) # Output: # Who fine-tuned T5v1.1 on SQuAD? ```
{"language": "en", "tags": ["Question Generation"], "datasets": ["squad"], "widget": [{"text": "<answer> T5v1.1 <context> Cheng fine-tuned T5v1.1 on SQuAD for question generation.", "example_title": "Example 1"}, {"text": "<answer> SQuAD <context> Cheng fine-tuned T5v1.1 on SQuAD dataset for question generation.", "example_title": "Example 2"}, {"text": "<answer> thousands <context> Transformers provides thousands of pre-trained models to perform tasks on different modalities such as text, vision, and audio.", "example_title": "Example 3"}]}
ZhangCheng/T5v1.1-Base-Fine-Tuned-for-Question-Generation
null
[ "transformers", "pytorch", "safetensors", "t5", "text2text-generation", "Question Generation", "en", "dataset:squad", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
ZhaoyiGUAN/Bert_Fintuning_Test1
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
fill-mask
transformers
{}
ZhaoyiGUAN/Bert_cn_finetuning_1
null
[ "transformers", "pytorch", "bert", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ZhichenRen/siku
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ZhuXinyuan/ZXY
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Ziang/distilbert-base-uncased-finetuned-squad
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
transformers
# SpERT SpERT is the relation extraction model [SpERT: Span-based Entity and Relation Transformer](https://github.com/lavis-nlp/spert). This checkpoint was trained on the CoNLL04 dataset. ## Use See the loading sketch below. ## References ``` Markus Eberts, Adrian Ulges. Span-based Joint Entity and Relation Extraction with Transformer Pre-training. 24th European Conference on Artificial Intelligence, 2020. ```
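The `## Use` section of the Zichuu/spert card above is empty. A minimal loading sketch, not taken from the card: it assumes the checkpoint can be opened as a plain BERT encoder with `AutoModel`; the SpERT span and relation classification heads live in the lavis-nlp/spert codebase, not in Transformers.

```python
# Hedged sketch (assumption): load Zichuu/spert as a plain BERT encoder only.
# The SpERT span/relation heads are implemented in the lavis-nlp/spert repo
# and are not reconstructed here.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Zichuu/spert")  # assumes the repo ships a tokenizer config
model = AutoModel.from_pretrained("Zichuu/spert")

inputs = tokenizer("John Wilkes Booth shot Abraham Lincoln.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```

For actual entity and relation extraction, the prediction scripts in the lavis-nlp/spert repository would need to be pointed at this checkpoint.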
{}
Zichuu/spert
null
[ "transformers", "pytorch", "bert", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
{}
ZikXewen/wav2vec2-large-xlsr-53-thai-demo
null
[ "transformers", "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
automatic-speech-recognition
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-colab This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-chinese-zh-cn) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.01 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu111 - Datasets 1.18.3 - Tokenizers 0.10.3
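The auto-generated card above records only training hyperparameters. A minimal inference sketch, not from the card, assuming the checkpoint is a standard CTC wav2vec2 model that works with the generic ASR pipeline on 16 kHz mono audio:

```python
# Hedged sketch (assumption): the checkpoint exposes a processor and a CTC head,
# so the generic automatic-speech-recognition pipeline can run it.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Zirk/wav2vec2-base-timit-demo-colab")

# "sample.wav" is a placeholder path; wav2vec2 models expect 16 kHz mono audio.
result = asr("sample.wav")
print(result["text"])
```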
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "wav2vec2-base-timit-demo-colab", "results": []}]}
Zirk/wav2vec2-base-timit-demo-colab
null
[ "transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
ZiweiG/ziwei-bert-imdb
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
ZiweiG/ziwei-bertimdb-prob
null
[ "transformers", "pytorch", "jax", "bert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zixiang/chinese-roberta-wwm-ext-large
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# BDBot2
{"tags": ["conversational"]}
Zixtrauce/BDBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# BrandonBot4Epochs
{"tags": ["conversational"]}
Zixtrauce/BDBot4Epoch
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# BaekBot
{"tags": ["conversational"]}
Zixtrauce/BaekBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# BrandonBot
{"tags": ["conversational"]}
Zixtrauce/BrandonBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# BrandonBot2
{"tags": ["conversational"]}
Zixtrauce/BrandonBot2
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# JohnBot
{"tags": ["conversational"]}
Zixtrauce/JohnBot
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "has_space", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# SelfAwareness
{"tags": ["conversational"]}
Zixtrauce/SelfAwareness
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zodic/Yfchinn
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zoe/model_covid
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilgpt2-finetuned-restaurant-reviews This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a subset of the Yelp restaurant reviews dataset. It achieves the following results on the evaluation set: - Loss: 3.4668 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.6331 | 1.0 | 2536 | 3.5280 | | 3.5676 | 2.0 | 5072 | 3.4793 | | 3.5438 | 3.0 | 7608 | 3.4668 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu102 - Datasets 1.18.2 - Tokenizers 0.11.0
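The card above stops at training details. A minimal generation sketch, not from the card, assuming the checkpoint behaves like a standard distilgpt2 causal language model:

```python
# Hedged sketch (assumption): standard causal-LM generation works for this checkpoint.
from transformers import pipeline

generator = pipeline("text-generation", model="Zohar/distilgpt2-finetuned-restaurant-reviews")

# Prompt and sampling settings are illustrative, not taken from the model card.
samples = generator("The pasta here is", max_length=50, do_sample=True, top_p=0.9, num_return_sequences=1)
print(samples[0]["generated_text"])
```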
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilgpt2-finetuned-restaurant-reviews", "results": []}]}
Zohar/distilgpt2-finetuned-restaurant-reviews
null
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zongo/DialoGPT-medium-chocola
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zora/Tutorial1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zsuuee/A
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zuabir10/Transformer
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zubair2019/bert-base-cased-finetuned-en-to-ro
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# Gandalf DialoGPT Model
{"tags": ["conversational"]}
Zuha/DialoGPT-small-gandalf
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
ZunaHexus/DialoGPT-small-joshua
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zunn/1
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
Zwrok/Start
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# BART-LARGE fine-tuned on SQuADv2 This is the bart-large model fine-tuned on the SQuADv2 dataset for the question answering task. ## Model details BART was proposed in the [paper](https://arxiv.org/abs/1910.13461) **BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension**. BART is a seq2seq model intended for both NLG and NLU tasks. To use BART for question answering tasks, we feed the complete document into the encoder and decoder, and use the top hidden state of the decoder as a representation for each word. This representation is used to classify the token. As reported in the paper, bart-large achieves results comparable to RoBERTa on SQuAD. Another notable thing about BART is that it can handle sequences of up to 1024 tokens. | Param | #Value | |---------------------|--------| | encoder layers | 12 | | decoder layers | 12 | | hidden size | 4096 | | num attention heads | 16 | | on disk size | 1.63GB | ## Model training This model was trained with the following parameters using the simpletransformers wrapper: ``` train_args = { 'learning_rate': 1e-5, 'max_seq_length': 512, 'doc_stride': 512, 'overwrite_output_dir': True, 'reprocess_input_data': False, 'train_batch_size': 8, 'num_train_epochs': 2, 'gradient_accumulation_steps': 2, 'no_cache': True, 'use_cached_eval_features': False, 'save_model_every_epoch': False, 'output_dir': "bart-squadv2", 'eval_batch_size': 32, 'fp16_opt_level': 'O2', } ``` [You can even train your own model using this colab notebook](https://colab.research.google.com/drive/1I5cK1M_0dLaf5xoewh6swcm5nAInfwHy?usp=sharing) ## Results ```{"correct": 6832, "similar": 4409, "incorrect": 632, "eval_loss": -14.950117511952177}``` ## Model in Action 🚀 ```python3 from transformers import BartTokenizer, BartForQuestionAnswering import torch tokenizer = BartTokenizer.from_pretrained('a-ware/bart-squadv2') model = BartForQuestionAnswering.from_pretrained('a-ware/bart-squadv2') question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" encoding = tokenizer(question, text, return_tensors='pt') input_ids = encoding['input_ids'] attention_mask = encoding['attention_mask'] start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2] all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0]) answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]) answer = tokenizer.convert_tokens_to_ids(answer.split()) answer = tokenizer.decode(answer) #answer => 'a nice puppet' ``` > Created with ❤️ by A-ware UG [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/aware-ai)
{"datasets": ["squad_v2"]}
aware-ai/bart-squadv2
null
[ "transformers", "pytorch", "safetensors", "bart", "question-answering", "dataset:squad_v2", "arxiv:1910.13461", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aware-ai/distilbart-xsum-12-3-squadv2
null
[ "transformers", "pytorch", "safetensors", "bart", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aware-ai/distilbart-xsum-12-6-squadv2
null
[ "transformers", "pytorch", "safetensors", "bart", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aware-ai/longformer-QA
null
[ "transformers", "pytorch", "tf", "safetensors", "longformer", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aware-ai/longformer-squadv2
null
[ "transformers", "pytorch", "tf", "safetensors", "longformer", "question-answering", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# MobileBERT fine-tuned on the SQuAD V2 dataset This model is based on the MobileBERT architecture, which is suitable for handheld devices or devices with low resources. ## Usage Using the transformers library, first load the model and tokenizer: ``` from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline model_name = "aware-ai/mobilebert-squadv2" model = AutoModelForQuestionAnswering.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) ``` Then use the question answering pipeline: ``` qa_engine = pipeline('question-answering', model=model, tokenizer=tokenizer) QA_input = { 'question': 'your question?', 'context': '. your context ................ ' } res = qa_engine(QA_input) ```
{"language": ["en"], "library_name": "transformers", "datasets": ["squad_v2"], "pipeline_tag": "question-answering"}
aware-ai/mobilebert-squadv2
null
[ "transformers", "pytorch", "safetensors", "mobilebert", "question-answering", "en", "dataset:squad_v2", "endpoints_compatible", "has_space", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
# Roberta-LARGE fine-tuned on SQuADv2 This is the roberta-large model fine-tuned on the SQuADv2 dataset for question answerability classification. ## Model details This model is simply a sequence classification model with two inputs (context and question) in a list. The result is either [1] for answerable or [0] if it is not answerable. It was trained for 4 epochs on the SQuADv2 dataset and can be used to filter out which contexts are good to pass to the QA model, to avoid bad answers. ## Model training This model was trained with the following parameters using the simpletransformers wrapper: ``` train_args = { 'learning_rate': 1e-5, 'max_seq_length': 512, 'overwrite_output_dir': True, 'reprocess_input_data': False, 'train_batch_size': 4, 'num_train_epochs': 4, 'gradient_accumulation_steps': 2, 'no_cache': True, 'use_cached_eval_features': False, 'save_model_every_epoch': False, 'output_dir': "bart-squadv2", 'eval_batch_size': 8, 'fp16_opt_level': 'O2', } ``` ## Results ```{"accuracy": 90.48%}``` ## Model in Action 🚀 ```python3 from simpletransformers.classification import ClassificationModel model = ClassificationModel('roberta', 'a-ware/roberta-large-squadv2', num_labels=2, args=train_args) predictions, raw_outputs = model.predict([["my dog is a year old. he loves to go into the rain", "how old is my dog ?"]]) print(predictions) ==> [1] ``` > Created with ❤️ by A-ware UG [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/aware-ai)
{"datasets": ["squad_v2"]}
aware-ai/roberta-large-squad-classification
null
[ "transformers", "pytorch", "jax", "safetensors", "roberta", "text-classification", "dataset:squad_v2", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aware-ai/roberta-large-squadv2
null
[ "transformers", "pytorch", "jax", "roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
{}
aware-ai/xlmroberta-QA
null
[ "transformers", "pytorch", "xlm-roberta", "question-answering", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
question-answering
transformers
# XLM-ROBERTA-LARGE fine-tuned on SQuADv2 This is the xlm-roberta-large model fine-tuned on the SQuADv2 dataset for the question answering task. ## Model details XLM-RoBERTa was proposed in the [paper](https://arxiv.org/pdf/1911.02116.pdf) **XLM-R: State-of-the-art cross-lingual understanding through self-supervision**. ## Model training This model was trained with the following parameters using the simpletransformers wrapper: ``` train_args = { 'learning_rate': 1e-5, 'max_seq_length': 512, 'doc_stride': 512, 'overwrite_output_dir': True, 'reprocess_input_data': False, 'train_batch_size': 8, 'num_train_epochs': 2, 'gradient_accumulation_steps': 2, 'no_cache': True, 'use_cached_eval_features': False, 'save_model_every_epoch': False, 'output_dir': "bart-squadv2", 'eval_batch_size': 32, 'fp16_opt_level': 'O2', } ``` ## Results ```{"correct": 6961, "similar": 4359, "incorrect": 553, "eval_loss": -12.177856394381962}``` ## Model in Action 🚀 ```python3 from transformers import XLMRobertaTokenizer, XLMRobertaForQuestionAnswering import torch tokenizer = XLMRobertaTokenizer.from_pretrained('a-ware/xlmroberta-squadv2') model = XLMRobertaForQuestionAnswering.from_pretrained('a-ware/xlmroberta-squadv2') question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet" encoding = tokenizer(question, text, return_tensors='pt') input_ids = encoding['input_ids'] attention_mask = encoding['attention_mask'] start_scores, end_scores = model(input_ids, attention_mask=attention_mask, output_attentions=False)[:2] all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0]) answer = ' '.join(all_tokens[torch.argmax(start_scores) : torch.argmax(end_scores)+1]) answer = tokenizer.convert_tokens_to_ids(answer.split()) answer = tokenizer.decode(answer) #answer => 'a nice puppet' ``` > Created with ❤️ by A-ware UG [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/aware-ai)
{"datasets": ["squad_v2"]}
aware-ai/xlmroberta-squadv2
null
[ "transformers", "pytorch", "safetensors", "xlm-roberta", "question-answering", "dataset:squad_v2", "arxiv:1911.02116", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-generation
transformers
# DialoGPT model fine-tuned on conservative Muslim Discord messages
{"tags": ["conversational"]}
a01709042/DialoGPT-medium
null
[ "transformers", "pytorch", "gpt2", "text-generation", "conversational", "autotrain_compatible", "endpoints_compatible", "text-generation-inference", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
a01709042/DialoGPT-small
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
a01709042/MY_MODEL_NAME
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
import torch from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") model = AutoModelWithLMHead.from_pretrained("r3dhummingbird/DialoGPT-medium-joshua") # Let's chat for 4 lines for step in range(4): # encode the new user input, add the eos_token and return a tensor in Pytorch new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt') # print(new_user_input_ids) # append the new user input tokens to the chat history bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids # generate a response while limiting the total chat history to 1000 tokens, chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) # pretty print last output tokens from bot print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
{}
a1fadog13/DialoGPT-small-joshua
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
summarization
transformers
# BART for Gigaword - This model was created by fine-tuning the `facebook/bart-large-cnn` weights (also on HuggingFace) for the Gigaword dataset. The model was fine-tuned on the Gigaword training set for 3 epochs, and the model with the highest ROUGE-1 score on the training set batches was kept. - The BART Tokenizer for CNN-Dailymail was used in the fine-tuning process and that is the tokenizer that will be loaded automatically when doing: ``` from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("a1noack/bart-large-gigaword") ``` # Summary generation - This model achieves ROUGE-1 / ROUGE-2 / ROUGE-L of 37.28 / 18.58 / 34.53 on the Gigaword test set; this is pretty good when compared to PEGASUS, `google/pegasus-gigaword`, which achieves 39.12 / 19.86 / 36.24. - To achieve these results, generate text using the code below. `text_list` is a list of input text strings. ``` from transformers import BartForConditionalGeneration model = BartForConditionalGeneration.from_pretrained("a1noack/bart-large-gigaword") input_ids_list = tokenizer(text_list, truncation=True, max_length=128, return_tensors='pt', padding=True)['input_ids'] output_ids_list = model.generate(input_ids_list, min_length=0) outputs_list = tokenizer.batch_decode(output_ids_list, skip_special_tokens=True, clean_up_tokenization_spaces=False) ```
{"license": "mit", "tags": ["summarization"], "datasets": ["gigaword"], "thumbnail": "https://en.wikipedia.org/wiki/Bart_Simpson#/media/File:Bart_Simpson_200px.png"}
a1noack/bart-large-gigaword
null
[ "transformers", "pytorch", "bart", "summarization", "dataset:gigaword", "license:mit", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
aAronhun/DialoGPT-small-chicken_run
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
aRchMaGe/whatever
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_emotion_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9818 - F1: 0.7348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.551070618629693e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.7431 | 0.6530 | | No log | 2.0 | 408 | 0.6943 | 0.7333 | | 0.5176 | 3.0 | 612 | 0.8456 | 0.7326 | | 0.5176 | 4.0 | 816 | 0.9818 | 0.7348 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
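This and the following aXhyra cards share the same auto-generated template (hyperparameters plus a results table) and omit usage. A minimal classification sketch, not from the card, assuming the checkpoint was trained on the tweet_eval `emotion` subset as its model-index states:

```python
# Hedged sketch (assumption): if the config only exposes LABEL_0..LABEL_3,
# they should map to the tweet_eval "emotion" label order:
# 0 anger, 1 joy, 2 optimism, 3 sadness.
from transformers import pipeline

classifier = pipeline("text-classification", model="aXhyra/demo_emotion_1234567")
print(classifier("I can't believe they cancelled the show, this is so frustrating."))
```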
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_emotion_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7348035780583043, "name": "F1"}]}]}]}
aXhyra/demo_emotion_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_emotion_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9818 - F1: 0.7348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.551070618629693e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.7431 | 0.6530 | | No log | 2.0 | 408 | 0.6943 | 0.7333 | | 0.5176 | 3.0 | 612 | 0.8456 | 0.7326 | | 0.5176 | 4.0 | 816 | 0.9818 | 0.7348 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_emotion_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7348035780583043, "name": "F1"}]}]}]}
aXhyra/demo_emotion_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_emotion_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9818 - F1: 0.7348 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.551070618629693e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.7431 | 0.6530 | | No log | 2.0 | 408 | 0.6943 | 0.7333 | | 0.5176 | 3.0 | 612 | 0.8456 | 0.7326 | | 0.5176 | 4.0 | 816 | 0.9818 | 0.7348 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_emotion_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7348035780583043, "name": "F1"}]}]}]}
aXhyra/demo_emotion_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_hate_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8697 - F1: 0.7773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.320702985778492e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 282 | 0.4850 | 0.7645 | | 0.3877 | 2.0 | 564 | 0.5160 | 0.7856 | | 0.3877 | 3.0 | 846 | 0.6927 | 0.7802 | | 0.1343 | 4.0 | 1128 | 0.8697 | 0.7773 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_hate_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7772939485986298, "name": "F1"}]}]}]}
aXhyra/demo_hate_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_hate_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8697 - F1: 0.7773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.320702985778492e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 282 | 0.4850 | 0.7645 | | 0.3877 | 2.0 | 564 | 0.5160 | 0.7856 | | 0.3877 | 3.0 | 846 | 0.6927 | 0.7802 | | 0.1343 | 4.0 | 1128 | 0.8697 | 0.7773 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_hate_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7772939485986298, "name": "F1"}]}]}]}
aXhyra/demo_hate_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_hate_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.8697 - F1: 0.7773 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 7.320702985778492e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 282 | 0.4850 | 0.7645 | | 0.3877 | 2.0 | 564 | 0.5160 | 0.7856 | | 0.3877 | 3.0 | 846 | 0.6927 | 0.7802 | | 0.1343 | 4.0 | 1128 | 0.8697 | 0.7773 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_hate_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7772939485986298, "name": "F1"}]}]}]}
aXhyra/demo_hate_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_irony_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.2905 - F1: 0.6858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7735294032820418e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 358 | 0.5872 | 0.6786 | | 0.5869 | 2.0 | 716 | 0.6884 | 0.6952 | | 0.3417 | 3.0 | 1074 | 0.9824 | 0.6995 | | 0.3417 | 4.0 | 1432 | 1.2905 | 0.6858 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_irony_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.685764300192161, "name": "F1"}]}]}]}
aXhyra/demo_irony_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_irony_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.2905 - F1: 0.6858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7735294032820418e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 358 | 0.5872 | 0.6786 | | 0.5869 | 2.0 | 716 | 0.6884 | 0.6952 | | 0.3417 | 3.0 | 1074 | 0.9824 | 0.6995 | | 0.3417 | 4.0 | 1432 | 1.2905 | 0.6858 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_irony_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.685764300192161, "name": "F1"}]}]}]}
aXhyra/demo_irony_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_irony_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 1.2905 - F1: 0.6858 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2.7735294032820418e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 358 | 0.5872 | 0.6786 | | 0.5869 | 2.0 | 716 | 0.6884 | 0.6952 | | 0.3417 | 3.0 | 1074 | 0.9824 | 0.6995 | | 0.3417 | 4.0 | 1432 | 1.2905 | 0.6858 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_irony_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.685764300192161, "name": "F1"}]}]}]}
aXhyra/demo_irony_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_sentiment_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.6332 - F1: 0.7114 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8.62486660723695e-06 - train_batch_size: 64 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7592 | 1.0 | 713 | 0.6509 | 0.6834 | | 0.6389 | 2.0 | 1426 | 0.6318 | 0.7011 | | 0.5647 | 3.0 | 2139 | 0.6320 | 0.7041 | | 0.5391 | 4.0 | 2852 | 0.6332 | 0.7114 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_sentiment_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.7113620044371958, "name": "F1"}]}]}]}
aXhyra/demo_sentiment_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_sentiment_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.6332 - F1: 0.7114 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8.62486660723695e-06 - train_batch_size: 64 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7592 | 1.0 | 713 | 0.6509 | 0.6834 | | 0.6389 | 2.0 | 1426 | 0.6318 | 0.7011 | | 0.5647 | 3.0 | 2139 | 0.6320 | 0.7041 | | 0.5391 | 4.0 | 2852 | 0.6332 | 0.7114 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_sentiment_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.7113620044371958, "name": "F1"}]}]}]}
aXhyra/demo_sentiment_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # demo_sentiment_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.6332 - F1: 0.7114 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8.62486660723695e-06 - train_batch_size: 64 - eval_batch_size: 64 - seed: 0 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7592 | 1.0 | 713 | 0.6509 | 0.6834 | | 0.6389 | 2.0 | 1426 | 0.6318 | 0.7011 | | 0.5647 | 3.0 | 2139 | 0.6320 | 0.7041 | | 0.5391 | 4.0 | 2852 | 0.6332 | 0.7114 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "demo_sentiment_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.7113620044371958, "name": "F1"}]}]}]}
aXhyra/demo_sentiment_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
null
null
{}
aXhyra/distilbert-base-cased-finetuned-sentiment
null
[ "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_trained_1234567 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9051 - F1: 0.7302 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.961635072722524e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 1234567 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.6480 | 0.7231 | | No log | 2.0 | 408 | 0.6114 | 0.7403 | | 0.5045 | 3.0 | 612 | 0.7592 | 0.7311 | | 0.5045 | 4.0 | 816 | 0.9051 | 0.7302 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "emotion_trained_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7301562209701973, "name": "F1"}]}]}]}
aXhyra/emotion_trained_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_trained_31415 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9274 - F1: 0.7198 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.961635072722524e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 31415 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.6177 | 0.7137 | | No log | 2.0 | 408 | 0.7489 | 0.6761 | | 0.5082 | 3.0 | 612 | 0.8233 | 0.7283 | | 0.5082 | 4.0 | 816 | 0.9274 | 0.7198 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "emotion_trained_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.719757533529152, "name": "F1"}]}]}]}
aXhyra/emotion_trained_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # emotion_trained_42 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 0.9012 - F1: 0.7361 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6.961635072722524e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 204 | 0.6131 | 0.6955 | | No log | 2.0 | 408 | 0.5816 | 0.7297 | | 0.5148 | 3.0 | 612 | 0.8942 | 0.7199 | | 0.5148 | 4.0 | 816 | 0.9012 | 0.7361 | ### Framework versions - Transformers 4.12.5 - Pytorch 1.9.1 - Datasets 1.16.1 - Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "emotion_trained_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7361210540311689, "name": "F1"}]}]}]}
aXhyra/emotion_trained_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# emotion_trained_final

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9349
- F1: 0.7469

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.502523631581398e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9013 | 1.0 | 815 | 0.7822 | 0.6470 |
| 0.5008 | 2.0 | 1630 | 0.7142 | 0.7419 |
| 0.3684 | 3.0 | 2445 | 0.8621 | 0.7443 |
| 0.2182 | 4.0 | 3260 | 0.9349 | 0.7469 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "emotion_trained_final", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7469065445487402, "name": "F1"}]}]}]}
aXhyra/emotion_trained_final
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hate_trained_1234567

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7912
- F1: 0.7751

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.7272339744854407e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4835 | 1.0 | 563 | 0.4881 | 0.7534 |
| 0.3236 | 2.0 | 1126 | 0.5294 | 0.7610 |
| 0.219 | 3.0 | 1689 | 0.6095 | 0.7717 |
| 0.1409 | 4.0 | 2252 | 0.7912 | 0.7751 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
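The hyperparameters listed above map one-to-one onto `TrainingArguments`, so a plausible reconstruction of the reported setup looks like the sketch below. This is an approximation, not the authors' script: the output directory, the tokenization details, and the macro averaging of F1 are assumptions.

```python
import numpy as np
from datasets import load_dataset, load_metric
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# tweet_eval "hate" is a binary classification subset.
dataset = load_dataset("tweet_eval", "hate")
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
dataset = dataset.map(lambda b: tokenizer(b["text"], truncation=True), batched=True)

model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)
f1 = load_metric("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # Averaging mode is an assumption; the card only reports a single F1 number.
    return f1.compute(predictions=preds, references=labels, average="macro")

args = TrainingArguments(
    output_dir="hate_trained_1234567",   # assumed path, not stated in the card
    learning_rate=2.7272339744854407e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=4,
    seed=1234567,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()
```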
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "hate_trained_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7750768993843997, "name": "F1"}]}]}]}
aXhyra/hate_trained_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hate_trained_31415

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8568
- F1: 0.7729

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.7272339744854407e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.482 | 1.0 | 563 | 0.4973 | 0.7672 |
| 0.3316 | 2.0 | 1126 | 0.4931 | 0.7794 |
| 0.2308 | 3.0 | 1689 | 0.7073 | 0.7593 |
| 0.1444 | 4.0 | 2252 | 0.8568 | 0.7729 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "hate_trained_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7729447444817463, "name": "F1"}]}]}]}
aXhyra/hate_trained_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hate_trained_42

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8994
- F1: 0.7712

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.7272339744854407e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4835 | 1.0 | 563 | 0.4855 | 0.7556 |
| 0.3277 | 2.0 | 1126 | 0.5354 | 0.7704 |
| 0.2112 | 3.0 | 1689 | 0.6870 | 0.7751 |
| 0.1384 | 4.0 | 2252 | 0.8994 | 0.7712 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "hate_trained_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7712319060633668, "name": "F1"}]}]}]}
aXhyra/hate_trained_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# hate_trained_final

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5543
- F1: 0.7698

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.460503761236833e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.463 | 1.0 | 1125 | 0.5213 | 0.7384 |
| 0.3943 | 2.0 | 2250 | 0.5134 | 0.7534 |
| 0.3407 | 3.0 | 3375 | 0.5400 | 0.7666 |
| 0.3121 | 4.0 | 4500 | 0.5543 | 0.7698 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "hate_trained_final", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7697890540753396, "name": "F1"}]}]}]}
aXhyra/hate_trained_final
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# irony_trained

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6471
- F1: 0.6851

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6589 | 1.0 | 716 | 0.6187 | 0.6646 |
| 0.5494 | 2.0 | 1432 | 0.9314 | 0.6793 |
| 0.3369 | 3.0 | 2148 | 1.3468 | 0.6833 |
| 0.2129 | 4.0 | 2864 | 1.6471 | 0.6851 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
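For cases where the pipeline helper is not convenient, the same checkpoint can be scored with the raw model and tokenizer. The softmax/argmax post-processing below is the usual recipe for single-label classification, and the example sentences are invented for illustration.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "aXhyra/irony_trained"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

texts = ["Great, another Monday morning meeting.", "The weather is lovely today."]
inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities and pick the most likely class per text.
probs = torch.softmax(logits, dim=-1)
for text, p in zip(texts, probs):
    pred = int(p.argmax())
    print(f"{text!r} -> class {pred} (p={p[pred].item():.2f})")
```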
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "irony_trained", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.6851011633121422, "name": "F1"}]}]}]}
aXhyra/irony_trained
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# irony_trained_1234567

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6580
- F1: 0.6766

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6608 | 1.0 | 716 | 0.6057 | 0.6704 |
| 0.5329 | 2.0 | 1432 | 0.8935 | 0.6621 |
| 0.3042 | 3.0 | 2148 | 1.3871 | 0.6822 |
| 0.1769 | 4.0 | 2864 | 1.6580 | 0.6766 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "irony_trained_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.6765645067647214, "name": "F1"}]}]}]}
aXhyra/irony_trained_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# irony_trained_31415

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6608
- F1: 0.6690

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6547 | 1.0 | 716 | 0.6173 | 0.6508 |
| 0.57 | 2.0 | 1432 | 0.8629 | 0.6577 |
| 0.2955 | 3.0 | 2148 | 1.4836 | 0.6722 |
| 0.1903 | 4.0 | 2864 | 1.6608 | 0.6690 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "irony_trained_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.6690050628690761, "name": "F1"}]}]}]}
aXhyra/irony_trained_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# irony_trained_42

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5669
- F1: 0.6786

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.6774391860025942e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6669 | 1.0 | 716 | 0.6291 | 0.6198 |
| 0.5655 | 2.0 | 1432 | 0.7332 | 0.6771 |
| 0.3764 | 3.0 | 2148 | 1.4193 | 0.6554 |
| 0.229 | 4.0 | 2864 | 1.5669 | 0.6786 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "irony_trained_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.6785912258473235, "name": "F1"}]}]}]}
aXhyra/irony_trained_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# irony_trained_final

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4770
- F1: 0.6879

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.842398023893579e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6852 | 1.0 | 716 | 0.6488 | 0.6530 |
| 0.6263 | 2.0 | 1432 | 0.7647 | 0.6511 |
| 0.4511 | 3.0 | 2148 | 1.2251 | 0.6764 |
| 0.2578 | 4.0 | 2864 | 1.4770 | 0.6879 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "irony_trained_final", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.6879413493337545, "name": "F1"}]}]}]}
aXhyra/irony_trained_final
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_emotion_1234567

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0237
- F1: 0.7273

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1189 | 1.0 | 408 | 0.6827 | 0.7164 |
| 1.0678 | 2.0 | 816 | 0.6916 | 0.7396 |
| 0.6582 | 3.0 | 1224 | 0.9281 | 0.7276 |
| 0.0024 | 4.0 | 1632 | 1.0237 | 0.7273 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_emotion_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7272977042723248, "name": "F1"}]}]}]}
aXhyra/presentation_emotion_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_emotion_31415

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1243
- F1: 0.7149

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.73 | 1.0 | 408 | 0.8206 | 0.6491 |
| 0.3868 | 2.0 | 816 | 0.7733 | 0.7230 |
| 0.0639 | 3.0 | 1224 | 0.9962 | 0.7101 |
| 0.0507 | 4.0 | 1632 | 1.1243 | 0.7149 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_emotion_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.7148501877297316, "name": "F1"}]}]}]}
aXhyra/presentation_emotion_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_emotion_42

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0989
- F1: 0.7329

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3703 | 1.0 | 408 | 0.6624 | 0.7029 |
| 0.2122 | 2.0 | 816 | 0.6684 | 0.7258 |
| 0.9452 | 3.0 | 1224 | 1.0001 | 0.7041 |
| 0.0023 | 4.0 | 1632 | 1.0989 | 0.7329 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
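If a checkpoint only exposes generic `LABEL_k` names, the human-readable emotion classes can be recovered from the dataset's own label feature. A small sketch using the standard `datasets` API follows; the split chosen here is arbitrary, since the class names are the same for every split.

```python
from datasets import load_dataset

# The "emotion" configuration of tweet_eval defines the class names used during fine-tuning.
emotion = load_dataset("tweet_eval", "emotion", split="validation")
label_names = emotion.features["label"].names   # ['anger', 'joy', 'optimism', 'sadness']

# Map integer predictions (or "LABEL_k" ids) back to readable names.
print(dict(enumerate(label_names)))
```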
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_emotion_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emotion"}, "metrics": [{"type": "f1", "value": 0.732897530282475, "name": "F1"}]}]}]}
aXhyra/presentation_emotion_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_hate_1234567

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8438
- F1: 0.7680

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.436235805743952e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6027 | 1.0 | 282 | 0.5186 | 0.7209 |
| 0.3537 | 2.0 | 564 | 0.4989 | 0.7619 |
| 0.0969 | 3.0 | 846 | 0.6405 | 0.7697 |
| 0.0514 | 4.0 | 1128 | 0.8438 | 0.7680 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_hate_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7679568806891273, "name": "F1"}]}]}]}
aXhyra/presentation_hate_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_hate_31415

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8632
- F1: 0.7730

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.436235805743952e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.363 | 1.0 | 282 | 0.4997 | 0.7401 |
| 0.2145 | 2.0 | 564 | 0.5071 | 0.7773 |
| 0.1327 | 3.0 | 846 | 0.7109 | 0.7645 |
| 0.0157 | 4.0 | 1128 | 0.8632 | 0.7730 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_hate_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7729508817074093, "name": "F1"}]}]}]}
aXhyra/presentation_hate_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_hate_42

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8711
- F1: 0.7692

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.436235805743952e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5207 | 1.0 | 282 | 0.4815 | 0.7513 |
| 0.3047 | 2.0 | 564 | 0.5557 | 0.7510 |
| 0.2335 | 3.0 | 846 | 0.6627 | 0.7585 |
| 0.0056 | 4.0 | 1128 | 0.8711 | 0.7692 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
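Since the three presentation_hate_* checkpoints differ only in the random seed, one natural use of the reported numbers is a quick mean/spread check of run-to-run variance. The values below are copied from the three cards above; nothing else is computed from the data itself.

```python
import statistics

# Evaluation-set F1 per seed, as reported in the presentation_hate_* cards.
f1_by_seed = {1234567: 0.7680, 31415: 0.7730, 42: 0.7692}

print("mean F1 :", round(statistics.mean(f1_by_seed.values()), 4))
print("stdev F1:", round(statistics.stdev(f1_by_seed.values()), 4))
```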
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_hate_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "hate"}, "metrics": [{"type": "f1", "value": 0.7692074096568478, "name": "F1"}]}]}]}
aXhyra/presentation_hate_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_irony_1234567

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9493
- F1: 0.6746

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.1637764704815665e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5514 | 1.0 | 90 | 0.5917 | 0.6767 |
| 0.6107 | 2.0 | 180 | 0.6123 | 0.6730 |
| 0.1327 | 3.0 | 270 | 0.7463 | 0.6970 |
| 0.1068 | 4.0 | 360 | 0.9493 | 0.6746 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_irony_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.674604535422547, "name": "F1"}]}]}]}
aXhyra/presentation_irony_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_irony_31415

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9694
- F1: 0.6754

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.1637764704815665e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 31415
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6601 | 1.0 | 90 | 0.6298 | 0.6230 |
| 0.4887 | 2.0 | 180 | 0.6039 | 0.6816 |
| 0.2543 | 3.0 | 270 | 0.7362 | 0.6803 |
| 0.1472 | 4.0 | 360 | 0.9694 | 0.6754 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_irony_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.6753923142373446, "name": "F1"}]}]}]}
aXhyra/presentation_irony_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_irony_42

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9344
- F1: 0.6745

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.1637764704815665e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6675 | 1.0 | 90 | 0.5988 | 0.6684 |
| 0.5872 | 2.0 | 180 | 0.6039 | 0.6742 |
| 0.3953 | 3.0 | 270 | 0.8549 | 0.6557 |
| 0.0355 | 4.0 | 360 | 0.9344 | 0.6745 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_irony_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "irony"}, "metrics": [{"type": "f1", "value": 0.6745358521762839, "name": "F1"}]}]}]}
aXhyra/presentation_irony_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_sentiment_1234567

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0860
- F1: 0.7183

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.2792011721188e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3747 | 1.0 | 11404 | 0.6515 | 0.7045 |
| 0.6511 | 2.0 | 22808 | 0.7334 | 0.7188 |
| 0.0362 | 3.0 | 34212 | 0.9498 | 0.7195 |
| 1.0576 | 4.0 | 45616 | 1.0860 | 0.7183 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_sentiment_1234567", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.71829420028644, "name": "F1"}]}]}]}
aXhyra/presentation_sentiment_1234567
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_sentiment_31415

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0860
- F1: 0.7183

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.2792011721188e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 0
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.3747 | 1.0 | 11404 | 0.6515 | 0.7045 |
| 0.6511 | 2.0 | 22808 | 0.7334 | 0.7188 |
| 0.0362 | 3.0 | 34212 | 0.9498 | 0.7195 |
| 1.0576 | 4.0 | 45616 | 1.0860 | 0.7183 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_sentiment_31415", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.71829420028644, "name": "F1"}]}]}]}
aXhyra/presentation_sentiment_31415
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# presentation_sentiment_42

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6491
- F1: 0.7176

## Model description
More information needed

## Intended uses & limitations
More information needed

## Training and evaluation data
More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.923967812567773e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4

### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.4391 | 1.0 | 2851 | 0.6591 | 0.6953 |
| 0.6288 | 2.0 | 5702 | 0.6265 | 0.7158 |
| 0.4071 | 3.0 | 8553 | 0.6401 | 0.7179 |
| 0.6532 | 4.0 | 11404 | 0.6491 | 0.7176 |

### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
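The reported sentiment F1 could in principle be re-checked against the tweet_eval test split. The sketch below assumes macro averaging and `LABEL_k`-style pipeline outputs, neither of which is stated in the card, so treat it as a starting point rather than the original evaluation code; if the checkpoint exposes named labels instead, map them explicitly.

```python
from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import pipeline

# Three-class sentiment test split (negative / neutral / positive).
dataset = load_dataset("tweet_eval", "sentiment", split="test")
classifier = pipeline("text-classification", model="aXhyra/presentation_sentiment_42")

# Score each tweet; if the checkpoint returns "LABEL_k" names, recover the integer id.
outputs = classifier(list(dataset["text"]))
preds = [int(o["label"].split("_")[-1]) for o in outputs]

# Averaging mode is an assumption; the card only reports a single F1 number.
print("macro F1:", f1_score(dataset["label"], preds, average="macro"))
```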
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["tweet_eval"], "metrics": ["f1"], "model-index": [{"name": "presentation_sentiment_42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "sentiment"}, "metrics": [{"type": "f1", "value": 0.7175864613336908, "name": "F1"}]}]}]}
aXhyra/presentation_sentiment_42
null
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00
text-classification
transformers
{}
aXhyra/sentiment_temp
null
[ "transformers", "pytorch", "distilbert", "text-classification", "autotrain_compatible", "endpoints_compatible", "region:us" ]
null
2022-03-02T23:29:05+00:00