| Column | Type | Values |
|:-------------|:-----------------------|:--------------------|
| modelId | string | lengths 4 to 81 |
| tags | sequence | |
| pipeline_tag | string (class label) | 17 distinct values |
| config | dict | |
| downloads | int64 | 0 to 59.7M |
| first_commit | timestamp[ns, tz=UTC] | |
| card | string | lengths 51 to 438k |
camembert-base
[ "pytorch", "tf", "safetensors", "camembert", "fill-mask", "fr", "dataset:oscar", "arxiv:1911.03894", "transformers", "license:mit", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "CamembertForMaskedLM" ], "model_type": "camembert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,440,898
2022-05-27T14:12:21Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 37.48 +/- 98.28 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
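The usage section in the card above is left as a TODO with elided imports. A minimal sketch of the usual stable-baselines3 plus huggingface_sb3 loading pattern, assuming a hypothetical repo_id and filename (neither is given in the card):

```python
# Sketch only: repo_id and filename below are placeholders, not values from this card.
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub and load it into a PPO agent.
checkpoint = load_from_hub(repo_id="username/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the agent on the environment named in the card.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```

The same pattern applies to the other stable-baselines3 PPO cards in this dump.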
distilbert-base-cased-distilled-squad
[ "pytorch", "tf", "rust", "safetensors", "openvino", "distilbert", "question-answering", "en", "dataset:squad", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "model-index", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
257,745
2022-05-27T14:23:07Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: juancopi81/distilbert-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # juancopi81/distilbert-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.8630 - Validation Loss: 2.5977 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -688, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.8630 | 2.5977 | 0 | ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.0 - Datasets 2.2.2 - Tokenizers 0.12.1
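This row's modelId and pipeline_tag describe an extractive question-answering checkpoint (distilbert-base-cased-distilled-squad). A minimal inference sketch with the transformers pipeline; the question and context strings are placeholders:

```python
# Sketch: extractive QA with the checkpoint named in this row's modelId column.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
result = qa(
    question="What task was the model distilled for?",  # placeholder question
    context="DistilBERT was distilled from BERT and fine-tuned on SQuAD for question answering.",
)
print(result["answer"], result["score"])
```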
distilbert-base-cased
[ "pytorch", "tf", "onnx", "distilbert", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "transformers", "license:apache-2.0", "has_space" ]
null
{ "architectures": null, "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
574,859
2022-05-27T14:29:11Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 274.67 +/- 15.11 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
distilbert-base-multilingual-cased
[ "pytorch", "tf", "onnx", "safetensors", "distilbert", "fill-mask", "multilingual", "af", "sq", "ar", "an", "hy", "ast", "az", "ba", "eu", "bar", "be", "bn", "inc", "bs", "br", "bg", "my", "ca", "ceb", "ce", "zh", "cv", "hr", "cs", "da", "nl", "en", "et", "fi", "fr", "gl", "ka", "de", "el", "gu", "ht", "he", "hi", "hu", "is", "io", "id", "ga", "it", "ja", "jv", "kn", "kk", "ky", "ko", "la", "lv", "lt", "roa", "nds", "lm", "mk", "mg", "ms", "ml", "mr", "mn", "min", "ne", "new", "nb", "nn", "oc", "fa", "pms", "pl", "pt", "pa", "ro", "ru", "sco", "sr", "scn", "sk", "sl", "aze", "es", "su", "sw", "sv", "tl", "tg", "th", "ta", "tt", "te", "tr", "uk", "ud", "uz", "vi", "vo", "war", "cy", "fry", "pnb", "yo", "dataset:wikipedia", "arxiv:1910.01108", "arxiv:1910.09700", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8,339,633
2022-05-27T14:35:02Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - superb model-index: - name: wav2vec2-base-finetuned-ks results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.14.0 - Tokenizers 0.10.3
t5-small
[ "pytorch", "tf", "jax", "rust", "safetensors", "t5", "text2text-generation", "en", "fr", "ro", "de", "multilingual", "dataset:c4", "arxiv:1805.12471", "arxiv:1708.00055", "arxiv:1704.05426", "arxiv:1606.05250", "arxiv:1808.09121", "arxiv:1810.12885", "arxiv:1905.10044", "arxiv:1910.09700", "transformers", "summarization", "translation", "license:apache-2.0", "autotrain_compatible", "has_space" ]
translation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
1,886,928
2022-05-27T17:08:32Z
Models for: https://github.com/k2-fsa/icefall/pull/387
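The config column for this row (t5-small) records the beam-search settings and text prefixes used for each translation task. A short sketch showing how one of those prefixes is applied, using only values taken from that config:

```python
# Sketch: English-to-German translation with t5-small, using the prefix, num_beams,
# max_length, and early_stopping values listed in this row's task_specific_params.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(**inputs, max_length=300, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```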
xlm-mlm-en-2048
[ "pytorch", "tf", "xlm", "fill-mask", "en", "arxiv:1901.07291", "arxiv:1911.02116", "arxiv:1910.09700", "transformers", "exbert", "license:cc-by-nc-4.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "XLMWithLMHeadModel" ], "model_type": "xlm", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7,043
2022-05-27T18:14:29Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - image_folder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default metrics: - name: Accuracy type: accuracy value: 0.9881481481481481 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.0508 - Accuracy: 0.9881 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2241 | 1.0 | 1518 | 0.0886 | 0.9719 | | 0.082 | 2.0 | 3036 | 0.0705 | 0.9815 | | 0.101 | 3.0 | 4554 | 0.0508 | 0.9881 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
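The card above reports accuracy for an image-classification fine-tune but gives no inference snippet. A hedged sketch using the transformers pipeline; the repository id and image path are placeholders, since the card does not state the full namespace:

```python
# Sketch only: the repo id and image path are placeholders, not taken from the card.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="username/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
predictions = classifier("satellite_image.png")  # local file path or URL
print(predictions[:3])  # top labels with scores
```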
xlm-roberta-large-finetuned-conll03-german
[ "pytorch", "rust", "xlm-roberta", "token-classification", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "lo", "lt", "lv", "mg", "mk", "ml", "mn", "mr", "ms", "my", "ne", "nl", "no", "om", "or", "pa", "pl", "ps", "pt", "ro", "ru", "sa", "sd", "si", "sk", "sl", "so", "sq", "sr", "su", "sv", "sw", "ta", "te", "th", "tl", "tr", "ug", "uk", "ur", "uz", "vi", "xh", "yi", "zh", "arxiv:1911.02116", "arxiv:1910.09700", "transformers", "autotrain_compatible", "has_space" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3,929
2022-05-27T19:38:54Z
--- license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-german-cased-finetuned-subj_preTrained_with_noisyData_v1.1 This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0179 - Precision: 0.9249 - Recall: 0.8776 - F1: 0.9006 - Accuracy: 0.9942 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 245 | 0.0244 | 0.9252 | 0.8120 | 0.8649 | 0.9924 | | No log | 2.0 | 490 | 0.0179 | 0.9249 | 0.8776 | 0.9006 | 0.9942 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
Pinwheel/wav2vec2-large-xls-r-1b-hindi
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
Indonli + Common Voice 8.0 dataset (Train + Validation + Test splits). WER: 0.216; WER with LM: 0.104.
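This row's config lists a Wav2Vec2ForCTC architecture with the automatic-speech-recognition pipeline tag. A hedged transcription sketch against that interface; the audio path is a placeholder, and 16 kHz mono input is assumed, as is standard for wav2vec2 checkpoints:

```python
# Sketch: transcribe an audio file with the ASR checkpoint named in this row's modelId.
# "sample.wav" is a placeholder path to a 16 kHz mono recording.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Pinwheel/wav2vec2-large-xls-r-1b-hindi")
result = asr("sample.wav")
print(result["text"])
```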
Abozoroov/Me
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-05-29T08:32:40Z
--- tags: - conversational --- # Alastor The Radio Demon DialoGPT Model
AdapterHub/bert-base-uncased-pf-hotpotqa
[ "bert", "en", "dataset:hotpot_qa", "arxiv:2104.08247", "adapter-transformers", "question-answering" ]
question-answering
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-05-29T12:08:56Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-hindi-epochs35-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-hindi-epochs35-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 35 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
AdapterHub/bert-base-uncased-pf-qqp
[ "bert", "en", "arxiv:2104.08247", "adapter-transformers", "text-classification", "adapterhub:sts/qqp" ]
text-classification
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 80.85 +/- 107.50 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AdapterHub/roberta-base-pf-quoref
[ "roberta", "en", "dataset:quoref", "arxiv:2104.08247", "adapter-transformers", "question-answering" ]
question-answering
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-05-29T19:38:07Z
--- tags: - unconditional-image-generation library_name: keras --- ## Model description This repo contains the model for the notebook [Neural style transfer](https://keras.io/examples/generative/neural_style_transfer/). Full credits go to [fchollet](https://twitter.com/fchollet) Reproduced by [Rushi Chaudhari](https://github.com/rushic24) Style transfer consists in generating an image with the same "content" as a base image, but with the "style" of a different picture (typically artistic) by optimizing style loss, content loss, and total variation loss ## Dataset This is a pre-trained model of VGG19 trained on imagenet <details> <summary> View Model Plot </summary> ![Model Image](./model.png) </details>
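The card above links to a Keras notebook but includes no loading code. A hedged sketch of pulling a saved Keras model from the Hub with huggingface_hub; the repository id is a placeholder, not a value from the card:

```python
# Sketch only: "username/neural-style-transfer" is a placeholder repository id.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("username/neural-style-transfer")
model.summary()  # inspect the VGG19-based model the card describes
```

If the saved model is the VGG19 feature extractor the card mentions, producing a stylized image still requires the optimization loop over style, content, and total-variation losses described in the linked notebook.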
AdapterHub/roberta-base-pf-trec
[ "roberta", "en", "dataset:trec", "arxiv:2104.08247", "adapter-transformers", "text-classification" ]
text-classification
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer model-index: - name: ptt5-base-portuguese-vocab-summarizacao-PTT-BR results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ptt5-base-portuguese-vocab-summarizacao-PTT-BR This model is a fine-tuned version of [unicamp-dl/ptt5-base-portuguese-vocab](https://huggingface.co/unicamp-dl/ptt5-base-portuguese-vocab) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 3.6954 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 15 | 4.6282 | | No log | 2.0 | 30 | 3.9111 | | No log | 3.0 | 45 | 3.6954 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
AdapterHub/roberta-base-pf-ud_deprel
[ "roberta", "en", "dataset:universal_dependencies", "arxiv:2104.08247", "adapter-transformers", "token-classification", "adapterhub:deprel/ud_ewt" ]
token-classification
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-05-29T23:34:45Z
--- library_name: keras --- ## Model description This repo contains the model for the notebook [Image similarity estimation using a Siamese Network with a contrastive loss](https://keras.io/examples/vision/siamese_contrastive/). Full credits go to Mehdi Reproduced by [Rushi Chaudhari](https://github.com/rushic24) [Siamese Networks](https://en.wikipedia.org/wiki/Siamese_neural_network) are neural networks which share weights between two or more sister networks, each producing embedding vectors of its respective inputs. In supervised similarity learning, the networks are then trained to maximize the contrast (distance) between embeddings of inputs of different classes, while minimizing the distance between embeddings of similar classes, resulting in embedding spaces that reflect the class segmentation of the training inputs. ## Dataset [MNIST dataset](https://www.tensorflow.org/datasets/catalog/mnist) of handwritten digits ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: ``` epochs = 10 batch_size = 16 margin = 1 ``` ### Training results ![Contrastive loss](./contrastiveloss.png) ![Accuracy](./accuracy.png) <details> <summary> View Model Plot </summary> ![Model Image](./model.png) </details>
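The card above describes the contrastive objective in words and gives margin = 1. A hedged sketch of one common contrastive-loss formulation (Hadsell-style), written from that description rather than copied from the linked notebook; here y_true = 1 marks similar pairs and y_pred is the embedding distance:

```python
# One common contrastive-loss formulation, assuming y_true = 1 for similar pairs
# and y_pred = Euclidean distance between the two sister-network embeddings.
import tensorflow as tf

def contrastive_loss(margin: float = 1.0):
    def loss(y_true, y_pred):
        y_true = tf.cast(y_true, y_pred.dtype)
        # Pull similar pairs together; push dissimilar pairs apart up to the margin.
        similar_term = y_true * tf.square(y_pred)
        dissimilar_term = (1.0 - y_true) * tf.square(tf.maximum(margin - y_pred, 0.0))
        return tf.reduce_mean(similar_term + dissimilar_term)
    return loss
```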
AethiQs-Max/aethiqs-base_bertje-data_rotterdam-epochs_30-epoch_30
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: autotrain language: unk widget: - text: "I love AutoTrain 🤗" datasets: - CH0KUN/autotrain-data-TNC_Data2500_WangchanBERTa co2_eq_emissions: 0.07293362913158113 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 928030564 - CO2 Emissions (in grams): 0.07293362913158113 ## Validation Metrics - Loss: 0.4989683926105499 - Accuracy: 0.8445845697329377 - Macro F1: 0.8407629450432429 - Micro F1: 0.8445845697329377 - Weighted F1: 0.8407629450432429 - Macro Precision: 0.8390327354531153 - Micro Precision: 0.8445845697329377 - Weighted Precision: 0.8390327354531154 - Macro Recall: 0.8445845697329377 - Micro Recall: 0.8445845697329377 - Weighted Recall: 0.8445845697329377 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/CH0KUN/autotrain-TNC_Data2500_WangchanBERTa-928030564 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("CH0KUN/autotrain-TNC_Data2500_WangchanBERTa-928030564", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("CH0KUN/autotrain-TNC_Data2500_WangchanBERTa-928030564", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Akame/Vi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-05-30T10:59:05Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5353 - Wer: 0.3360 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.5345 | 1.0 | 500 | 1.8229 | 0.9810 | | 0.8731 | 2.01 | 1000 | 0.5186 | 0.5165 | | 0.4455 | 3.01 | 1500 | 0.4386 | 0.4572 | | 0.3054 | 4.02 | 2000 | 0.4396 | 0.4286 | | 0.2354 | 5.02 | 2500 | 0.4454 | 0.4051 | | 0.1897 | 6.02 | 3000 | 0.4465 | 0.3925 | | 0.1605 | 7.03 | 3500 | 0.4776 | 0.3974 | | 0.1413 | 8.03 | 4000 | 0.5254 | 0.4062 | | 0.1211 | 9.04 | 4500 | 0.5123 | 0.3913 | | 0.1095 | 10.04 | 5000 | 0.4171 | 0.3711 | | 0.1039 | 11.04 | 5500 | 0.4258 | 0.3732 | | 0.0932 | 12.05 | 6000 | 0.4879 | 0.3701 | | 0.0867 | 13.05 | 6500 | 0.4725 | 0.3637 | | 0.0764 | 14.06 | 7000 | 0.5041 | 0.3636 | | 0.0661 | 15.06 | 7500 | 0.4692 | 0.3646 | | 0.0647 | 16.06 | 8000 | 0.4804 | 0.3612 | | 0.0576 | 17.07 | 8500 | 0.5545 | 0.3628 | | 0.0577 | 18.07 | 9000 | 0.5004 | 0.3557 | | 0.0481 | 19.08 | 9500 | 0.5341 | 0.3558 | | 0.0466 | 20.08 | 10000 | 0.5056 | 0.3514 | | 0.0433 | 21.08 | 10500 | 0.4864 | 0.3481 | | 0.0362 | 22.09 | 11000 | 0.4994 | 0.3473 | | 0.0325 | 23.09 | 11500 | 0.5327 | 0.3446 | | 0.0351 | 24.1 | 12000 | 0.5360 | 0.3445 | | 0.0284 | 25.1 | 12500 | 0.5085 | 0.3399 | | 0.027 | 26.1 | 13000 | 0.5344 | 0.3426 | | 0.0247 | 27.11 | 13500 | 0.5310 | 0.3357 | | 0.0251 | 28.11 | 14000 | 0.5201 | 0.3355 | | 0.0228 | 29.12 | 14500 | 0.5353 | 0.3360 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.11.0+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
Akash7897/distilbert-base-uncased-finetuned-sst2
[ "pytorch", "tensorboard", "distilbert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0", "model-index" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-non-slippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Sadhaklal/q-FrozenLake-v1-4x4-non-slippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
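The usage snippet in the card above calls load_from_hub, gym.make, and evaluate_agent without importing or defining them. A hedged sketch of what those helpers might look like, assuming the pickled checkpoint stores the dict keys the snippet indexes (env_id, max_steps, n_eval_episodes, qtable, eval_seed) and the classic gym step API:

```python
# Sketch of the undefined helpers assumed by the card's snippet; not taken from the card.
import pickle
import gym
import numpy as np
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning checkpoint from the Hub and deserialize it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

def evaluate_agent(env, max_steps, n_eval_episodes, qtable, eval_seed):
    """Greedy roll-outs with the Q-table; returns mean and std of episode returns."""
    episode_rewards = []
    for episode in range(n_eval_episodes):
        if eval_seed:
            env.seed(int(eval_seed[episode]))  # classic gym seeding API assumed
        state = env.reset()
        total_reward = 0.0
        for _ in range(max_steps):
            action = int(np.argmax(qtable[state]))        # greedy action from the Q-table
            state, reward, done, info = env.step(action)  # classic gym 4-tuple step API assumed
            total_reward += reward
            if done:
                break
        episode_rewards.append(total_reward)
    return np.mean(episode_rewards), np.std(episode_rewards)
```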
Akash7897/gpt2-wikitext2
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1522479920714240001/wi1LPddl_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">sydney</div> <div style="text-align: center; font-size: 14px;">@ultrafungi</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from sydney. | Data | sydney | | --- | --- | | Tweets downloaded | 125 | | Retweets | 35 | | Short tweets | 9 | | Tweets kept | 81 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/wk3rd28k/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @ultrafungi's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3cil1w2p) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3cil1w2p/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/ultrafungi') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Akash7897/my-newtokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en-US license: apache-2.0 tags: - minds14 - google/xtreme_s - generated_from_trainer datasets: - xtreme_s metrics: - f1 - accuracy model-index: - name: xtreme_s_xlsr_300m_mt5-small_minds14.en-US results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xtreme_s_xlsr_300m_mt5-small_minds14.en-US This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.EN-US dataset. It achieves the following results on the evaluation set: - Loss: 4.7321 - F1: 0.0154 - Accuracy: 0.0638 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 8 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 50.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:--------:| | 2.6067 | 3.95 | 20 | 2.6501 | 0.0112 | 0.0851 | | 2.5614 | 7.95 | 40 | 2.8018 | 0.0133 | 0.0603 | | 2.2836 | 11.95 | 60 | 3.0786 | 0.0084 | 0.0603 | | 1.9597 | 15.95 | 80 | 3.2288 | 0.0126 | 0.0638 | | 1.5566 | 19.95 | 100 | 3.6934 | 0.0178 | 0.0567 | | 1.3168 | 23.95 | 120 | 3.9135 | 0.0150 | 0.0638 | | 1.0598 | 27.95 | 140 | 4.2618 | 0.0084 | 0.0603 | | 0.5721 | 31.95 | 160 | 3.7973 | 0.0354 | 0.0780 | | 0.4402 | 35.95 | 180 | 4.6233 | 0.0179 | 0.0638 | | 0.6113 | 39.95 | 200 | 4.6149 | 0.0208 | 0.0674 | | 0.3938 | 43.95 | 220 | 4.7886 | 0.0159 | 0.0638 | | 0.2473 | 47.95 | 240 | 4.7321 | 0.0154 | 0.0638 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0+cu113 - Datasets 2.1.0 - Tokenizers 0.12.1
Akashpb13/Galician_xlsr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "gl", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - seals/Hopper-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 2228.87 +/- 43.40 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: seals/Hopper-v0 type: seals/Hopper-v0 --- # **PPO** Agent playing **seals/Hopper-v0** This is a trained model of a **PPO** agent playing **seals/Hopper-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ppo --env seals/Hopper-v0 -orga ernestumorga -f logs/ python enjoy.py --algo ppo --env seals/Hopper-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo --env seals/Hopper-v0 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ppo --env seals/Hopper-v0 -f logs/ -orga ernestumorga ``` ## Hyperparameters ```python OrderedDict([('batch_size', 512), ('clip_range', 0.1), ('ent_coef', 0.0010159833764878474), ('gae_lambda', 0.98), ('gamma', 0.995), ('learning_rate', 0.0003904770450788824), ('max_grad_norm', 0.9), ('n_envs', 1), ('n_epochs', 20), ('n_steps', 2048), ('n_timesteps', 1000000.0), ('normalize', True), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(activation_fn=nn.ReLU, net_arch=[dict(pi=[64, 64], vf=[64, ' '64])])'), ('vf_coef', 0.20315938606555833), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ```
Akashpb13/Hausa_xlsr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "ha", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index", "has_space" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- library_name: stable-baselines3 tags: - seals/Humanoid-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: -43.69 +/- 155.83 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: seals/Humanoid-v0 type: seals/Humanoid-v0 --- # **PPO** Agent playing **seals/Humanoid-v0** This is a trained model of a **PPO** agent playing **seals/Humanoid-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m utils.load_from_hub --algo ppo --env seals/Humanoid-v0 -orga ernestumorga -f logs/ python enjoy.py --algo ppo --env seals/Humanoid-v0 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo --env seals/Humanoid-v0 -f logs/ # Upload the model and generate video (when possible) python -m utils.push_to_hub --algo ppo --env seals/Humanoid-v0 -f logs/ -orga ernestumorga ``` ## Hyperparameters ```python OrderedDict([('batch_size', 256), ('clip_range', 0.2), ('ent_coef', 2.0745206045994986e-05), ('gae_lambda', 0.92), ('gamma', 0.999), ('learning_rate', 2.0309225666232827e-05), ('max_grad_norm', 0.5), ('n_envs', 1), ('n_epochs', 20), ('n_steps', 2048), ('n_timesteps', 10000000.0), ('normalize', True), ('policy', 'MlpPolicy'), ('policy_kwargs', 'dict(activation_fn=nn.ReLU, net_arch=[dict(pi=[256, 256], ' 'vf=[256, 256])])'), ('vf_coef', 0.819262464558427), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ```
Akashpb13/Kabyle_xlsr
[ "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "kab", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "sw", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - vivos_dataset model-index: - name: wav2vec2-base-vios results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-vios This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the vivos_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.3729 - Wer: 0.2427 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.4755 | 1.37 | 500 | 0.7991 | 0.5957 | | 0.5424 | 2.75 | 1000 | 0.4290 | 0.3653 | | 0.3586 | 4.12 | 1500 | 0.3809 | 0.2890 | | 0.2824 | 5.49 | 2000 | 0.3808 | 0.2749 | | 0.2249 | 6.87 | 2500 | 0.3467 | 0.2389 | | 0.1745 | 8.24 | 3000 | 0.3688 | 0.2384 | | 0.1459 | 9.61 | 3500 | 0.3729 | 0.2427 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
Akashpb13/xlsr_kurmanji_kurdish
[ "pytorch", "safetensors", "wav2vec2", "automatic-speech-recognition", "kmr", "ku", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "robust-speech-event", "model_for_talk", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 120.82 +/- 109.98 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AkshatSurolia/DeiT-FaceMask-Finetuned
[ "pytorch", "deit", "image-classification", "dataset:Face-Mask18K", "transformers", "license:apache-2.0", "autotrain_compatible" ]
image-classification
{ "architectures": [ "DeiTForImageClassification" ], "model_type": "deit", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46
null
--- license: other tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # MobileViT (extra extra small-sized model) MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE). Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/mobilevit-xx-small") model = MobileViTForImageClassification.from_pretrained("apple/mobilevit-xx-small") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch. ## Training data The MobileViT model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes. ## Training procedure ### Preprocessing Training requires only basic data augmentation, i.e. random resized cropping and horizontal flipping. To learn multi-scale representations without requiring fine-tuning, a multi-scale sampler was used during training, with image sizes randomly sampled from: (160, 160), (192, 192), (256, 256), (288, 288), (320, 320). At inference time, images are resized/rescaled to the same resolution (288x288), and center-cropped at 256x256. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB. 
### Pretraining The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling. ## Evaluation results | Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |-------------------|-------------------------|-------------------------|-----------|-------------------------------------------------| | **MobileViT-XXS** | **69.0** | **88.9** | **1.3 M** | https://huggingface.co/apple/mobilevit-xx-small | | MobileViT-XS | 74.8 | 92.3 | 2.3 M | https://huggingface.co/apple/mobilevit-x-small | | MobileViT-S | 78.4 | 94.1 | 5.6 M | https://huggingface.co/apple/mobilevit-small | ### BibTeX entry and citation info ```bibtex @inproceedings{vision-transformer, title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer}, author = {Sachin Mehta and Mohammad Rastegari}, year = {2022}, URL = {https://arxiv.org/abs/2110.02178} } ```
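Since the table above reports both top-1 and top-5 accuracy, here is a minimal sketch (reusing the `logits` and `model` objects from the usage example earlier in this card) of reading off the five most likely classes instead of only the argmax:

```python
import torch

# Reusing `logits` and `model` from the classification example above.
probs = logits.softmax(dim=-1)
top5 = torch.topk(probs, k=5, dim=-1)
for score, idx in zip(top5.values[0].tolist(), top5.indices[0].tolist()):
    print(f"{model.config.id2label[idx]}: {score:.3f}")
```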
AkshatSurolia/ICD-10-Code-Prediction
[ "pytorch", "bert", "transformers", "text-classification", "license:apache-2.0", "has_space" ]
text-classification
{ "architectures": null, "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
994
null
--- license: other tags: - vision - image-segmentation datasets: - pascal-voc widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg example_title: Cat --- # MobileViT + DeepLabV3 (small-sized model) MobileViT model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE). Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings. The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/deeplabv3-mobilevit-small") model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-small") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits predicted_mask = logits.argmax(1).squeeze(0) ``` Currently, both the feature extractor and model support PyTorch. ## Training data The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset. ## Training procedure ### Preprocessing At inference time, images are center-cropped at 512x512. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB. ### Pretraining The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling. To obtain the DeepLabV3 model, MobileViT was fine-tuned on the PASCAL VOC dataset using 4 NVIDIA A100 GPUs. 
## Evaluation results | Model | PASCAL VOC mIOU | # params | URL | |------------------|-----------------|-----------|-----------------------------------------------------------| | MobileViT-XXS | 73.6 | 1.9 M | https://huggingface.co/apple/deeplabv3-mobilevit-xx-small | | MobileViT-XS | 77.1 | 2.9 M | https://huggingface.co/apple/deeplabv3-mobilevit-x-small | | **MobileViT-S** | **79.1** | **6.4 M** | https://huggingface.co/apple/deeplabv3-mobilevit-small | ### BibTeX entry and citation info ```bibtex @inproceedings{vision-transformer, title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer}, author = {Sachin Mehta and Mohammad Rastegari}, year = {2022}, URL = {https://arxiv.org/abs/2110.02178} } ```
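The `logits` produced by the segmentation head in the usage example above are typically at a lower spatial resolution than the input image; a minimal sketch (assuming the `logits` and `image` variables from that example) of upsampling them back to the original size before taking the argmax:

```python
import torch.nn.functional as F

# `logits` has shape (batch, num_labels, h, w) with h/w downsampled relative to the input.
# PIL's image.size is (width, height), so reverse it for interpolate's (height, width).
upsampled = F.interpolate(logits, size=image.size[::-1], mode="bilinear", align_corners=False)
predicted_mask = upsampled.argmax(1).squeeze(0)  # (H, W) tensor of PASCAL VOC class indices
```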
AkshatSurolia/ViT-FaceMask-Finetuned
[ "pytorch", "safetensors", "vit", "image-classification", "dataset:Face-Mask18K", "transformers", "license:apache-2.0", "autotrain_compatible" ]
image-classification
{ "architectures": [ "ViTForImageClassification" ], "model_type": "vit", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
40
null
--- license: other tags: - vision - image-segmentation datasets: - pascal-voc widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-2.jpg example_title: Cat --- # MobileViT + DeepLabV3 (extra small-sized model) MobileViT model pre-trained on PASCAL VOC at resolution 512x512. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE). Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings. The model in this repo adds a [DeepLabV3](https://arxiv.org/abs/1706.05587) head to the MobileViT backbone for semantic segmentation. ## Intended uses & limitations You can use the raw model for semantic segmentation. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you. ### How to use Here is how to use this model: ```python from transformers import MobileViTFeatureExtractor, MobileViTForSemanticSegmentation from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileViTFeatureExtractor.from_pretrained("apple/deeplabv3-mobilevit-x-small") model = MobileViTForSemanticSegmentation.from_pretrained("apple/deeplabv3-mobilevit-x-small") inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits predicted_mask = logits.argmax(1).squeeze(0) ``` Currently, both the feature extractor and model support PyTorch. ## Training data The MobileViT + DeepLabV3 model was pretrained on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k), a dataset consisting of 1 million images and 1,000 classes, and then fine-tuned on the [PASCAL VOC2012](http://host.robots.ox.ac.uk/pascal/VOC/) dataset. ## Training procedure ### Preprocessing At inference time, images are center-cropped at 512x512. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB. ### Pretraining The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling. To obtain the DeepLabV3 model, MobileViT was fine-tuned on the PASCAL VOC dataset using 4 NVIDIA A100 GPUs. 
## Evaluation results | Model | PASCAL VOC mIOU | # params | URL | |------------------|-----------------|-----------|-----------------------------------------------------------| | MobileViT-XXS | 73.6 | 1.9 M | https://huggingface.co/apple/deeplabv3-mobilevit-xx-small | | **MobileViT-XS** | **77.1** | **2.9 M** | https://huggingface.co/apple/deeplabv3-mobilevit-x-small | | MobileViT-S | 79.1 | 6.4 M | https://huggingface.co/apple/deeplabv3-mobilevit-small | ### BibTeX entry and citation info ```bibtex @inproceedings{vision-transformer, title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer}, author = {Sachin Mehta and Mohammad Rastegari}, year = {2022}, URL = {https://arxiv.org/abs/2110.02178} } ```
AkshaySg/gramCorrection
[ "pytorch", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
4
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}') model = AutoModel.from_pretrained('{MODEL_NAME}') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 1658 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss` Parameters of the fit()-Method: ``` { "epochs": 4, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'transformers.optimization.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 664, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
AlanDev/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 260.43 +/- 13.38 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
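The usage section above is left as a TODO; a hedged sketch of loading a Stable-Baselines3 checkpoint from the Hub and evaluating it (the repo id and filename below are placeholders, not the actual ones for this card):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename -- substitute the actual ones for this model.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```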
AlbertHSU/BertTEST
[ "pytorch" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 270.02 +/- 32.44 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Alberto15Romero/GptNeo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer datasets: - opus_books model-index: - name: mbart-large-50-finetuned-opus-en-pt-translation-finetuned-en-to-pt-dataset-opus-books results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-large-50-finetuned-opus-en-pt-translation-finetuned-en-to-pt-dataset-opus-books This model is a fine-tuned version of [Narrativa/mbart-large-50-finetuned-opus-en-pt-translation](https://huggingface.co/Narrativa/mbart-large-50-finetuned-opus-en-pt-translation) on the opus_books dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 79 | 1.5854 | 31.2219 | 26.9149 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
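As a hedged usage sketch (not part of the original card): loading the fine-tuned checkpoint from the local Trainer output directory and translating English to Portuguese, assuming the standard mBART-50 language codes `en_XX`/`pt_XX`:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed local path: the output_dir of the Trainer run described above.
ckpt = "mbart-large-50-finetuned-opus-en-pt-translation-finetuned-en-to-pt-dataset-opus-books"
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

tokenizer.src_lang = "en_XX"
inputs = tokenizer("The book was on the table.", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["pt_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```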
Aleenbo/Arcane
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 193.83 +/- 12.64 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Aleksandar/bert-srb-base-cased-oscar
[ "pytorch", "bert", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: keras tags: - image-segmentation --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ## Training Metrics | Epochs | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | |--- |--- |--- |--- |--- | | 1| 1.206| 0.636| 2.55| 0.555| | 2| 0.957| 0.696| 2.671| 0.598| | 3| 0.847| 0.729| 1.431| 0.612| | 4| 0.774| 0.751| 1.008| 0.689| | 5| 0.712| 0.771| 1.016| 0.705| ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
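The card does not show a loading snippet; a hedged sketch of pulling a Keras model saved to the Hub back down with `from_pretrained_keras` (the repo id is a placeholder, since the card does not name one, and the input shape is an assumption):

```python
import numpy as np
from huggingface_hub import from_pretrained_keras

# Placeholder repo id -- replace with the actual model repository.
model = from_pretrained_keras("<user>/<keras-image-segmentation-model>")

# Dummy input; the real input shape depends on how the model was exported.
dummy = np.random.rand(1, 128, 128, 3).astype("float32")
pred = model.predict(dummy)
print(pred.shape)
```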
Aleksandar/bert-srb-ner-setimes
[ "pytorch", "bert", "token-classification", "transformers", "generated_from_trainer", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2022-05-30T15:12:45Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: cewinharhar/iceCream results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # cewinharhar/iceCream This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.1909 - Validation Loss: 3.0925 - Epoch: 92 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.9926 | 4.0419 | 0 | | 3.9831 | 3.8247 | 1 | | 3.8396 | 3.7337 | 2 | | 3.7352 | 3.6509 | 3 | | 3.6382 | 3.5948 | 4 | | 3.5595 | 3.5458 | 5 | | 3.4845 | 3.4667 | 6 | | 3.4140 | 3.4460 | 7 | | 3.3546 | 3.4035 | 8 | | 3.2939 | 3.3571 | 9 | | 3.2420 | 3.3465 | 10 | | 3.1867 | 3.2970 | 11 | | 3.1418 | 3.2716 | 12 | | 3.0865 | 3.2609 | 13 | | 3.0419 | 3.2318 | 14 | | 2.9962 | 3.2279 | 15 | | 2.9551 | 3.1991 | 16 | | 2.9178 | 3.1656 | 17 | | 2.8701 | 3.1654 | 18 | | 2.8348 | 3.1372 | 19 | | 2.7988 | 3.1281 | 20 | | 2.7597 | 3.0978 | 21 | | 2.7216 | 3.1019 | 22 | | 2.6844 | 3.0388 | 23 | | 2.6489 | 3.0791 | 24 | | 2.6192 | 3.0885 | 25 | | 2.5677 | 3.0388 | 26 | | 2.5478 | 3.0530 | 27 | | 2.5136 | 3.0403 | 28 | | 2.4756 | 3.0521 | 29 | | 2.4454 | 3.0173 | 30 | | 2.4203 | 3.0079 | 31 | | 2.3882 | 3.0325 | 32 | | 2.3596 | 3.0066 | 33 | | 2.3279 | 2.9919 | 34 | | 2.2947 | 2.9871 | 35 | | 2.2712 | 2.9834 | 36 | | 2.2311 | 2.9917 | 37 | | 2.2022 | 2.9796 | 38 | | 2.1703 | 2.9641 | 39 | | 2.1394 | 2.9571 | 40 | | 2.1237 | 2.9662 | 41 | | 2.0949 | 2.9358 | 42 | | 2.0673 | 2.9653 | 43 | | 2.0417 | 2.9416 | 44 | | 2.0194 | 2.9531 | 45 | | 2.0009 | 2.9417 | 46 | | 1.9716 | 2.9325 | 47 | | 1.9488 | 2.9476 | 48 | | 1.9265 | 2.9559 | 49 | | 1.8975 | 2.9477 | 50 | | 1.8815 | 2.9429 | 51 | | 1.8552 | 2.9119 | 52 | | 1.8358 | 2.9377 | 53 | | 1.8226 | 2.9605 | 54 | | 1.7976 | 2.9446 | 55 | | 1.7677 | 2.9162 | 56 | | 1.7538 | 2.9292 | 57 | | 1.7376 | 2.9968 | 58 | | 1.7156 | 2.9525 | 59 | | 1.7001 | 2.9275 | 60 | | 1.6806 | 2.9714 | 61 | | 1.6582 | 2.9903 | 62 | | 1.6436 | 2.9363 | 63 | | 1.6254 | 2.9714 | 64 | | 1.6093 | 2.9804 | 65 | | 1.5900 | 2.9740 | 66 | | 1.5686 | 2.9835 | 67 | | 1.5492 | 3.0018 | 68 | | 1.5371 | 3.0088 | 69 | | 1.5245 | 2.9780 | 70 | | 1.5021 | 3.0176 | 71 | | 1.4839 | 2.9917 | 72 | | 1.4726 | 3.0602 | 73 | | 1.4568 | 3.0055 | 74 | | 1.4435 | 3.0186 | 75 | | 1.4225 | 2.9948 | 76 | | 1.4088 | 3.0270 | 77 | | 1.3947 | 3.0676 | 78 | | 1.3780 | 3.0615 | 79 | | 1.3627 | 3.0780 | 80 | | 1.3445 | 3.0491 | 81 | | 1.3293 | 3.0534 | 82 | | 1.3130 | 3.0460 | 83 | | 1.2980 | 3.0846 | 84 | | 1.2895 | 3.0709 | 85 | | 1.2737 | 3.0903 | 86 | | 1.2557 | 3.0854 | 87 | | 1.2499 | 3.1101 | 88 | | 1.2353 | 3.1181 | 89 | | 1.2104 | 3.1111 | 90 | | 1.2101 | 3.1153 | 91 | | 1.1909 | 3.0925 | 92 | ### Framework versions - Transformers 4.19.2 - TensorFlow 2.9.1 - Datasets 2.1.0 - Tokenizers 0.12.1
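In the table above the validation loss bottoms out around epochs 52 to 53 and then climbs while the training loss keeps falling, which suggests overfitting; a sketch (assuming a standard Keras `fit` loop, which the card does not show) of stopping early and keeping the best weights:

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,                 # stop after 5 epochs without improvement
    restore_best_weights=True,  # roll back to the best validation checkpoint
)

# model.fit(train_ds, validation_data=val_ds, epochs=100, callbacks=[early_stop])
```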
Aleksandar/distilbert-srb-base-cased-oscar
[ "pytorch", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: clementgyj/roberta-finetuned-squad-50k results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # clementgyj/roberta-finetuned-squad-50k This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.5281 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 9462, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.0876 | 0 | | 0.6879 | 1 | | 0.5281 | 2 | ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.0 - Datasets 2.2.2 - Tokenizers 0.12.1
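A short, hedged inference sketch for the checkpoint named in the card, using the standard question-answering pipeline (the question/context pair is made up for illustration):

```python
from transformers import pipeline

# If the repository only contains TensorFlow weights, also pass framework="tf".
qa = pipeline("question-answering", model="clementgyj/roberta-finetuned-squad-50k")
result = qa(
    question="How many training examples were used?",
    context="The checkpoint was fine-tuned on roughly 50k SQuAD training examples.",
)
print(result["answer"], result["score"])
```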
Aleksandar/electra-srb-ner-setimes-lr
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - swag model-index: - name: bert-base-uncased-finetuned-swag results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-swag This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
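The card gives no inference example; a hedged sketch of the usual multiple-choice pattern (the checkpoint path is the local Trainer output directory, and the prompt/endings are made up for illustration):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

ckpt = "bert-base-uncased-finetuned-swag"  # assumed local output_dir of the Trainer run above
tokenizer = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForMultipleChoice.from_pretrained(ckpt)

prompt = "A woman picks up a violin."
endings = ["She starts to play.", "She eats it.", "She drives away.", "She goes to sleep."]

# Each choice is encoded as (prompt, ending); the model expects (batch, num_choices, seq_len).
enc = tokenizer([prompt] * len(endings), endings, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**{k: v.unsqueeze(0) for k, v in enc.items()}).logits
print(endings[logits.argmax(-1).item()])
```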
Aleksandar/electra-srb-oscar
[ "pytorch", "electra", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "ElectraForMaskedLM" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="Tinchoroman/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
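`load_from_hub`, `evaluate_agent` and the `model` dictionary in the snippet above come from the course notebook rather than a pip-installable package; a hedged sketch of what a minimal `load_from_hub` could look like, assuming the checkpoint is the pickled dictionary the snippet indexes into:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download the pickled Q-learning checkpoint and return it as a dict."""
    local_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(local_path, "rb") as f:
        return pickle.load(f)
```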
Aleksandar1932/gpt2-rock-124439808
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-05-30T16:24:15Z
--- tags: - espnet - audio - text-to-speech language: ko datasets: - kss license: cc-by-4.0 --- ## ESPnet2 TTS model ### `imdanboy/kss_tts_train_jets_raw_phn_korean_cleaner_korean_jaso_train.total_count.ave` This model was trained by satoshi.2020 using kss recipe in [espnet](https://github.com/espnet/espnet/). ### Demo: How to use in ESPnet2 ```bash cd espnet git checkout c173c30930631731e6836c274a591ad571749741 pip install -e . cd egs2/kss/tts1 ./run.sh --skip_data_prep false --skip_train true --download_model imdanboy/kss_tts_train_jets_raw_phn_korean_cleaner_korean_jaso_train.total_count.ave ``` ## TTS config <details><summary>expand</summary> ``` config: conf/tuning/train_jets.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/tts_train_jets_raw_phn_korean_cleaner_korean_jaso ngpu: 1 seed: 777 num_workers: 4 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 4 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 46357 dist_launcher: null multiprocessing_distributed: true unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: false collect_stats: false write_collected_feats: false max_epoch: 1000 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - text2mel_loss - min - - train - text2mel_loss - min - - train - total_count - max keep_nbest_models: 5 nbest_averaging_interval: 0 grad_clip: -1 grad_clip_type: 2.0 grad_noise: false accum_grad: 1 no_forward_run: false resume: true train_dtype: float32 use_amp: false log_interval: 50 use_matplotlib: true use_tensorboard: true use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: 1000 batch_size: 20 valid_batch_size: null batch_bins: 2000000 valid_batch_bins: null train_shape_file: - exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/train/text_shape.phn - exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/train/speech_shape valid_shape_file: - exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/valid/text_shape.phn - exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/valid/speech_shape batch_type: numel valid_batch_type: null fold_length: - 150 - 204800 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/tr_no_dev/text - text - text - - dump/raw/tr_no_dev/wav.scp - speech - sound - - exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/train/collect_feats/pitch.scp - pitch - npy - - exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/train/collect_feats/energy.scp - energy - npy valid_data_path_and_name_and_type: - - dump/raw/dev/text - text - text - - dump/raw/dev/wav.scp - speech - sound - - exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/valid/collect_feats/pitch.scp - pitch - npy - - exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/valid/collect_feats/energy.scp - energy - npy allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adamw optim_conf: lr: 0.0002 betas: - 0.8 - 0.99 eps: 1.0e-09 weight_decay: 0.0 scheduler: exponentiallr scheduler_conf: gamma: 0.999875 optim2: adamw optim2_conf: lr: 0.0002 betas: - 0.8 - 0.99 eps: 1.0e-09 weight_decay: 0.0 scheduler2: 
exponentiallr scheduler2_conf: gamma: 0.999875 generator_first: true token_list: - <blank> - <unk> - <space> - ᄋ - ᅡ - ᅵ - ᅳ - ᆫ - ᄀ - ᅥ - ᆯ - ᄌ - ᄉ - ᄂ - ᅩ - ᄃ - ᄒ - . - ᅮ - ᄅ - ᅦ - ᆼ - ᄆ - ᅢ - ᅧ - ᅭ - ᄇ - ᆨ - ᆷ - ᆻ - ᆸ - ᄎ - ᅪ - '?' - ᄐ - ᄑ - ᆺ - ᄁ - ᅴ - ᅬ - ᅣ - ᄄ - ᅯ - ᆭ - ᅨ - ᅱ - ᇂ - ᄏ - ᄊ - ᆹ - ᅲ - ᆽ - ᇀ - ᄈ - ᇁ - ᄍ - ᆮ - ᅫ - ',' - ᆾ - '!' - ᆩ - ᆰ - ᆶ - ᅤ - ':' - ᆲ - ᆱ - ᆬ - ᅰ - '''' - '-' - ᆿ - ᆴ - ᆪ - <sos/eos> odim: null model_conf: {} use_preprocessor: true token_type: phn bpemodel: null non_linguistic_symbols: null cleaner: korean_cleaner g2p: korean_jaso feats_extract: fbank feats_extract_conf: n_fft: 1024 hop_length: 256 win_length: null fs: 24000 fmin: 0 fmax: null n_mels: 80 normalize: global_mvn normalize_conf: stats_file: exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/train/feats_stats.npz tts: jets tts_conf: generator_type: jets_generator generator_params: adim: 256 aheads: 2 elayers: 4 eunits: 1024 dlayers: 4 dunits: 1024 positionwise_layer_type: conv1d positionwise_conv_kernel_size: 3 duration_predictor_layers: 2 duration_predictor_chans: 256 duration_predictor_kernel_size: 3 use_masking: true encoder_normalize_before: true decoder_normalize_before: true encoder_type: transformer decoder_type: transformer conformer_rel_pos_type: latest conformer_pos_enc_layer_type: rel_pos conformer_self_attn_layer_type: rel_selfattn conformer_activation_type: swish use_macaron_style_in_conformer: true use_cnn_in_conformer: true conformer_enc_kernel_size: 7 conformer_dec_kernel_size: 31 init_type: xavier_uniform transformer_enc_dropout_rate: 0.2 transformer_enc_positional_dropout_rate: 0.2 transformer_enc_attn_dropout_rate: 0.2 transformer_dec_dropout_rate: 0.2 transformer_dec_positional_dropout_rate: 0.2 transformer_dec_attn_dropout_rate: 0.2 pitch_predictor_layers: 5 pitch_predictor_chans: 256 pitch_predictor_kernel_size: 5 pitch_predictor_dropout: 0.5 pitch_embed_kernel_size: 1 pitch_embed_dropout: 0.0 stop_gradient_from_pitch_predictor: true energy_predictor_layers: 2 energy_predictor_chans: 256 energy_predictor_kernel_size: 3 energy_predictor_dropout: 0.5 energy_embed_kernel_size: 1 energy_embed_dropout: 0.0 stop_gradient_from_energy_predictor: false generator_out_channels: 1 generator_channels: 512 generator_global_channels: -1 generator_kernel_size: 7 generator_upsample_scales: - 8 - 8 - 2 - 2 generator_upsample_kernel_sizes: - 16 - 16 - 4 - 4 generator_resblock_kernel_sizes: - 3 - 7 - 11 generator_resblock_dilations: - - 1 - 3 - 5 - - 1 - 3 - 5 - - 1 - 3 - 5 generator_use_additional_convs: true generator_bias: true generator_nonlinear_activation: LeakyReLU generator_nonlinear_activation_params: negative_slope: 0.1 generator_use_weight_norm: true segment_size: 64 idim: 76 odim: 80 discriminator_type: hifigan_multi_scale_multi_period_discriminator discriminator_params: scales: 1 scale_downsample_pooling: AvgPool1d scale_downsample_pooling_params: kernel_size: 4 stride: 2 padding: 2 scale_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 15 - 41 - 5 - 3 channels: 128 max_downsample_channels: 1024 max_groups: 16 bias: true downsample_scales: - 2 - 2 - 4 - 4 - 1 nonlinear_activation: LeakyReLU nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false follow_official_norm: false periods: - 2 - 3 - 5 - 7 - 11 period_discriminator_params: in_channels: 1 out_channels: 1 kernel_sizes: - 5 - 3 channels: 32 downsample_scales: - 3 - 3 - 3 - 3 - 1 max_downsample_channels: 1024 bias: true nonlinear_activation: LeakyReLU 
nonlinear_activation_params: negative_slope: 0.1 use_weight_norm: true use_spectral_norm: false generator_adv_loss_params: average_by_discriminators: false loss_type: mse discriminator_adv_loss_params: average_by_discriminators: false loss_type: mse feat_match_loss_params: average_by_discriminators: false average_by_layers: false include_final_outputs: true mel_loss_params: fs: 24000 n_fft: 1024 hop_length: 256 win_length: null window: hann n_mels: 80 fmin: 0 fmax: null log_base: null lambda_adv: 1.0 lambda_mel: 45.0 lambda_feat_match: 2.0 lambda_var: 1.0 lambda_align: 2.0 sampling_rate: 24000 cache_generator_outputs: true pitch_extract: dio pitch_extract_conf: reduction_factor: 1 use_token_averaged_f0: false fs: 24000 n_fft: 1024 hop_length: 256 f0max: 400 f0min: 80 pitch_normalize: global_mvn pitch_normalize_conf: stats_file: exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/train/pitch_stats.npz energy_extract: energy energy_extract_conf: reduction_factor: 1 use_token_averaged_energy: false fs: 24000 n_fft: 1024 hop_length: 256 win_length: null energy_normalize: global_mvn energy_normalize_conf: stats_file: exp/tts_stats_raw_phn_korean_cleaner_korean_jaso/train/energy_stats.npz required: - output_dir - token_list version: '202204' distributed: true ``` </details> ### Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
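Besides the shell recipe above, ESPnet2 models can usually be driven from Python as well; a hedged sketch (attribute names such as `fs` and the `"wav"` output key follow the common ESPnet2 `Text2Speech` interface, but may differ between versions):

```python
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "imdanboy/kss_tts_train_jets_raw_phn_korean_cleaner_korean_jaso_train.total_count.ave"
)
out = tts("안녕하세요, 만나서 반갑습니다.")          # Korean input text
sf.write("sample.wav", out["wav"].numpy(), tts.fs)  # assumed output key / sampling-rate attribute
```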
Aleksandar1932/gpt2-soul
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - yelp_review_full model-index: - name: modelo-teste results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # modelo-teste This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 125 | 1.1553 | 0.57 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
Aleksandar1932/gpt2-spanish-classics
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1362800111118659591/O6gxa7NN_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">erinkhoo.x</div> <div style="text-align: center; font-size: 14px;">@erinkhoo</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from erinkhoo.x. | Data | erinkhoo.x | | --- | --- | | Tweets downloaded | 3216 | | Retweets | 1795 | | Short tweets | 181 | | Tweets kept | 1240 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/navmzjcl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @erinkhoo's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3uoi8z43) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3uoi8z43/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/erinkhoo') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Aleksandra/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: nl license: mit datasets: - dbrd model-index: - name: robbert-v2-dutch-sentiment results: - task: type: text-classification name: Text Classification dataset: name: dbrd type: sentiment-analysis split: test metrics: - name: Accuracy type: accuracy value: 0.93325 widget: - text: "Ik erken dat dit een boek is, daarmee is alles gezegd." - text: "Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!" thumbnail: "https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo.png" tags: - Dutch - Flemish - RoBERTa - RobBERT --- <p align="center"> <img src="https://github.com/iPieter/RobBERT/raw/master/res/robbert_logo_with_name.png" alt="RobBERT: A Dutch RoBERTa-based Language Model" width="75%"> </p> # RobBERT finetuned for sentiment analysis on DBRD This is a finetuned model based on [RobBERT (v2)](https://huggingface.co/pdelobelle/robbert-v2-dutch-base). We used [DBRD](https://huggingface.co/datasets/dbrd), which consists of book reviews from [hebban.nl](https://hebban.nl). Hence our example sentences about books. We did some limited experiments to test if this also works for other domains, but this was not exactly amazing. We released a distilled model and a `base`-sized model. Both models perform quite well, so there is only a slight performance tradeoff: | Model | Identifier | Layers | #Params. | Accuracy | |----------------|------------------------------------------------------------------------|--------|-----------|-----------| | RobBERT (v2) | [`DTAI-KULeuven/robbert-v2-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment) | 12 | 116 M |93.3* | | RobBERTje - Merged (p=0.5)| [`DTAI-KULeuven/robbertje-merged-dutch-sentiment`](https://huggingface.co/DTAI-KULeuven/robbertje-merged-dutch-sentiment) | 6 | 74 M |92.9 | *The results of RobBERT are of a different run than the one reported in the paper. # Training data and setup We used the [Dutch Book Reviews Dataset (DBRD)](https://huggingface.co/datasets/dbrd) from van der Burgh et al. (2019). Originally, these reviews got a five-star rating, but this has been converted to positive (⭐️⭐️⭐️⭐️ and ⭐️⭐️⭐️⭐️⭐️), neutral (⭐️⭐️⭐️) and negative (⭐️ and ⭐️⭐️). We used 19.5k reviews for the training set, 528 reviews for the validation set and 2224 to calculate the final accuracy. The validation set was used to evaluate a random hyperparameter search over the learning rate, weight decay and gradient accumulation steps. The full training details are available in [`training_args.bin`](https://huggingface.co/DTAI-KULeuven/robbert-v2-dutch-sentiment/blob/main/training_args.bin) as a binary PyTorch file. # Limitations and biases - The domain of the reviews is limited to book reviews. - Most authors of the book reviews were women, which could have caused [a difference in performance for reviews written by men and women](https://www.aclweb.org/anthology/2020.findings-emnlp.292). - This is _not_ the same model as we discussed in our paper, due to some conversion issues between the original training two years ago and now, it was easier to retrain this model. The accuracy is slightly lower, but the model was trained on the beginning of the reviews instead of the end of the reviews. ## Credits and citation This project is created by [Pieter Delobelle](https://people.cs.kuleuven.be/~pieter.delobelle), [Thomas Winters](https://thomaswinters.be) and [Bettina Berendt](https://people.cs.kuleuven.be/~bettina.berendt/). 
If you would like to cite our paper or models, you can use the following BibTeX: ``` @inproceedings{delobelle2020robbert, title = "{R}ob{BERT}: a {D}utch {R}o{BERT}a-based {L}anguage {M}odel", author = "Delobelle, Pieter and Winters, Thomas and Berendt, Bettina", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.findings-emnlp.292", doi = "10.18653/v1/2020.findings-emnlp.292", pages = "3255--3265" } ```
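The card lists widget examples but no loading code; a short sketch using the standard text-classification pipeline with the checkpoint named earlier in this card (one of the card's own example sentences is reused as input):

```python
from transformers import pipeline

classifier = pipeline("sentiment-analysis", model="DTAI-KULeuven/robbert-v2-dutch-sentiment")
print(classifier("Prachtig verhaal, heel mooi verteld en een verrassend einde... Een topper!"))
```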
AlekseyKorshuk/comedy-scripts
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- language: - en thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg tags: - question-answering license: apache-2.0 datasets: - squad metrics: - squad --- # DistilBERT with a second step of distillation ## Model description This model replicates the "DistilBERT (D)" model from Table 2 of the [DistilBERT paper](https://arxiv.org/pdf/1910.01108.pdf). In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, but with a BERT model (also fine-tuned on SQuAD v1.1) acting as a teacher for a second step of task-specific distillation. In this version, the following pre-trained models were used: * Student: `distilbert-base-uncased` * Teacher: `lewtun/bert-base-uncased-finetuned-squad-v1` ## Training data This model was trained on the SQuAD v1.1 dataset which can be obtained from the `datasets` library as follows: ```python from datasets import load_dataset squad = load_dataset('squad') ``` ## Training procedure ## Eval results | | Exact Match | F1 | |------------------|-------------|------| | DistilBERT paper | 79.1 | 86.9 | | Ours | 78.4 | 86.5 | The scores were calculated using the `squad` metric from `datasets`. ### BibTeX entry and citation info ```bibtex @misc{sanh2020distilbert, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, year={2020}, eprint={1910.01108}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
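The card describes the second, task-specific distillation step but leaves the training procedure section empty; as an illustration only (the temperature and loss weighting below are assumptions, not the paper's exact recipe), the usual soft-target loss looks roughly like this:

```python
import torch.nn.functional as F

def soft_target_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened start/end span distributions."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2

# Typically combined with the usual cross-entropy on the gold start/end positions, e.g.
# loss = alpha * ce_loss + (1 - alpha) * (soft_target_loss(s_start, t_start) + soft_target_loss(s_end, t_end))
```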
AlekseyKulnevich/Pegasus-Summarization
[ "pytorch", "pegasus", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "PegasusForConditionalGeneration" ], "model_type": "pegasus", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="voleg44/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Alerosae/SocratesGPT-2
[ "pytorch", "gpt2", "feature-extraction", "en", "transformers", "text-generation" ]
text-generation
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-scratch-powo_mgh_pt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-scratch-powo_mgh_pt This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.0408 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 5 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 40 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 6.4584 | 0.2 | 200 | 4.7806 | | 4.6385 | 0.41 | 400 | 4.3704 | | 4.2219 | 0.61 | 600 | 4.0727 | | 3.994 | 0.81 | 800 | 3.8772 | | 3.8048 | 1.01 | 1000 | 3.6894 | | 3.6722 | 1.22 | 1200 | 3.5732 | | 3.4828 | 1.42 | 1400 | 3.4203 | | 3.3648 | 1.62 | 1600 | 3.3634 | | 3.3918 | 1.83 | 1800 | 3.2685 | | 3.3919 | 2.03 | 2000 | 3.2027 | | 3.1715 | 2.23 | 2200 | 3.1365 | | 3.0635 | 2.43 | 2400 | 3.1228 | | 3.0804 | 2.64 | 2600 | 3.0595 | | 3.0468 | 2.84 | 2800 | 3.0318 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
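The hyperparameters listed above roughly correspond to the following `TrainingArguments`. This is a hypothetical reconstruction: the `output_dir` is a placeholder, and the data/model wiring is omitted.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-scratch-powo_mgh_pt",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=5,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=8,   # total train batch size 5 * 8 = 40
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```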
Alessandro/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-05-30T17:46:51Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="PraveenKishore/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru
[ "pytorch", "xlm-roberta", "question-answering", "en", "ru", "multilingual", "arxiv:1912.09723", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
question-answering
{ "architectures": [ "XLMRobertaForQuestionAnswering" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10,012
null
--- license: mit tags: - generated_from_trainer model-index: - name: bart-cnn-science-v3-e1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-cnn-science-v3-e1 This model is a fine-tuned version of [theojolliffe/bart-cnn-science](https://huggingface.co/theojolliffe/bart-cnn-science) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 398 | 1.0643 | 51.6454 | 31.8213 | 33.7711 | 49.3471 | 141.5926 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
AlexN/xls-r-300m-fr
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "fr", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
17
null
--- license: mit tags: - generated_from_keras_callback model-index: - name: nouman10/robertabase-claims-3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # nouman10/robertabase-claims-3 This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0310 - Validation Loss: 0.1227 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -861, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.1380 | 0.1630 | 0 | | 0.0310 | 0.1227 | 1 | ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.0 - Datasets 2.2.2 - Tokenizers 0.12.1
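The optimizer configuration above (AdamWeightDecay with a linear-decay schedule and 1000 warmup steps) can be rebuilt with `transformers.create_optimizer`. The sketch below is an assumed reconstruction; `num_train_steps` is a placeholder that must match the actual training run.

```python
from transformers import create_optimizer

# Hypothetical reconstruction of the optimizer described above.
# num_train_steps is a placeholder; set it to steps_per_epoch * num_epochs.
optimizer, lr_schedule = create_optimizer(
    init_lr=2e-5,
    num_train_steps=2000,      # placeholder
    num_warmup_steps=1000,
    weight_decay_rate=0.01,
)
```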
AlexN/xls-r-300m-pt
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "pt", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "robust-speech-event", "mozilla-foundation/common_voice_8_0", "generated_from_trainer", "hf-asr-leaderboard", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion-tweets results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.9355 - name: F1 type: f1 value: 0.9358599960917737 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion-tweets This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1572 - Accuracy: 0.9355 - F1: 0.9359 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.1672 | 0.932 | 0.9320 | | No log | 2.0 | 500 | 0.1572 | 0.9355 | 0.9359 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
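The accuracy and F1 figures above are typically produced by a `compute_metrics` callback passed to the Trainer; a minimal sketch follows. The weighted F1 averaging is an assumption and is not stated in the card.

```python
import numpy as np
from datasets import load_metric  # datasets < 3.0 API

accuracy_metric = load_metric("accuracy")
f1_metric = load_metric("f1")

def compute_metrics(eval_pred):
    # eval_pred is (logits, labels) as passed by the Trainer
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_metric.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1_metric.compute(predictions=preds, references=labels, average="weighted")["f1"],
    }
```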
AlexaMerens/Owl
[ "license:cc" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer model-index: - name: deberta-base-combined-squad1-aqa-newsqa-50 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deberta-base-combined-squad1-aqa-newsqa-50 This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7756 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.9401 | 1.0 | 18532 | 0.8266 | | 0.6811 | 2.0 | 37064 | 0.7756 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
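Once published, the checkpoint can be queried with the question-answering pipeline. The sketch below is hypothetical: `<hub-id-of-this-checkpoint>` is a placeholder (the card does not state the full hub id), and the question/context strings are illustrative.

```python
from transformers import pipeline

# "<hub-id-of-this-checkpoint>" is a placeholder for the actual repo id.
qa = pipeline("question-answering", model="<hub-id-of-this-checkpoint>")

result = qa(
    question="What data was the model fine-tuned on?",
    context="The model was fine-tuned on a combination of SQuAD v1, "
            "adversarial QA and NewsQA examples.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```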
Alexander-Learn/bert-finetuned-ner
[ "pytorch", "tensorboard", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - da tags: - summarization widget: - text: "Det nye studie Cognitive Science på Aarhus Universitet, som i år havde Østjyllands højeste adgangskrav på 11,7 i karaktergennemsnit, udklækker det første hold bachelorer til sommer. Men når de skal læse videre på kandidaten må de til udlandet, hvis ikke de vil skifte til et andet fag. Aarhus Universitet kan nemlig ikke nå at oprette en kandidat i Cognitive Science til næste sommer, hvor det første hold bachelorer er færdige. Det rammer blandt andre Julie Sohn, der startede på uddannelsen i sommeren 2015, og derfor kun mangler et år, før hun er bachelor. - Jeg synes, at det er ærgerligt, at vi som nye studerende på et populært studie ikke kan tage en kandidat i Danmark, siger hun. Bacheloruddannelsen i Cognitive Science blev oprettet af Aarhus Universitet i 2015, og uddannelsen kombinerer viden om menneskelig adfærd med avanceret statistik. Da der endnu ikke er oprettet en kandidatuddannelse indenfor dette område, har Julie Sohn i stedet mulighed for at læse en kandidatgrad i for eksempel informationsvidenskab. Hun vil dog hellere fortsætte på Cognitive Science, og derfor overvejer hun nu at læse videre i udlandet. - Det ser ud til, at det er den eneste mulighed, hvis man gerne vil læse videre på noget, der faktisk passer ind til vores studie, siger hun. Nye regler giver forsinkelse På Aarhus Universitet havde man håbet på at have kandidatuddannelsen klar, når det første hold bachelorer bliver færdige til sommer. Arbejdet er dog blevet forsinket, fordi der er kommet nye regler for, hvornår man må oprette en uddannelse, fortæller Niels Lehmann, prodekan på fakultetet Arts, som Cognitive Science hører under. Det er nogle meget dygtige studerende, der kommer ind på uddannelsen, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast. NIELS LEHMANN, PRODEKAN, AARHUS UNIVERSITET Tidligere skulle Danmarks Akkrediteringsinstitution se alle nye uddannelser efter i sømmene for at sikre, at kvaliteten var i orden. Nu skal uddannelsesinstitutionerne selv stå for det kvalitetstjek. Men det tjek har Aarhus Universitet endnu ikke fået grønt lys til selv at udføre, fortæller prodekanen. - Vi ville meget gerne have kunnet nå at få et udbud på kandidaten i gang i 2018, men så længe man er under institutionsakkreditering, så kan man ikke ansøge om nye uddannelser, siger han. Det er endnu usikkert, hvornår Aarhus Universitet kan oprette kandidaten i Cognitive Science. Hvis de får alle de nødvendige godkendelser, kan den tidligst være klar i 2019. Prodekan Niels Lehmann frygter, at Danmark kommer til at miste nogle af landets skarpeste studerende, hvis de er nødt til at rejse til udlandet for at gøre deres uddannelse færdig. - Det er nogle meget, meget dygtige studerende, der kommer ind på denne uddannelse, og det er klart, at de i et vist omfang vil orientere sig mod udlandet, hvor man så kan forestille sig, at de bider sig fast, siger han. Hos Danmarks Akkrediteringsinstitution forstår man godt, at universitets ansatte og studenrede ærgrer sig. - Jeg kan godt forstå, at Aarhus Universitet ærgrer sig over, at det trækker ud, og at der går noget tid, før man får mulighed for at oprette nye uddannelser, og at man ikke har fået den genvej til at oprette nye uddannelser, som ville være fuldt med, hvis man havde opnået en positiv institutionsakkreditering, siger kommunikationsansvarlig Daniel Sebastian Larsen. 
I år var Cognitive Science i Aarhus den uddannelse i Danmark, der havde det fjerde højeste karakterkrav - det højeste var 'AP Graduate in Marketing Management' på Erhvervsakademi Sjælland med et krav på 12,3." example_title: "Summarization" --- This repository contains a model for Danish abstractive summarisation of news articles. The summariser is based on a language-specific mT5-base, where the vocabulary is condensed to include tokens used in Danish and English. The model is fine-tuned using an abstractive subset of the DaNewsroom dataset (Varab & Schluter, 2020), according to the binned density categories employed in Newsroom (Grusky et al., 2019).
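A minimal inference sketch, assuming the standard summarization pipeline: the checkpoint id is not stated in this card, so `<this-checkpoint>` is a placeholder, and the article string stands in for the Danish widget example above.

```python
from transformers import pipeline

# "<this-checkpoint>" is a placeholder for this repository's hub id.
summarizer = pipeline("summarization", model="<this-checkpoint>")

article = "Det nye studie Cognitive Science på Aarhus Universitet ..."  # Danish news article
summary = summarizer(article, max_length=64, min_length=16, do_sample=False)
print(summary[0]["summary_text"])
```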
Alexander-Learn/bert-finetuned-squad-accelerate
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: keras tags: - image-segmentation --- ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 0.001, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ## Training Metrics | Epochs | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | |--- |--- |--- |--- |--- | | 1| 1.189| 0.639| 2.374| 0.596| | 2| 0.954| 0.697| 1.89| 0.59| | 3| 0.84| 0.732| 1.3| 0.651| | 4| 0.77| 0.753| 1.014| 0.677| | 5| 0.704| 0.773| 1.053| 0.668| ## Model Plot <details> <summary>View Model Plot</summary> ![Model Image](./model.png) </details>
Amit29/t5-small-finetuned-xsum
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: gpl-3.0 --- A Wav2Vec2 model trained on audio clips from Arabic shows in the Emirati dialect.
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: en thumbnail: http://www.huggingtweets.com/binance-dydx-magiceden/1653996837144/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1529814669493682176/BqZU57Cf_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1490589455786573824/M5_HK15F_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1364590285255290882/hjnIm9bV_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Magic Eden 🪄 & Binance & dYdX</div> <div style="text-align: center; font-size: 14px;">@binance-dydx-magiceden</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Magic Eden 🪄 & Binance & dYdX. | Data | Magic Eden 🪄 | Binance | dYdX | | --- | --- | --- | --- | | Tweets downloaded | 3249 | 3250 | 1679 | | Retweets | 141 | 194 | 463 | | Short tweets | 908 | 290 | 40 | | Tweets kept | 2200 | 2766 | 1176 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/28typldl/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @binance-dydx-magiceden's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/196gmkng) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/196gmkng/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/binance-dydx-magiceden') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnonymousSub/roberta-base_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - vision datasets: - imagenet-21k inference: false --- # Vision Transformer (base-sized model) Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) by Dosovitskiy et al. and first released in [this repository](https://github.com/google-research/vision_transformer). However, the weights were converted from the [timm repository](https://github.com/rwightman/pytorch-image-models) by Ross Wightman, who already converted the weights from JAX to PyTorch. Credits go to him. Disclaimer: The team releasing ViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description The Vision Transformer (ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Images are presented to the model as a sequence of fixed-size patches (resolution 16x16), which are linearly embedded. One also adds a [CLS] token to the beginning of a sequence to use it for classification tasks. One also adds absolute position embeddings before feeding the sequence to the layers of the Transformer encoder. Note that this model does not provide any fine-tuned heads, as these were zero'd by Google researchers. However, the model does include the pre-trained pooler, which can be used for downstream tasks (such as image classification). By pre-training the model, it learns an inner representation of images that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled images for instance, you can train a standard classifier by placing a linear layer on top of the pre-trained encoder. One typically places a linear layer on top of the [CLS] token, as the last hidden state of this token can be seen as a representation of an entire image. ## Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=google/vit) to look for fine-tuned versions on a task that interests you. 
### How to use Here is how to use this model in PyTorch: ```python from transformers import ViTFeatureExtractor, ViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') model = ViTModel.from_pretrained('google/vit-base-patch16-224-in21k') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` Here is how to use this model in JAX/Flax: ```python from transformers import ViTFeatureExtractor, FlaxViTModel from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained('google/vit-base-patch16-224-in21k') model = FlaxViTModel.from_pretrained('google/vit-base-patch16-224-in21k') inputs = feature_extractor(images=image, return_tensors="np") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state ``` ## Training data The ViT model was pretrained on [ImageNet-21k](http://www.image-net.org/), a dataset consisting of 14 million images and 21k classes. ## Training procedure ### Preprocessing The exact details of preprocessing of images during training/validation can be found [here](https://github.com/google-research/vision_transformer/blob/master/vit_jax/input_pipeline.py). Images are resized/rescaled to the same resolution (224x224) and normalized across the RGB channels with mean (0.5, 0.5, 0.5) and standard deviation (0.5, 0.5, 0.5). ### Pretraining The model was trained on TPUv3 hardware (8 cores). All model variants are trained with a batch size of 4096 and learning rate warmup of 10k steps. For ImageNet, the authors found it beneficial to additionally apply gradient clipping at global norm 1. Pre-training resolution is 224. ## Evaluation results For evaluation results on several image classification benchmarks, we refer to tables 2 and 5 of the original paper. Note that for fine-tuning, the best results are obtained with a higher resolution (384x384). Of course, increasing the model size will result in better performance. ### BibTeX entry and citation info ```bibtex @misc{wu2020visual, title={Visual Transformers: Token-based Image Representation and Processing for Computer Vision}, author={Bichen Wu and Chenfeng Xu and Xiaoliang Dai and Alvin Wan and Peizhao Zhang and Zhicheng Yan and Masayoshi Tomizuka and Joseph Gonzalez and Kurt Keutzer and Peter Vajda}, year={2020}, eprint={2006.03677}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @inproceedings{deng2009imagenet, title={Imagenet: A large-scale hierarchical image database}, author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li}, booktitle={2009 IEEE conference on computer vision and pattern recognition}, pages={248--255}, year={2009}, organization={Ieee} } ```
AnonymousSub/rule_based_bert_hier_diff_equal_wts_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: finetuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [federicopascual/finetuning-sentiment-model-3000-samples](https://huggingface.co/federicopascual/finetuning-sentiment-model-3000-samples) on the imdb dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b5-segments-warehouse1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b5-segments-warehouse1 This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the jakka/warehouse_part1 dataset. It achieves the following results on the evaluation set: - Loss: 0.1610 - Mean Iou: 0.6952 - Mean Accuracy: 0.8014 - Overall Accuracy: 0.9648 - Per Category Iou: [0.0, 0.47153295365063086, 0.9293854681828234, 0.9766069961659746, 0.927007550222462, 0.9649404794739765, 0.9824606440795911, 0.8340592613982738, 0.9706739467997174, 0.653761891900003, 0.0, 0.8080046149867717, 0.75033588410538, 0.6921465280057791, 0.7522124809345331, 0.7548461579766955, 0.3057219434101416, 0.5087799410519325, 0.84829211455404, 0.7730356409704979] - Per Category Accuracy: [nan, 0.9722884260421271, 0.9720560851996344, 0.9881427437833682, 0.9650114633107388, 0.9828538231066912, 0.9897027752946145, 0.9071521422402136, 0.9848998109819413, 0.6895634832705517, 0.0, 0.8704126720181029, 0.8207667731629393, 0.7189631369929214, 0.8238982104266324, 0.8620090549531412, 0.3522998155172771, 0.5387075151368637, 0.9081104400345125, 0.8794092789466661] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 0.1656 | 1.0 | 787 | 0.1917 | 0.5943 | 0.6937 | 0.9348 | [0.0, 0.8760430595457738, 0.8113714411434076, 0.9533787339343942, 0.8499988352439646, 0.9330256290984922, 0.964368918196211, 0.6984009498117659, 0.9341093239597545, 0.288411561596369, 0.0, 0.6496866199024376, 0.4510074387900882, 0.5206343319728309, 0.6377305875444397, 0.5391733301507737, 0.1395685713288422, 0.390702947845805, 0.6999919374344916, 0.548023343373494] | [nan, 0.9502542152644661, 0.9516900451328754, 0.9788975544390225, 0.921821413759201, 0.9534230318615367, 0.9778020069070933, 0.8108538425970355, 0.970571911491369, 0.2993067645848501, 0.0, 0.7454496363566233, 0.5849840255591054, 
0.5858306866277158, 0.7137540570947559, 0.6925710548100606, 0.16576498144808574, 0.4165357186026834, 0.8142326593390103, 0.6474578532983408] | | 0.0948 | 2.0 | 1574 | 0.2058 | 0.6310 | 0.7305 | 0.9442 | [0.0, 0.904077233776714, 0.8616556242304713, 0.9604692135700761, 0.8306854004041632, 0.9459690932012119, 0.9714777936344227, 0.7463801249809481, 0.9197830038961162, 0.4759644364074744, 0.0, 0.7133768631713745, 0.4878118726699168, 0.5403469048526253, 0.6267211124010835, 0.6280780328151242, 0.11116434156063161, 0.4757211293446132, 0.7386220435315599, 0.6814722192019137] | [nan, 0.9530795697109564, 0.9481439135801821, 0.9753750826203033, 0.9328161802391284, 0.9783733696392768, 0.9831560736299451, 0.8544532947139754, 0.9700176894451403, 0.5598936405938401, 0.0, 0.8212854589792271, 0.5434504792332269, 0.5765256977221256, 0.7602586827898242, 0.745275787709383, 0.12024542420662065, 0.5128732019823522, 0.8080522939565592, 0.8363729371469241] | | 0.0595 | 3.0 | 2361 | 0.1363 | 0.6578 | 0.7540 | 0.9494 | [0.0, 0.9109388123768081, 0.8466263269727539, 0.965583073696094, 0.8848508600101197, 0.9507919193853351, 0.9742807972055659, 0.7672266040033193, 0.9571650494933543, 0.5580972230045627, 0.0, 0.7572676505482382, 0.5338298840118263, 0.5743160573368553, 0.6964399439112182, 0.6369583059750492, 0.19255896751223853, 0.49017131449756574, 0.7563405327946686, 0.7018448645266491] | [nan, 0.9587813659877967, 0.9568298005631468, 0.9842947615263231, 0.9380059570384915, 0.9734457175747111, 0.9839202800499454, 0.863077218359317, 0.9757816512090675, 0.6272609287455287, 0.0, 0.8589569413670591, 0.5999361022364217, 0.6161844118746441, 0.7983763527021668, 0.793146442915981, 0.2242190576871256, 0.5288397085810358, 0.8216978654762351, 0.8232729860771318] | | 0.0863 | 4.0 | 3148 | 0.1706 | 0.6597 | 0.7678 | 0.9537 | [0.0, 0.5911845175607978, 0.8922572171811833, 0.9657396689703207, 0.8726664918778465, 0.948172990516989, 0.9741643734457509, 0.7832072821045744, 0.9578631876788363, 0.5869565217391305, 0.0, 0.7602876424039574, 0.5747447162194254, 0.6642950791717092, 0.6978602093118107, 0.7122118073263809, 0.21745086578505152, 0.5091171801864137, 0.763416879968237, 0.7220314268720861] | [nan, 0.9656626144746107, 0.9588916966191391, 0.9766109980050623, 0.9234167566678667, 0.9783156758536367, 0.9891284919047324, 0.8876447135391675, 0.9773653302095363, 0.6623721946123896, 0.0, 0.8391697702425289, 0.6185942492012779, 0.6961703584876796, 0.8060121894956657, 0.8277923697200732, 0.24677155234956366, 0.5498060503499884, 0.8475353565667555, 0.8369956852453183] | | 0.0849 | 5.0 | 3935 | 0.1529 | 0.6489 | 0.7616 | 0.9535 | [0.0, 0.34717493700692625, 0.9200786785121082, 0.9707860061715432, 0.9064316496153364, 0.9571373496125165, 0.9765647396031262, 0.7914886053951578, 0.9636858999629485, 0.5253852888123762, 0.0, 0.7668434757450091, 0.6228696113699357, 0.5646135260344276, 0.7194371537530142, 0.7276571750775304, 0.13134474327628362, 0.5398065590178835, 0.8087983436006237, 0.7371620697069805] | [nan, 0.9673995855258336, 0.9622823082917784, 0.9832096263122092, 0.9590923200613435, 0.9794833291868915, 0.9849481430590119, 0.8741570190973889, 0.9814726613968338, 0.5661042702035389, 0.0, 0.8519369313384734, 0.674888178913738, 0.5955861885708164, 0.7973710835377057, 0.8440933293815855, 0.139191177994735, 0.5807830511082053, 0.8902258318640507, 0.8387304835194164] | | 0.0652 | 6.0 | 4722 | 0.1776 | 0.6701 | 0.7802 | 0.9598 | [0.0, 0.442020662403383, 0.9221209597093164, 0.9723970198449976, 0.9094898951877407, 0.958969887541612, 0.9774286126326331, 
0.8043337900190548, 0.9641322534475246, 0.524194500874002, 0.0, 0.7732021981650511, 0.6714277552419585, 0.6791383524722951, 0.7265590222386986, 0.7252668038047013, 0.25612624095650144, 0.512317443386938, 0.8223912256195354, 0.7602526763224181] | [nan, 0.9667776521571092, 0.968306375662177, 0.9871287057126554, 0.9515142073239339, 0.9800501491032743, 0.9870913605013194, 0.8911998464531551, 0.9789458602211063, 0.5619638504637396, 0.0, 0.8429926328466184, 0.750926517571885, 0.7091730161871252, 0.8058454540303847, 0.8431735260151052, 0.2957320232987169, 0.5489159698031933, 0.8944742469145065, 0.8592366887593968] | | 0.0516 | 7.0 | 5509 | 0.2204 | 0.6782 | 0.7854 | 0.9562 | [0.0, 0.5972965874238374, 0.9024890361234837, 0.9727685140940331, 0.915582953759141, 0.9598962357171329, 0.9798718588278901, 0.8112726586102719, 0.9047252363294271, 0.6408527982442389, 0.0, 0.7886848740988032, 0.676712646342877, 0.5672950158399087, 0.7336613818739761, 0.7298649456617311, 0.3028603088856569, 0.5060868673401364, 0.8269845785168136, 0.7471687598272396] | [nan, 0.9698273468544609, 0.9632905651879291, 0.9861640741314249, 0.9551792854314081, 0.9817079843391511, 0.9899518141518776, 0.8996100259110301, 0.9832172012468946, 0.6987812984710835, 0.0, 0.8565569379384828, 0.7460702875399361, 0.593452450290354, 0.8111955580377016, 0.848355084979611, 0.3625810998486827, 0.5422458600265925, 0.8997261507296395, 0.834927271918509] | | 0.1051 | 8.0 | 6296 | 0.1860 | 0.6731 | 0.7789 | 0.9575 | [0.0, 0.44805540920356957, 0.9045125103512419, 0.9742941726927242, 0.9171717803896707, 0.9608739687771942, 0.9806696534895757, 0.8165927346840907, 0.9677688538979997, 0.6195552331193943, 0.0, 0.795984684169727, 0.6862710467443778, 0.573071397774824, 0.7390593444665892, 0.746059006435751, 0.2037963564144674, 0.5303406505500898, 0.8387988518436741, 0.7590468131997875] | [nan, 0.9709112878685233, 0.966379770128131, 0.9872427322752713, 0.9529925896087971, 0.9834568092767589, 0.9900317817435064, 0.8913394344939497, 0.9851288999243455, 0.6704124592447216, 0.0, 0.871338387626268, 0.7448562300319489, 0.5994265432176736, 0.8121846392929121, 0.8435414473616973, 0.2212134402918558, 0.5609595288067426, 0.8906947518475448, 0.8579244695520661] | | 0.0619 | 9.0 | 7083 | 0.2919 | 0.6996 | 0.7903 | 0.9579 | [0.0, 0.934913158921961, 0.9053172937262943, 0.9749731654503406, 0.8705131863049136, 0.9625421596476281, 0.9801264786114002, 0.8223383305806123, 0.9066864104553713, 0.6468175775129386, 0.0, 0.7950479182280621, 0.7176821075997429, 0.5689160215594734, 0.7424713897302829, 0.7480081111150989, 0.3071719253739231, 0.5035704204000125, 0.8359422295252097, 0.7696666024282135] | [nan, 0.9682325320018036, 0.9702179964865137, 0.9871538608460199, 0.9606411126417358, 0.9816951395784177, 0.9890656141613147, 0.9035010425481796, 0.9836680314909386, 0.689949669209585, 0.0, 0.8547140781629688, 0.7850479233226837, 0.5903872774743949, 0.8138309496636962, 0.8520138583707216, 0.3614203096822337, 0.5292682658813446, 0.9065161120906329, 0.8882611983452693] | | 0.081 | 10.0 | 7870 | 0.2470 | 0.6804 | 0.7921 | 0.9583 | [0.0, 0.4404433924045006, 0.9318621565838054, 0.9751204660574527, 0.8701648407446415, 0.9625333515302946, 0.9811772580795882, 0.8257730976318673, 0.9694596723226286, 0.6262599628453287, 0.0, 0.8035308913444122, 0.7247258740455824, 0.5731919576321138, 0.7446832704519876, 0.7540709586972932, 0.2964031339031339, 0.5176075672651548, 0.8402309249924604, 0.7699341552529259] | [nan, 0.9683524762943433, 0.9703483634609842, 0.9874040565137937, 0.9560906426120769, 
0.9828287794111833, 0.9897414692905638, 0.9071739528715878, 0.9809845681174846, 0.6616061536513564, 0.0, 0.8707555296507566, 0.8066453674121405, 0.5982298533423343, 0.8269010675926151, 0.8575633386818196, 0.3450448769769707, 0.5489928903442743, 0.9145158870090407, 0.8764289844757795] | | 0.0595 | 11.0 | 8657 | 0.1520 | 0.6754 | 0.7803 | 0.9583 | [0.0, 0.43998949915443775, 0.9316636729918347, 0.974311900634481, 0.90408659589869, 0.9621039259469353, 0.9814528086580536, 0.8173484866921386, 0.9299168519752622, 0.5981595278841879, 0.0, 0.79896542666047, 0.7130791649318979, 0.5767892232828117, 0.7434904893608313, 0.7476740572849074, 0.2818679619421856, 0.5013427236914975, 0.8417679322268942, 0.7636900967723242] | [nan, 0.9604694708457627, 0.9682111157218825, 0.9850226034689381, 0.9629913194164226, 0.9838887233262218, 0.9906282066977372, 0.8790295141463755, 0.9828138682520776, 0.6217973473457631, 0.0, 0.8472869246956067, 0.7660702875399361, 0.601589754313674, 0.8233235396482367, 0.8360910400932068, 0.3211657649814481, 0.5272243772183335, 0.8880687999399782, 0.8793425559361239] | | 0.0607 | 12.0 | 9444 | 0.1907 | 0.6792 | 0.7814 | 0.9611 | [0.0, 0.4394265102382861, 0.9325678358934418, 0.9751503005414947, 0.9213536629526586, 0.9630218995457999, 0.9808145244188059, 0.8160516650442948, 0.9402095421968347, 0.5678403556289702, 0.0, 0.7897903639847522, 0.717973174366617, 0.6351749265433101, 0.7451406149738536, 0.7539060338307724, 0.2810049109433409, 0.5169863186167534, 0.8447414560224139, 0.7628612943763745] | [nan, 0.964392093449931, 0.9699039597844642, 0.9860071181495944, 0.9689476561441872, 0.9817555601847723, 0.9915172012546744, 0.8703445207331861, 0.9829836512368835, 0.5919660662847014, 0.0, 0.8320126171608817, 0.7695846645367412, 0.6606869598697208, 0.8177192854656857, 0.8353858575122385, 0.31786995004456603, 0.541465665967056, 0.8991915819484563, 0.8640852275254659] | | 0.054 | 13.0 | 10231 | 0.1756 | 0.6845 | 0.7854 | 0.9633 | [0.0, 0.44063089620853896, 0.9319015227980866, 0.9747420439658205, 0.9230841377589553, 0.9626774348954341, 0.9806204202647846, 0.824089995398513, 0.9682449901582629, 0.6269069221957562, 0.0, 0.7878031759942226, 0.7230044147476434, 0.6870255399578931, 0.7273836360818303, 0.7465091396254238, 0.25750268946841265, 0.5202245077135331, 0.8455619310735664, 0.7623883906475817] | [nan, 0.9684613146338701, 0.9659761462687484, 0.985573907589379, 0.969242630837417, 0.9846717514218756, 0.9904148523034052, 0.8905935109009535, 0.9873657317056209, 0.6548320724256909, 0.0, 0.8321711888159841, 0.7743769968051119, 0.7167465941354711, 0.7672955669410517, 0.8485288256155018, 0.28777231930020936, 0.5469380130325374, 0.8955527628765427, 0.8564788043236511] | | 0.0908 | 14.0 | 11018 | 0.1677 | 0.6922 | 0.7956 | 0.9641 | [0.0, 0.4710389646938612, 0.9277225664822271, 0.9753445134184554, 0.9250469473155007, 0.9640090632546157, 0.9817333061419466, 0.8297056239192101, 0.970059681920668, 0.647379308685926, 0.0, 0.79693329490141, 0.7458423929012165, 0.6895638439061885, 0.7486849253355593, 0.7520096317485606, 0.30687537928818764, 0.49287677819238446, 0.848826224760963, 0.7700556938025832] | [nan, 0.9666066204807101, 0.9697912533607226, 0.9863864033340946, 0.9658514745108883, 0.9826761492096202, 0.9913739259863396, 0.9020659030037601, 0.9838249561044068, 0.6815485423063531, 0.0, 0.8412997732853904, 0.8109904153354632, 0.7185046709734403, 0.8232134618653327, 0.8490091673735526, 0.35638330949567815, 0.5181697306682197, 0.9016768578609746, 0.8671989680174369] | | 0.0584 | 15.0 | 11805 | 0.1610 | 0.6952 | 
0.8014 | 0.9648 | [0.0, 0.47153295365063086, 0.9293854681828234, 0.9766069961659746, 0.927007550222462, 0.9649404794739765, 0.9824606440795911, 0.8340592613982738, 0.9706739467997174, 0.653761891900003, 0.0, 0.8080046149867717, 0.75033588410538, 0.6921465280057791, 0.7522124809345331, 0.7548461579766955, 0.3057219434101416, 0.5087799410519325, 0.84829211455404, 0.7730356409704979] | [nan, 0.9722884260421271, 0.9720560851996344, 0.9881427437833682, 0.9650114633107388, 0.9828538231066912, 0.9897027752946145, 0.9071521422402136, 0.9848998109819413, 0.6895634832705517, 0.0, 0.8704126720181029, 0.8207667731629393, 0.7189631369929214, 0.8238982104266324, 0.8620090549531412, 0.3522998155172771, 0.5387075151368637, 0.9081104400345125, 0.8794092789466661] | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu102 - Datasets 2.2.2 - Tokenizers 0.12.1
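For reference, a hypothetical inference sketch using the `transformers` SegFormer classes; the hub id and the image path are placeholders, not values from this card.

```python
import torch
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# "<this-checkpoint>" and the image path are placeholders.
feature_extractor = SegformerFeatureExtractor.from_pretrained("<this-checkpoint>")
model = SegformerForSemanticSegmentation.from_pretrained("<this-checkpoint>")

image = Image.open("warehouse_scene.png")
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits          # (1, num_labels, H/4, W/4)

pred_classes = logits.argmax(dim=1)[0]       # per-pixel class ids at reduced resolution
```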
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: en tags: - exbert license: apache-2.0 datasets: - bookcorpus - wikipedia --- # DistilBERT base model (uncased) This model is a distilled version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was introduced in [this paper](https://arxiv.org/abs/1910.01108). The code for the distillation process can be found [here](https://github.com/huggingface/transformers/tree/master/examples/distillation). This model is uncased: it does not make a difference between english and English. ## Model description DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts using the BERT base model. More precisely, it was pretrained with three objectives: - Distillation loss: the model was trained to return the same probabilities as the BERT base model. - Masked language modeling (MLM): this is part of the original training loss of the BERT base model. When taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Cosine embedding loss: the model was also trained to generate hidden states as close as possible as the BERT base model. This way, the model learns the same inner representation of the English language than its teacher model, while being faster for inference or downstream tasks. ## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=distilbert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased') >>> unmasker("Hello I'm a [MASK] model.") [{'sequence': "[CLS] hello i'm a role model. [SEP]", 'score': 0.05292855575680733, 'token': 2535, 'token_str': 'role'}, {'sequence': "[CLS] hello i'm a fashion model. [SEP]", 'score': 0.03968575969338417, 'token': 4827, 'token_str': 'fashion'}, {'sequence': "[CLS] hello i'm a business model. [SEP]", 'score': 0.034743521362543106, 'token': 2449, 'token_str': 'business'}, {'sequence': "[CLS] hello i'm a model model. [SEP]", 'score': 0.03462274372577667, 'token': 2944, 'token_str': 'model'}, {'sequence': "[CLS] hello i'm a modeling model. 
[SEP]", 'score': 0.018145186826586723, 'token': 11643, 'token_str': 'modeling'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import DistilBertTokenizer, DistilBertModel tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') model = DistilBertModel.from_pretrained("distilbert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import DistilBertTokenizer, TFDistilBertModel tokenizer = DistilBertTokenizer.from_pretrained('distilbert-base-uncased') model = TFDistilBertModel.from_pretrained("distilbert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. It also inherits some of [the bias of its teacher model](https://huggingface.co/bert-base-uncased#limitations-and-bias). ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='distilbert-base-uncased') >>> unmasker("The White man worked as a [MASK].") [{'sequence': '[CLS] the white man worked as a blacksmith. [SEP]', 'score': 0.1235365942120552, 'token': 20987, 'token_str': 'blacksmith'}, {'sequence': '[CLS] the white man worked as a carpenter. [SEP]', 'score': 0.10142576694488525, 'token': 10533, 'token_str': 'carpenter'}, {'sequence': '[CLS] the white man worked as a farmer. [SEP]', 'score': 0.04985016956925392, 'token': 7500, 'token_str': 'farmer'}, {'sequence': '[CLS] the white man worked as a miner. [SEP]', 'score': 0.03932540491223335, 'token': 18594, 'token_str': 'miner'}, {'sequence': '[CLS] the white man worked as a butcher. [SEP]', 'score': 0.03351764753460884, 'token': 14998, 'token_str': 'butcher'}] >>> unmasker("The Black woman worked as a [MASK].") [{'sequence': '[CLS] the black woman worked as a waitress. [SEP]', 'score': 0.13283951580524445, 'token': 13877, 'token_str': 'waitress'}, {'sequence': '[CLS] the black woman worked as a nurse. [SEP]', 'score': 0.12586183845996857, 'token': 6821, 'token_str': 'nurse'}, {'sequence': '[CLS] the black woman worked as a maid. [SEP]', 'score': 0.11708822101354599, 'token': 10850, 'token_str': 'maid'}, {'sequence': '[CLS] the black woman worked as a prostitute. [SEP]', 'score': 0.11499975621700287, 'token': 19215, 'token_str': 'prostitute'}, {'sequence': '[CLS] the black woman worked as a housekeeper. [SEP]', 'score': 0.04722772538661957, 'token': 22583, 'token_str': 'housekeeper'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data DistilBERT pretrained on the same data as BERT, which is [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. 
Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the combined length of the two "sentences" is less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 8 16 GB V100 GPUs for 90 hours. See the [training code](https://github.com/huggingface/transformers/tree/master/examples/distillation) for all hyperparameter details. ## Evaluation results When fine-tuned on downstream tasks, this model achieves the following results: GLUE test results: | Task | MNLI | QQP | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE | |:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:| | | 82.2 | 88.5 | 89.2 | 91.3 | 51.3 | 85.8 | 87.5 | 59.9 | ### BibTeX entry and citation info ```bibtex @article{Sanh2019DistilBERTAD, title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter}, author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf}, journal={ArXiv}, year={2019}, volume={abs/1910.01108} } ``` <a href="https://huggingface.co/exbert/?model=distilbert-base-uncased"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - vision - image-segmentation - generated_from_trainer model-index: - name: segformer-b0-finetuned-warehouse-part-1-V2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # segformer-b0-finetuned-warehouse-part-1-V2 This model is a fine-tuned version of [nvidia/mit-b5](https://huggingface.co/nvidia/mit-b5) on the jakka/warehouse_part1 dataset. It achieves the following results on the evaluation set: - Loss: 0.2737 - Mean Iou: 0.7224 - Mean Accuracy: 0.8119 - Overall Accuracy: 0.9668 - Per Category Iou: [0.0, 0.9392313580983768, 0.9322932027111482, 0.9772249946988713, 0.8749950826812657, 0.9591121585348171, 0.9803780030124933, 0.8554852055380204, 0.9661475962866876, 0.5609089467958914, 0.0, 0.8095003013989066, 0.7113799121381718, 0.8927260044840537, 0.6133653057361015, 0.8420100377966416, 0.33841086205511367, 0.553361761785151, 0.8141592920353983, 0.8270316181708587] - Per Category Accuracy: [nan, 0.9727824725573769, 0.9676994291705018, 0.9882968957337019, 0.9679484011220059, 0.9772700079950366, 0.9882492205666621, 0.9252107983136135, 0.9825945071781523, 0.6062795795494159, 0.0, 0.894776445179671, 0.7968855332344613, 0.9522349792248335, 0.6544510171692397, 0.9276157710790738, 0.42203029817249116, 0.5863404454740788, 0.8963814834175524, 0.9193914381006046] ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Per Category Iou | Per Category Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------------:|:----------------:|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 0.7008 | 1.0 | 787 | 0.2473 | 0.5595 | 0.6448 | 0.9325 | [0.0, 0.8572456184869756, 0.8403481284744914, 0.9524827531570127, 0.7992052152702355, 0.9196710216877864, 0.9471503664300267, 0.6193304552041781, 0.9133086982125345, 0.17558267725303728, 0.0, 0.6344520667741999, 0.3360920970752956, 0.7642426437536942, 0.510575871022846, 0.6056988833269157, 0.021209386281588447, 0.27355691497341356, 0.6138181818181818, 0.40645271873846317] | [nan, 0.9155298033269351, 0.9463379226245591, 0.978836265135544, 0.9240214201112357, 0.9448111967681583, 0.9643622308798924, 0.6930912552699579, 0.9497575640760723, 0.18632531152693993, 0.0, 
0.7500919033177098, 0.36409599568558715, 0.8900647437729461, 0.5728964730263244, 0.6549871668851026, 0.02166159025328631, 0.2902301645548354, 0.7353197421153511, 0.4694729147312794] | | 0.1321 | 2.0 | 1574 | 0.2331 | 0.6221 | 0.7115 | 0.9457 | [0.0, 0.8970560279823083, 0.8791120244598839, 0.9603620467193393, 0.8160602187615088, 0.934767875213888, 0.9616837752836253, 0.7419391385825133, 0.9351874201394574, 0.26717521084051926, 0.0, 0.6985475965645938, 0.43481867741170893, 0.8134984418163408, 0.5459611126448698, 0.7401712453141447, 0.13175924760380514, 0.355121624272543, 0.7060811650388926, 0.6229231428877693] | [nan, 0.951233770160613, 0.9409053657605947, 0.9843213861494523, 0.9219686102230917, 0.9665968250506056, 0.9829729958024298, 0.8238168094655243, 0.9620596605954946, 0.29986351309033543, 0.0, 0.8030913978494624, 0.49467439665633006, 0.909599171191769, 0.5931253087796156, 0.8208142201834863, 0.14682189804424495, 0.3841705499014086, 0.8251147122030551, 0.70800907664895] | | 0.1085 | 3.0 | 2361 | 0.2457 | 0.6542 | 0.7530 | 0.9521 | [0.0, 0.9079405116712079, 0.8959028018194484, 0.9654330936322201, 0.8358564096747072, 0.942169826126924, 0.967131589172387, 0.7785683188874377, 0.942506044201895, 0.3544242514524058, 0.0, 0.7247706422018348, 0.5044915351836923, 0.8273089178892802, 0.5630444261421442, 0.7399785788281565, 0.21738423517169614, 0.46725284186024263, 0.7218755768875762, 0.7280122150607375] | [nan, 0.9545620491089126, 0.9497321958018098, 0.9837544714508515, 0.9402501375924134, 0.9686463320401577, 0.9809467909731419, 0.8694886440908473, 0.9735407105395524, 0.3936199755387097, 0.0, 0.8558151824280856, 0.5906026695429419, 0.9157369138435157, 0.6097401660523865, 0.8630406290956749, 0.2679143956396281, 0.5182902566913956, 0.8517163268862171, 0.8205229733639949] | | 0.8409 | 4.0 | 3148 | 0.2533 | 0.6749 | 0.7760 | 0.9559 | [0.0, 0.912375840411698, 0.904072054206276, 0.9676067299522242, 0.900289256120933, 0.9448264254043457, 0.9706472863960092, 0.7942658684379895, 0.9498265874428659, 0.5556284571729604, 0.0, 0.743214707471828, 0.529188361408882, 0.7269154778675782, 0.5697874335729916, 0.7702618169892564, 0.2288491765188273, 0.5089612784265519, 0.757448678510892, 0.7646070737475812] | [nan, 0.9601569621727435, 0.9525397945710891, 0.9830820784511696, 0.9462795897530819, 0.9732812778343284, 0.9810361205428978, 0.8895280837753298, 0.9743959070958451, 0.6854951638729194, 0.0, 0.8531327543424317, 0.5823783200755023, 0.9177828280607646, 0.6184135395216047, 0.8657506006989952, 0.26841535748637385, 0.5491586570344761, 0.8759801359121798, 0.8665306184609293] | | 0.0655 | 5.0 | 3935 | 0.2164 | 0.6815 | 0.7909 | 0.9577 | [0.0, 0.9195724102825147, 0.8817887152896982, 0.9692666162636345, 0.90446655617651, 0.9477266300807918, 0.972197851990263, 0.8006212298550464, 0.9526181996158507, 0.48675750740382695, 0.0, 0.7544064333927534, 0.589975775752682, 0.8568833610473964, 0.5739430151581254, 0.7804109001873066, 0.2738491187715644, 0.46180522107696753, 0.7493122891746226, 0.754828899421902] | [nan, 0.9629768162749704, 0.9511904548979574, 0.9855793956741679, 0.9532853326979632, 0.9705567416728694, 0.9856702233410021, 0.9070277437780497, 0.9761803883026475, 0.7497090051817757, 0.0, 0.8653903593419723, 0.689564513954429, 0.9349779882164135, 0.6119830537374903, 0.9072670926168632, 0.3530779095864059, 0.5086786980626564, 0.8741215078120462, 0.8391483788434887] | | 0.0568 | 6.0 | 4722 | 0.2803 | 0.6876 | 0.7839 | 0.9591 | [0.0, 0.9166100071412383, 0.913602419181271, 0.9710201737288663, 0.8563050555469198, 
0.9497657746314072, 0.9730697054916811, 0.8143688646719719, 0.9549812903957364, 0.460486150973965, 0.0, 0.7634781269254467, 0.6136748147716002, 0.8542174198928293, 0.5922937831600485, 0.8066394260877113, 0.28399126278134795, 0.5207639813581891, 0.7629174644376197, 0.7438457521999924] | [nan, 0.9601927982852421, 0.9660710264704008, 0.982455068550298, 0.957830657460364, 0.9688535013815731, 0.9819961506837456, 0.893842649258806, 0.9749506995826178, 0.5071640856263331, 0.0, 0.8540977391783844, 0.7091141971147364, 0.9317785850902456, 0.653052819349169, 0.8880378986456968, 0.35953029817249116, 0.553305686470427, 0.862098507289307, 0.8895268263710157] | | 0.8994 | 7.0 | 5509 | 0.2743 | 0.6868 | 0.7764 | 0.9606 | [0.0, 0.92180556388016, 0.9171201062365498, 0.9721111956032598, 0.8587950800137758, 0.9513526631552707, 0.9756092701000854, 0.819792597945916, 0.9576544961199075, 0.4512109977539036, 0.0, 0.7723053199691596, 0.61351217088922, 0.8696959538394335, 0.5947007494875557, 0.8068989910272162, 0.2400942828140323, 0.49048112386556714, 0.772383338067815, 0.7496112574696395] | [nan, 0.9644998510561574, 0.9609472275076806, 0.9854828942497743, 0.9565172529563908, 0.9753485051500238, 0.9840922427646661, 0.8947674418604651, 0.974328764760461, 0.49258184783186704, 0.0, 0.8630410807830162, 0.6660374814615073, 0.9410600831006661, 0.6446391486645419, 0.8876351572739187, 0.2796369028534787, 0.5232773027508334, 0.8685891851077423, 0.8883389427836073] | | 0.0757 | 8.0 | 6296 | 0.2245 | 0.7038 | 0.8009 | 0.9625 | [0.0, 0.9246349181813107, 0.9204571437331909, 0.9735757462990084, 0.8677796689121399, 0.9529629595462734, 0.9762280475446855, 0.8249549577060494, 0.9591099123245741, 0.6276133447390932, 0.0, 0.7755030368136181, 0.6490189248809939, 0.8729206918730364, 0.598100700980074, 0.8000277974172574, 0.27374031814774713, 0.5049971433066432, 0.7770387696167466, 0.7981819415236415] | [nan, 0.964623037692871, 0.9637122903759715, 0.9863849456780516, 0.9537638293913148, 0.974798022498043, 0.985726579790157, 0.9184958520331837, 0.980103295010109, 0.7586190597174544, 0.0, 0.8624896608767576, 0.7536739921801268, 0.9379994558884956, 0.6446181625809385, 0.9037175076452599, 0.32931227957678744, 0.5392729877180727, 0.863477957832375, 0.8959383518876689] | | 0.0638 | 9.0 | 7083 | 0.2660 | 0.7091 | 0.8064 | 0.9632 | [0.0, 0.9247942993361187, 0.9227547653133065, 0.9737952169757659, 0.8675395458562903, 0.954005651357167, 0.9771936329793919, 0.832432130071599, 0.960664758331238, 0.6439555818513429, 0.0, 0.7800093558353167, 0.6503190735050816, 0.8771838558892437, 0.6000063410406786, 0.8135397086825815, 0.29345229389108285, 0.5278915956856804, 0.7979207701237885, 0.7849771726504039] | [nan, 0.9696983271254734, 0.9626331855239437, 0.9865491477141318, 0.9580933383611586, 0.9736782563602464, 0.9877136372491695, 0.9107507139942881, 0.9774734570720269, 0.778129006717992, 0.0, 0.8715651135005974, 0.7419441822839423, 0.9522322311869326, 0.6453719127503574, 0.9070076998689384, 0.36183472266752165, 0.5638987382066087, 0.8882354649474357, 0.8850494190030915] | | 0.1028 | 10.0 | 7870 | 0.2753 | 0.7045 | 0.7986 | 0.9632 | [0.0, 0.9310677916035094, 0.9231154731835156, 0.9742966471140867, 0.8659672807905657, 0.9548025101399095, 0.9761885400996432, 0.8359586760218701, 0.9606324687638941, 0.536304571449891, 0.0, 0.7861687315154533, 0.6648749707875672, 0.8782393648813203, 0.6028230645967004, 0.8034017821150734, 0.2798240884275797, 0.5292981433685788, 0.7976529535864979, 0.7897882016975595] | [nan, 0.9671696414372969, 0.9640722977320454, 
0.9864307028133905, 0.9566418983913256, 0.9766712626661613, 0.984078186494131, 0.917516659866721, 0.9804665003157427, 0.5945275248601157, 0.0, 0.8886304108078301, 0.7671565322906836, 0.945889759711566, 0.6500072139662386, 0.9114992900830057, 0.33277893555626803, 0.5621391244374099, 0.8784050647615729, 0.9097665351872439] | | 0.098 | 11.0 | 8657 | 0.2029 | 0.7052 | 0.8014 | 0.9640 | [0.0, 0.9288737885707921, 0.9265083379180753, 0.9747097980123621, 0.8738478537660755, 0.9558379241305062, 0.9781696214462526, 0.8391837240652649, 0.9626716931455067, 0.507780252899168, 0.0, 0.7878061172645057, 0.6769843155893536, 0.8815102118136605, 0.6056046400027283, 0.8269347543218291, 0.3132485690006253, 0.5154277002618235, 0.7927511930865472, 0.7569567975718071] | [nan, 0.9711631282238503, 0.964815472153087, 0.9853689377873769, 0.9652020663968313, 0.9754185940822899, 0.9867780413729902, 0.9206854345165238, 0.9811350296034029, 0.5495104787677182, 0.0, 0.8906350519253745, 0.7681677227989753, 0.9430888220810342, 0.65217140383783, 0.9110078090869376, 0.3914916639948702, 0.5500605696196935, 0.8924609397688331, 0.9267167202229566] | | 0.0734 | 12.0 | 9444 | 0.2171 | 0.7126 | 0.8001 | 0.9648 | [0.0, 0.9309643707918894, 0.9277494647914695, 0.9750904306170505, 0.8777832954332417, 0.9566409475731096, 0.9780693213049435, 0.8436550838167809, 0.9635515941347027, 0.527304314900299, 0.0, 0.7909202018197202, 0.6909584834347133, 0.8836639196984207, 0.6084447805077513, 0.8287813112544289, 0.31069205419260343, 0.5403587067765045, 0.7955642033577429, 0.8211277996631356] | [nan, 0.9680901815771025, 0.9655377799057193, 0.9852963747008175, 0.9662340833391586, 0.9756774116913669, 0.9890014280908129, 0.9132224942200462, 0.9813789993824062, 0.5595195188097869, 0.0, 0.8697959746346843, 0.7887285964675745, 0.9477302580957196, 0.6557731404362482, 0.9149260048055919, 0.374058191728118, 0.5695666398450833, 0.8786809548701865, 0.8983598068927706] | | 0.0839 | 13.0 | 10231 | 0.2606 | 0.7139 | 0.8056 | 0.9651 | [0.0, 0.932934590872574, 0.928599894716927, 0.9759876131918817, 0.8695983139625728, 0.9571779321732448, 0.979228463067019, 0.8446447574729073, 0.9630766038435438, 0.47072541703248466, 0.0, 0.7968195631480623, 0.6967972782731112, 0.8867456411969523, 0.6076684496270689, 0.8274634197517912, 0.3560522933191209, 0.5582305522639651, 0.8036840005319856, 0.8219356251968073] | [nan, 0.970161956830923, 0.9673467595439784, 0.9869340313021197, 0.9654732145230638, 0.9756083312329464, 0.9874815117348184, 0.9121141030871753, 0.9832381474966617, 0.50686275089071, 0.0, 0.8991361088135281, 0.8007954698665228, 0.9482970409127882, 0.6487891466970965, 0.9152673110528615, 0.4551538954793203, 0.5915043371384613, 0.8774612301794738, 0.914289630385453] | | 0.0797 | 14.0 | 11018 | 0.2504 | 0.7153 | 0.8044 | 0.9655 | [0.0, 0.9353593794015038, 0.9288667661318105, 0.9762064564453578, 0.8718886319160292, 0.9576685946960725, 0.9788546612617008, 0.8472608735210976, 0.9642969355331718, 0.5361721760842425, 0.0, 0.8004189668257286, 0.696640611014977, 0.8853084044449696, 0.6099045788314064, 0.8344863725117123, 0.3254310344827586, 0.5323734971095841, 0.8050435956126539, 0.8204823185898129] | [nan, 0.9668112803123117, 0.9681903691382433, 0.9879581433175818, 0.9650443397090228, 0.9762644155033261, 0.9866578405548627, 0.9181626546987625, 0.9814820281384267, 0.5836381147080894, 0.0, 0.8844717856814631, 0.7870432789537549, 0.9470982093785038, 0.6547561898016377, 0.9131239078200087, 0.39335524206476435, 0.5610603662472479, 0.8835162920369403, 0.9243561823249014] | | 0.0606 | 
15.0 | 11805 | 0.2363 | 0.7209 | 0.8122 | 0.9661 | [0.0, 0.9354450021238048, 0.9300759788666999, 0.9766100423179009, 0.8739351769905989, 0.9580569741305669, 0.9795622398211299, 0.8496875639431477, 0.9646763306438436, 0.6043151650835981, 0.0, 0.8018012422360249, 0.7004677380666826, 0.889289794511031, 0.610767874342205, 0.8325289843013258, 0.33953698039089414, 0.5566040090865972, 0.7993623498974272, 0.8161583186067531] | [nan, 0.966786642984969, 0.965287953144928, 0.9879603875367537, 0.9664012618135025, 0.9766460508200225, 0.9889968302453108, 0.9177070583435333, 0.9825186826442273, 0.650711681743251, 0.0, 0.8897849462365591, 0.7874477551570715, 0.9497445698771078, 0.655411130494091, 0.9220183486238532, 0.42261141391471624, 0.5914689680174724, 0.8883080676075972, 0.9213864733563804] | | 0.0532 | 16.0 | 12592 | 0.2531 | 0.7201 | 0.8074 | 0.9662 | [0.0, 0.9383203952011292, 0.9288414046194093, 0.9769141389017822, 0.8756205335515858, 0.9582358666094781, 0.979632260873732, 0.8522102747909199, 0.9655114623669192, 0.6115704722763623, 0.0, 0.8053745416448402, 0.7045095417527653, 0.8906375387790608, 0.6007837805741991, 0.8399368744136342, 0.33049747893639037, 0.5151462046865611, 0.8091001625973271, 0.8195206947575124] | [nan, 0.9678438083036752, 0.9684728717259394, 0.9879746009248427, 0.9684402878462824, 0.9766889829923047, 0.9883229174617107, 0.9215762273901809, 0.9820408723178519, 0.6655775287006565, 0.0, 0.8831104677878872, 0.7814480248078738, 0.9439503319629784, 0.6414396453351872, 0.9228033529925732, 0.40323420968259055, 0.5458428019417647, 0.8887436835685659, 0.9025173994487001] | | 0.0862 | 17.0 | 13379 | 0.2458 | 0.7201 | 0.8087 | 0.9665 | [0.0, 0.9368370402512427, 0.9309393106006786, 0.9769932787053442, 0.8747985979138234, 0.95879411739136, 0.9800136137207117, 0.8526248910947767, 0.9651962916423883, 0.5741264468224503, 0.0, 0.8066815029500052, 0.7084107667406031, 0.8910943581653369, 0.6137487567405265, 0.843379759286757, 0.32885159559677446, 0.5243792475829478, 0.8126121336965911, 0.8231331714477782] | [nan, 0.9768073159423666, 0.9678409097683983, 0.9877789798203552, 0.9673405331004518, 0.977145821644341, 0.9876622727465598, 0.9216680266557867, 0.9832398839363699, 0.6213226822336585, 0.0, 0.8952934013417885, 0.7966158824322502, 0.946850198957944, 0.6577528276561605, 0.9188715050240279, 0.4028735171529336, 0.5553570954877843, 0.887857931114596, 0.9137413764220337] | | 0.057 | 18.0 | 14166 | 0.2807 | 0.7169 | 0.8024 | 0.9665 | [0.0, 0.9391255338059006, 0.9316246290236013, 0.9771178536356643, 0.8736374236266327, 0.9587095139235466, 0.9802820999385629, 0.8534991833144867, 0.965491782119557, 0.5173244886677723, 0.0, 0.8079528780010615, 0.7036495460915129, 0.8919428858888571, 0.6128251272343798, 0.8423749359527112, 0.3030539267193167, 0.5387041043962495, 0.8154057368308808, 0.8249477907232359] | [nan, 0.9703254590941974, 0.967385397276143, 0.9883638482723315, 0.9660909281555922, 0.9783173801174915, 0.987878896953218, 0.9238406092751258, 0.9828454227159885, 0.5529433313441302, 0.0, 0.8918872346291701, 0.7785492786841041, 0.9525571866687186, 0.6544903660759959, 0.9202435561380515, 0.3583279897403014, 0.5679750294005819, 0.8882935470755648, 0.9144114645995461] | | 0.27 | 19.0 | 14953 | 0.2799 | 0.7210 | 0.8089 | 0.9668 | [0.0, 0.9392661644355319, 0.932096490765189, 0.9772444850416163, 0.8748583460799624, 0.959030800837604, 0.9803660417493171, 0.8549763601588193, 0.9661359625948338, 0.5489573339508828, 0.0, 0.8082856800928263, 0.707609022556391, 0.8930480213758131, 0.6125057936760998, 
0.8439663143164156, 0.3240623821315535, 0.5560068921314832, 0.813374539715939, 0.8289533147998521] | [nan, 0.9703971313191945, 0.9680462515437895, 0.9881404237858805, 0.9683475421909045, 0.9777759016962746, 0.988822374850258, 0.9210152318781449, 0.9816258632275899, 0.588252672130082, 0.0, 0.8922778237294366, 0.7930430093029527, 0.9508458460659089, 0.6517263239814098, 0.9221548711227611, 0.3959802821417121, 0.5906377936742327, 0.8980803856653308, 0.9218433516592297] | | 0.0369 | 20.0 | 15740 | 0.2737 | 0.7224 | 0.8119 | 0.9668 | [0.0, 0.9392313580983768, 0.9322932027111482, 0.9772249946988713, 0.8749950826812657, 0.9591121585348171, 0.9803780030124933, 0.8554852055380204, 0.9661475962866876, 0.5609089467958914, 0.0, 0.8095003013989066, 0.7113799121381718, 0.8927260044840537, 0.6133653057361015, 0.8420100377966416, 0.33841086205511367, 0.553361761785151, 0.8141592920353983, 0.8270316181708587] | [nan, 0.9727824725573769, 0.9676994291705018, 0.9882968957337019, 0.9679484011220059, 0.9772700079950366, 0.9882492205666621, 0.9252107983136135, 0.9825945071781523, 0.6062795795494159, 0.0, 0.894776445179671, 0.7968855332344613, 0.9522349792248335, 0.6544510171692397, 0.9276157710790738, 0.42203029817249116, 0.5863404454740788, 0.8963814834175524, 0.9193914381006046] | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
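### Example inference (illustrative sketch)

The card above does not include a usage snippet, so the following is only a hedged sketch of how a SegFormer semantic-segmentation checkpoint like this one could typically be loaded with `transformers`; the repository id and image path are assumptions, not values taken from the original card.

```python
from PIL import Image
from transformers import SegformerFeatureExtractor, SegformerForSemanticSegmentation

# Hypothetical repository id - replace with the actual checkpoint name.
checkpoint = "jakka/segformer-b0-finetuned-warehouse-part-1-V2"

feature_extractor = SegformerFeatureExtractor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("warehouse_scene.jpg")  # any RGB image from the target domain
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)

# Logits have shape (batch, num_labels, height/4, width/4); argmax gives a class id per pixel.
segmentation = outputs.logits.argmax(dim=1)
```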
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-large-robust-ft-timit results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-robust-ft-timit This model is a fine-tuned version of [facebook/wav2vec2-large-robust](https://huggingface.co/facebook/wav2vec2-large-robust) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2768 - Wer: 0.2321 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 6.6175 | 1.0 | 500 | 3.3025 | 1.0 | | 3.0746 | 2.01 | 1000 | 2.9598 | 1.0 | | 1.967 | 3.01 | 1500 | 0.6760 | 0.5607 | | 0.7545 | 4.02 | 2000 | 0.4500 | 0.4567 | | 0.5415 | 5.02 | 2500 | 0.3702 | 0.3882 | | 0.4445 | 6.02 | 3000 | 0.3421 | 0.3584 | | 0.3601 | 7.03 | 3500 | 0.2947 | 0.3096 | | 0.3098 | 8.03 | 4000 | 0.2740 | 0.2894 | | 0.2606 | 9.04 | 4500 | 0.2725 | 0.2787 | | 0.238 | 10.04 | 5000 | 0.2549 | 0.2617 | | 0.2142 | 11.04 | 5500 | 0.2485 | 0.2530 | | 0.1787 | 12.05 | 6000 | 0.2683 | 0.2514 | | 0.1652 | 13.05 | 6500 | 0.2559 | 0.2476 | | 0.1569 | 14.06 | 7000 | 0.2777 | 0.2470 | | 0.1443 | 15.06 | 7500 | 0.2661 | 0.2431 | | 0.1335 | 16.06 | 8000 | 0.2717 | 0.2422 | | 0.1291 | 17.07 | 8500 | 0.2672 | 0.2428 | | 0.1192 | 18.07 | 9000 | 0.2684 | 0.2395 | | 0.1144 | 19.08 | 9500 | 0.2770 | 0.2411 | | 0.1052 | 20.08 | 10000 | 0.2831 | 0.2379 | | 0.1004 | 21.08 | 10500 | 0.2847 | 0.2375 | | 0.1053 | 22.09 | 11000 | 0.2851 | 0.2360 | | 0.1005 | 23.09 | 11500 | 0.2807 | 0.2361 | | 0.0904 | 24.1 | 12000 | 0.2764 | 0.2346 | | 0.0876 | 25.1 | 12500 | 0.2774 | 0.2325 | | 0.0883 | 26.1 | 13000 | 0.2768 | 0.2313 | | 0.0848 | 27.11 | 13500 | 0.2840 | 0.2307 | | 0.0822 | 28.11 | 14000 | 0.2812 | 0.2316 | | 0.09 | 29.12 | 14500 | 0.2768 | 0.2321 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.8.2+cu111 - Datasets 1.17.0 - Tokenizers 0.11.6
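### Example inference (illustrative sketch)

Since the card does not show how to run the model, here is a minimal, hedged sketch of CTC decoding with a wav2vec2 checkpoint fine-tuned like this one; the repository id and audio file below are placeholders, not values from the original card.

```python
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder repository id - replace with the actual fine-tuned checkpoint.
checkpoint = "wav2vec2-large-robust-ft-timit"

processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# wav2vec2 expects 16 kHz mono audio.
speech, _ = librosa.load("example.wav", sr=16_000)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```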
AnonymousSub/rule_based_hier_quadruplet_0.1_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- tags: - autotrain - tabular - classification - tabular-classification datasets: - rajistics/autotrain-data-Adult co2_eq_emissions: 38.42484725553464 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 934630783 - CO2 Emissions (in grams): 38.42484725553464 ## Validation Metrics - Loss: 0.2984429822985684 - Accuracy: 0.8628221244500315 - Precision: 0.7873263888888888 - Recall: 0.5908794788273616 - AUC: 0.9182195921357326 - F1: 0.6751023446222553 ## Usage ```python import json import joblib import pandas as pd model = joblib.load('model.joblib') config = json.load(open('config.json')) features = config['features'] data = pd.read_csv("data.csv") data = data[features] data.columns = ["feat_" + str(col) for col in data.columns] predictions = model.predict(data) # or model.predict_proba(data) ```
AnonymousSub/rule_based_hier_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - un_multi metrics: - bleu model-index: - name: opus-mt-en-ar-finetuned-en-to-ar results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: un_multi type: un_multi args: ar-en metrics: - name: Bleu type: bleu value: 64.6767 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # opus-mt-en-ar-finetuned-en-to-ar This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-ar](https://huggingface.co/Helsinki-NLP/opus-mt-en-ar) on the un_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.8133 - Bleu: 64.6767 - Gen Len: 17.595 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 50 | 0.7710 | 64.3416 | 17.4 | | No log | 2.0 | 100 | 0.7569 | 63.9546 | 17.465 | | No log | 3.0 | 150 | 0.7570 | 64.7484 | 17.385 | | No log | 4.0 | 200 | 0.7579 | 65.4073 | 17.305 | | No log | 5.0 | 250 | 0.7624 | 64.8939 | 17.325 | | No log | 6.0 | 300 | 0.7696 | 65.1257 | 17.45 | | No log | 7.0 | 350 | 0.7747 | 65.527 | 17.395 | | No log | 8.0 | 400 | 0.7791 | 65.1357 | 17.52 | | No log | 9.0 | 450 | 0.7900 | 65.3812 | 17.415 | | 0.3982 | 10.0 | 500 | 0.7925 | 65.7346 | 17.39 | | 0.3982 | 11.0 | 550 | 0.7951 | 65.1267 | 17.62 | | 0.3982 | 12.0 | 600 | 0.8040 | 64.6874 | 17.495 | | 0.3982 | 13.0 | 650 | 0.8069 | 64.7788 | 17.52 | | 0.3982 | 14.0 | 700 | 0.8105 | 64.6701 | 17.585 | | 0.3982 | 15.0 | 750 | 0.8120 | 64.7111 | 17.58 | | 0.3982 | 16.0 | 800 | 0.8133 | 64.6767 | 17.595 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
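### Example inference (illustrative sketch)

As a hedged illustration only (not part of the original card), a Marian-based checkpoint such as this one can usually be used for translation as follows; the repository id is assumed.

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed repository id - replace with the actual fine-tuned checkpoint.
checkpoint = "opus-mt-en-ar-finetuned-en-to-ar"

tokenizer = MarianTokenizer.from_pretrained(checkpoint)
model = MarianMTModel.from_pretrained(checkpoint)

batch = tokenizer(["The United Nations was founded in 1945."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```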
AnonymousSub/rule_based_hier_triplet_0.1_epochs_1_shard_1_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python import gym # `load_from_hub` and `evaluate_agent` are helper functions defined in the Hugging Face Deep RL course notebook model = load_from_hub(repo_id="arampacha/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
AnonymousSub/rule_based_only_classfn_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: en thumbnail: http://www.huggingtweets.com/gretathunberg/1663110082774/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1459213153301053442/rL5hhpAI_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Greta Thunberg</div> <div style="text-align: center; font-size: 14px;">@gretathunberg</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Greta Thunberg. | Data | Greta Thunberg | | --- | --- | | Tweets downloaded | 3300 | | Retweets | 2457 | | Short tweets | 28 | | Tweets kept | 815 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3g6d8tpo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @gretathunberg's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2iv3jq06) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2iv3jq06/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/gretathunberg') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnonymousSub/rule_based_only_classfn_twostage_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
2022-05-31T21:20:01Z
--- license: mit tags: - generated_from_keras_callback model-index: - name: my-awesome-model-3 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # my-awesome-model-3 This model is a fine-tuned version of [dbmdz/bert-base-italian-cased](https://huggingface.co/dbmdz/bert-base-italian-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.2061 - Validation Loss: 0.0632 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -811, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.2061 | 0.0632 | 0 | ### Framework versions - Transformers 4.18.0 - TensorFlow 2.8.1 - Datasets 2.2.2 - Tokenizers 0.11.0
AnonymousSub/rule_based_roberta_bert_quadruplet_epochs_1_shard_1_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
23
2022-05-31T22:14:05Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1644 - F1: 0.8617 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2891 | 1.0 | 715 | 0.1780 | 0.8288 | | 0.1471 | 2.0 | 1430 | 0.1627 | 0.8509 | | 0.0947 | 3.0 | 2145 | 0.1644 | 0.8617 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
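### Example inference (illustrative sketch)

The card does not include a usage snippet; a minimal, hedged sketch of named-entity recognition with a token-classification checkpoint of this kind might look as follows (the repository id is an assumption).

```python
from transformers import pipeline

# Assumed repository id - replace with the actual fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="xlm-roberta-base-finetuned-panx-de-fr",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```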
AnonymousSub/rule_based_roberta_bert_triplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-05-31T22:16:58Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion args: default metrics: - name: Accuracy type: accuracy value: 0.927 - name: F1 type: f1 value: 0.927055679622598 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2236 - Accuracy: 0.927 - F1: 0.9271 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8251 | 1.0 | 250 | 0.3264 | 0.9015 | 0.8981 | | 0.2534 | 2.0 | 500 | 0.2236 | 0.927 | 0.9271 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
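### Example inference (illustrative sketch)

For readers who want to try the classifier, here is a minimal, hedged usage sketch; the repository id is assumed rather than taken from the card.

```python
from transformers import pipeline

# Assumed repository id - replace with the actual fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-emotion",
    return_all_scores=True,  # return a score for every emotion label
)

print(classifier("I can't wait to see you again!"))
```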
AnonymousSub/rule_based_roberta_twostagequadruplet_hier_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: SENATOR results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb args: plain_text metrics: - name: Accuracy type: accuracy value: 0.916 - name: F1 type: f1 value: 0.9166666666666666 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SENATOR This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.2707 - Accuracy: 0.916 - F1: 0.9167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
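### Example inference (illustrative sketch)

The sections above are placeholders, so the following is only a hedged sketch of scoring a review with this binary sentiment classifier; the repository id is assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed repository id - replace with the actual checkpoint name.
checkpoint = "SENATOR"

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

inputs = tokenizer("A surprisingly moving film with great performances.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the two IMDB sentiment classes (negative / positive).
print(logits.softmax(dim=-1))
```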
AnonymousSub/rule_based_twostagetriplet_hier_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- language: - da tags: - climate change - climate-classifier - political quotes - klimabert --- # Identifying and Analysing political quotes from the Danish Parliament related to climate change using NLP **KlimaBERT** is a sequence classifier fine-tuned to predict whether political quotes are climate-related. When predicting the positive class 1, "climate-related", the model achieves an F1-score of 0.97, Precision of 0.97, and Recall of 0.97. The negative class, 0, is defined as "non-climate-related". KlimaBERT is fine-tuned using the pre-trained DaBERT-uncased model, on a training set of 1,000 manually labelled data points. The training set contains both political quotes and summaries of bills from the [Danish Parliament](https://www.ft.dk/). The model was created to identify political quotes related to climate change, and performs best on official texts from the Danish Parliament. ### Fine-tuning To fine-tune a model similar to KlimaBERT, follow the [fine-tuning notebooks](https://github.com/jonahank/Vote-Prediction-Model/tree/main/climate_classifier). ### References BERT: Devlin, J., M.-W. Chang, K. Lee, and K. Toutanova (2018). Bert: Pre-training of deep bidirectional transformers for language understanding. https://arxiv.org/abs/1810.04805 DaBERT: Certainly (2021). Certainly has trained the most advanced danish bert model to date. https://www.certainly.io/blog/danish-bert-model/. ### Acknowledgements The resources are created through the work of my Master's thesis, so I would like to thank my supervisors [Leon Derczynski](https://www.derczynski.com/itu/) and [Vedran Sekara](https://vedransekara.github.io/) for the great support throughout the project! And a HUGE thanks to [Gustav Gyrst](https://github.com/Gyrst) for great sparring and co-development of the tools you find in this repo. ### Contact For any further help, questions, comments etc. feel free to contact the author Jonathan Kristensen on [LinkedIn](https://www.linkedin.com/in/jonathan-kristensen-444a96104) or by creating a "discussion" on this model's page.
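### Example usage (illustrative sketch)

Since the card describes a sequence classifier but contains no inference code, here is a minimal, hypothetical usage sketch; the repository id and the Danish example sentence are assumptions for illustration only.

```python
from transformers import pipeline

# Assumed repository id for KlimaBERT - replace with the published checkpoint.
classifier = pipeline("text-classification", model="jonahank/KlimaBERT")

# Class 1 = "climate-related", class 0 = "non-climate-related".
quote = "Vi skal reducere CO2-udledningen med 70 procent inden 2030."
print(classifier(quote))
```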
AnonymousSub/specter-bert-model
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
2022-06-01T08:24:29Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: jiseong/mt5-small-finetuned-news-ab results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # jiseong/mt5-small-finetuned-news-ab This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.0174 - Validation Loss: 1.7411 - Epoch: 3 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 4.1124 | 2.0706 | 0 | | 2.4090 | 1.8742 | 1 | | 2.1379 | 1.7889 | 2 | | 2.0174 | 1.7411 | 3 | ### Framework versions - Transformers 4.19.2 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
AnonymousSub/specter-bert-model_copy
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: apache-2.0 language: - en - ar - zh - nl - fr - de - hi - in - it - ja - pt - ru - es - vi - multilingual datasets: - unicamp-dl/mmarco --- # Cross-Encoder for multilingual MS Marco This model was trained on the [MMARCO](https://hf.co/unicamp-dl/mmarco) dataset. It is a machine translated version of MS MARCO using Google Translate. It was translated to 14 languages. In our experiments, we observed that it also performs well for other languages. As a base model, we used the [multilingual MiniLMv2](https://huggingface.co/nreimers/mMiniLMv2-L12-H384-distilled-from-XLMR-Large) model. The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco) ## Usage with SentenceTransformers Usage is easy when you have [SentenceTransformers](https://www.sbert.net/) installed. Then, you can use the pre-trained models like this: ```python from sentence_transformers import CrossEncoder model = CrossEncoder('model_name') scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2'), ('Query', 'Paragraph3')]) ``` ## Usage with Transformers ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch model = AutoModelForSequenceClassification.from_pretrained('model_name') tokenizer = AutoTokenizer.from_pretrained('model_name') features = tokenizer(['How many people live in Berlin?', 'How many people live in Berlin?'], ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.', 'New York City is famous for the Metropolitan Museum of Art.'], padding=True, truncation=True, return_tensors="pt") model.eval() with torch.no_grad(): scores = model(**features).logits print(scores) ```
AnonymousSub/specter-bert-model_squad2.0
[ "pytorch", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: apache-2.0 --- # SSCI-BERT: A pretrained language model for social scientific text ## Introduction Research on social science texts needs the support of natural language processing tools. Pre-trained language models have greatly improved the accuracy of text mining on general texts. At present, there is an urgent need for a pre-trained language model specifically for the automatic processing of scientific texts in social science. We used the abstracts of social science research articles as the training set. Based on the deep language model framework of BERT, we constructed the [SSCI-BERT and SSCI-SciBERT](https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT) pre-trained language models with [transformers/run_mlm.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_mlm.py). We designed four downstream Text Classification tasks on different social scientific article corpora to verify the performance of the models. - SSCI-BERT and SSCI-SciBERT are trained on the abstracts of articles published in SSCI journals from 1986 to 2021. The training set involved in the experiment included a total of `503910614 words`. - Based on the idea of Domain-Adaptive Pretraining, `SSCI-BERT` and `SSCI-SciBERT` were obtained by continuing to train the BERT and SciBERT models, respectively, on a large amount of abstracts of scientific articles, yielding pre-trained models for the automatic processing of social science research texts. ## News - 2022-03-24: SSCI-BERT and SSCI-SciBERT have been released for the first time. ## How to use ### Huggingface Transformers The `from_pretrained` method of [Huggingface Transformers](https://github.com/huggingface/transformers) can be used to obtain the SSCI-BERT and SSCI-SciBERT models directly online. - SSCI-BERT ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-BERT-e2") model = AutoModel.from_pretrained("KM4STfulltext/SSCI-BERT-e2") ``` - SSCI-SciBERT ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2") model = AutoModel.from_pretrained("KM4STfulltext/SSCI-SciBERT-e2") ``` ### Download Models - The version of the model we provide is `PyTorch`. ### From Huggingface - Download directly through Huggingface's official website. - [KM4STfulltext/SSCI-BERT-e2](https://huggingface.co/KM4STfulltext/SSCI-BERT-e2) - [KM4STfulltext/SSCI-SciBERT-e2](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e2) - [KM4STfulltext/SSCI-BERT-e4](https://huggingface.co/KM4STfulltext/SSCI-BERT-e4) - [KM4STfulltext/SSCI-SciBERT-e4](https://huggingface.co/KM4STfulltext/SSCI-SciBERT-e4) ### From Google Drive We have put the model on Google Drive for users. 
| Model | DATASET(year) | Base Model | | ------------------------------------------------------------ | ------------- | ---------------------- | | [SSCI-BERT-e2](https://drive.google.com/drive/folders/1xEDnovlwGO2JxqCaf3rdjS2cB6DOxhj4?usp=sharing) | 1986-2021 | Bert-base-cased | | [SSCI-SciBERT-e2](https://drive.google.com/drive/folders/16DtIvnHvbrR_92MwgthRRsULW6An9te1?usp=sharing) (recommended) | 1986-2021 | Scibert-scivocab-cased | | [SSCI-BERT-e4](https://drive.google.com/drive/folders/1sr6Av8p904Jrjps37g7E8aj4HnAHXSxW?usp=sharing) | 1986-2021 | Bert-base-cased | | [SSCI-SciBERT-e4](https://drive.google.com/drive/folders/1ty-b4TIFu8FbilgC4VcI7Bgn_O5MDMVe?usp=sharing) | 1986-2021 | Scibert-scivocab-cased | ## Evaluation & Results - We use SSCI-BERT and SSCI-SciBERT to perform Text Classificationon different social science research corpus. The experimental results are as follows. Relevant data sets are available for download in the **Verification task datasets** folder of this project. #### JCR Title Classify Dataset | Model | accuracy | macro avg | weighted avg | | ---------------------- | -------- | --------- | ------------ | | Bert-base-cased | 28.43 | 22.06 | 21.86 | | Scibert-scivocab-cased | 38.48 | 33.89 | 33.92 | | SSCI-BERT-e2 | 40.43 | 35.37 | 35.33 | | SSCI-SciBERT-e2 | 41.35 | 37.27 | 37.25 | | SSCI-BERT-e4 | 40.65 | 35.49 | 35.40 | | SSCI-SciBERT-e4 | 41.13 | 36.96 | 36.94 | | Support | 2300 | 2300 | 2300 | #### JCR Abstract Classify Dataset | Model | accuracy | macro avg | weighted avg | | ---------------------- | -------- | --------- | ------------ | | Bert-base-cased | 48.59 | 42.8 | 42.82 | | Scibert-scivocab-cased | 55.59 | 51.4 | 51.81 | | SSCI-BERT-e2 | 58.05 | 53.31 | 53.73 | | SSCI-SciBERT-e2 | 59.95 | 56.51 | 57.12 | | SSCI-BERT-e4 | 59.00 | 54.97 | 55.59 | | SSCI-SciBERT-e4 | 60.00 | 56.38 | 56.90 | | Support | 2200 | 2200 | 2200 | #### JCR Mixed Titles and Abstracts Dataset | **Model** | **accuracy** | **macro avg** | **weighted avg** | | ---------------------- | ------------ | -------------- | ----------------- | | Bert-base-cased | 58.24 | 57.27 | 57.25 | | Scibert-scivocab-cased | 59.58 | 58.65 | 58.68 | | SSCI-BERT-e2 | 60.89 | 60.24 | 60.30 | | SSCI-SciBERT-e2 | 60.96 | 60.54 | 60.51 | | SSCI-BERT-e4 | 61.00 | 60.48 | 60.43 | | SSCI-SciBERT-e4 | 61.24 | 60.71 | 60.75 | | Support | 4500 | 4500 | 4500 | #### SSCI Abstract Structural Function Recognition (Classify Dataset) | | Bert-base-cased | SSCI-BERT-e2 | SSCI-BERT-e4 | support | | ------------ | -------------------------- | ------------------- | ------------------- | ----------- | | B | 63.77 | 64.29 | 64.63 | 224 | | P | 53.66 | 57.14 | 57.99 | 95 | | M | 87.63 | 88.43 | 89.06 | 323 | | R | 86.81 | 88.28 | **88.47** | 419 | | C | 78.32 | 79.82 | 78.95 | 316 | | accuracy | 79.59 | 80.9 | 80.97 | 1377 | | macro avg | 74.04 | 75.59 | 75.82 | 1377 | | weighted avg | 79.02 | 80.32 | 80.44 | 1377 | | | **Scibert-scivocab-cased** | **SSCI-SciBERT-e2** | **SSCI-SciBERT-e4** | **support** | | B | 69.98 | **70.95** | **70.95** | 224 | | P | 58.89 | **60.12** | 58.96 | 95 | | M | 89.37 | **90.12** | 88.11 | 323 | | R | 87.66 | 88.07 | 87.44 | 419 | | C | 80.7 | 82.61 | **82.94** | 316 | | accuracy | 81.63 | **82.72** | 82.06 | 1377 | | macro avg | 77.32 | **78.37** | 77.68 | 1377 | | weighted avg | 81.6 | **82.58** | 81.92 | 1377 | ## Cited - If our content is helpful for your research work, please quote our research in your article. 
- https://link.springer.com/article/10.1007/s11192-022-04602-4 ## Disclaimer - The experimental results presented in the report only show the performance under a specific dataset and hyperparameter combination and cannot fully characterize each model. The experimental results may change due to random number seeds and computing equipment. - **Users can use the model arbitrarily within the scope of the license, but we are not responsible for the direct or indirect losses caused by using the content of the project.** ## Acknowledgment - SSCI-BERT was trained based on [BERT-Base-Cased](https://github.com/google-research/bert). - SSCI-SciBERT was trained based on [scibert-scivocab-cased](https://github.com/allenai/scibert)
AnonymousSub/specter-emanuals-model
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: "hr" tags: - text-classification - sentiment-analysis widget: - text: "Poštovani potpredsjedničke Vlade i ministre hrvatskih branitelja, mislite li da ste zapravo iznevjerili svoje suborce s kojima ste 555 dana prosvjedovali u šatoru protiv tadašnjih dužnosnika jer ste zapravo donijeli zakon koji je neprovediv, a birali ste si suradnike koji nemaju etički integritet." --- # bcms-bertic-parlasent-bcs-ter Ternary text classification model based on [`classla/bcms-bertic`](https://huggingface.co/classla/bcms-bertic) and fine-tuned on the BCS Political Sentiment dataset (sentence-level data). This classifier classifies text into only three categories: Negative, Neutral, and Positive. For the binary classifier (Negative, Other) check [this model](https://huggingface.co/classla/bcms-bertic-parlasent-bcs-bi ). For details on the dataset and the finetuning procedure, please see [this paper](https://arxiv.org/abs/2206.00929). ## Fine-tuning hyperparameters Fine-tuning was performed with `simpletransformers`. Beforehand a brief sweep for the optimal number of epochs was performed and the presumed best value was 9. Other arguments were kept default. ```python model_args = { "num_train_epochs": 9 } ``` ## Performance The same pipeline was run with two other transformer models and `fasttext` for comparison. Macro F1 scores were recorded for each of the 6 fine-tuning sessions and post festum analyzed. | model | average macro F1 | |---------------------------------|--------------------| | bcms-bertic-parlasent-bcs-ter | 0.7941 ± 0.0101 ** | | EMBEDDIA/crosloengual-bert | 0.7709 ± 0.0113 | | xlm-roberta-base | 0.7184 ± 0.0139 | | fasttext + CLARIN.si embeddings | 0.6312 ± 0.0043 | Two best performing models have been compared with the Mann-Whitney U test to calculate p-values (** denotes p<0.01). ## Use example with `simpletransformers==0.63.7` ```python from simpletransformers.classification import ClassificationModel model = ClassificationModel("electra", "classla/bcms-bertic-parlasent-bcs-ter") predictions, logits = model.predict([ "Vi niste normalni", "Đački autobusi moraju da voze svaki dan", "Ovo je najbolji zakon na svetu", ] ) predictions # Output: array([0, 1, 2]) [model.config.id2label[i] for i in predictions] # Output: ['Negative', 'Neutral', 'Positive'] ``` ## Citation If you use the model, please cite the following paper on which the original model is based: ``` @inproceedings{ljubesic-lauc-2021-bertic, title = "{BERT}i{\'c} - The Transformer Language Model for {B}osnian, {C}roatian, {M}ontenegrin and {S}erbian", author = "Ljube{\v{s}}i{\'c}, Nikola and Lauc, Davor", booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing", month = apr, year = "2021", address = "Kiyv, Ukraine", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bsnlp-1.5", pages = "37--42", } ``` and the paper describing the dataset and methods for the current finetuning: ``` @misc{https://doi.org/10.48550/arxiv.2206.00929, doi = {10.48550/ARXIV.2206.00929}, url = {https://arxiv.org/abs/2206.00929}, author = {Mochtak, Michal and Rupnik, Peter and Ljubešič, Nikola}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {The ParlaSent-BCS dataset of sentiment-annotated parliamentary debates from Bosnia-Herzegovina, Croatia, and Serbia}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution Share Alike 4.0 International} } ```
Anthos23/sentiment-roberta-large-english-finetuned-sentiment-analysis
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - ca license: apache-2.0 tags: - "catalan" - "masked-lm" - "RoBERTa-base-ca-v2" - "CaText" - "Catalan Textual Corpus" widget: - text: "El Català és una llengua molt <mask>." - text: "Salvador Dalí va viure a <mask>." - text: "La Costa Brava té les millors <mask> d'Espanya." - text: "El cacaolat és un batut de <mask>." - text: "<mask> és la capital de la Garrotxa." - text: "Vaig al <mask> a buscar bolets." - text: "Antoni Gaudí vas ser un <mask> molt important per la ciutat." - text: "Catalunya és una referència en <mask> a nivell europeu." --- # Catalan BERTa-v2 (roberta-base-ca-v2) base model ## Table of Contents <details> <summary>Click to expand</summary> - [Model description](#model-description) - [Intended uses and limitations](#intended-use) - [How to use](#how-to-use) - [Limitations and bias](#limitations-and-bias) - [Training](#training) - [Training data](#training-data) - [Training procedure](#training-procedure) - [Evaluation](#evaluation) - [CLUB benchmark](#club-benchmark) - [Evaluation results](#evaluation-results) - [Licensing Information](#licensing-information) - [Additional information](#additional-information) - [Author](#author) - [Contact information](#contact-information) - [Copyright](#copyright) - [Licensing information](#licensing-information) - [Funding](#funding) - [Citing information](#citing-information) - [Disclaimer](#disclaimer) </details> ## Model description The **roberta-base-ca-v2** is a transformer-based masked language model for the Catalan language. It is based on the [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) base model and has been trained on a medium-size corpus collected from publicly available corpora and crawlers. ## Intended uses and limitations **roberta-base-ca-v2** model is ready-to-use only for masked language modeling to perform the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition. ## How to use Here is how to use this model: ```python from transformers import AutoModelForMaskedLM from transformers import AutoTokenizer, FillMaskPipeline from pprint import pprint tokenizer_hf = AutoTokenizer.from_pretrained('projecte-aina/roberta-base-ca-v2') model = AutoModelForMaskedLM.from_pretrained('projecte-aina/roberta-base-ca-v2') model.eval() pipeline = FillMaskPipeline(model, tokenizer_hf) text = f"Em dic <mask>." res_hf = pipeline(text) pprint([r['token_str'] for r in res_hf]) ``` ## Limitations and bias At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated. ## Training ### Training data The training corpus consists of several corpora gathered from web crawling and public corpora. | Corpus | Size in GB | |-------------------------|------------| | Catalan Crawling | 13.00 | | Wikipedia | 1.10 | | DOGC | 0.78 | | Catalan Open Subtitles | 0.02 | | Catalan Oscar | 4.00 | | CaWaC | 3.60 | | Cat. General Crawling | 2.50 | | Cat. 
Goverment Crawling | 0.24 | | ACN | 0.42 | | Padicat | 0.63 | | RacoCatalá | 8.10 | | Nació Digital | 0.42 | | Vilaweb | 0.06 | | Tweets | 0.02 | ### Training procedure The training corpus has been tokenized using a byte version of [Byte-Pair Encoding (BPE)](https://github.com/openai/gpt-2) used in the original [RoBERTA](https://github.com/pytorch/fairseq/tree/master/examples/roberta) model with a vocabulary size of 50,262 tokens. The RoBERTa-ca-v2 pretraining consists of a masked language model training that follows the approach employed for the RoBERTa base model with the same hyperparameters as in the original work. The training lasted a total of 96 hours with 16 NVIDIA V100 GPUs of 16GB DDRAM. ## Evaluation ### CLUB benchmark The BERTa model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), that has been created along with the model. It contains the following tasks and their related datasets: 1. Named Entity Recognition (NER) **[NER (AnCora)](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: extracted named entities from the original [Ancora](https://doi.org/10.5281/zenodo.4762030) version, filtering out some unconventional ones, like book titles, and transcribed them into a standard CONLL-IOB format 2. Part-of-Speech Tagging (POS) **[POS (AnCora)](https://zenodo.org/record/4762031#.YKaFjqGxWUk)**: from the [Universal Dependencies treebank](https://github.com/UniversalDependencies/UD_Catalan-AnCora) of the well-known Ancora corpus. 3. Text Classification (TC) **[TeCla](https://huggingface.co/datasets/projecte-aina/tecla)**: consisting of 137k news pieces from the Catalan News Agency ([ACN](https://www.acn.cat/)) corpus, with 30 labels. 4. Textual Entailment (TE) **[TE-ca](https://huggingface.co/datasets/projecte-aina/teca)**: consisting of 21,163 pairs of premises and hypotheses, annotated according to the inference relation they have (implication, contradiction, or neutral), extracted from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus). 5. Semantic Textual Similarity (STS) **[STS-ca](https://huggingface.co/datasets/projecte-aina/sts-ca)**: consisting of more than 3000 sentence pairs, annotated with the semantic similarity between them, scraped from the [Catalan Textual Corpus](https://huggingface.co/datasets/projecte-aina/catalan_textual_corpus). 6. Question Answering (QA): **[VilaQuAD](https://huggingface.co/datasets/projecte-aina/vilaquad)**: contains 6,282 pairs of questions and answers, outsourced from 2095 Catalan language articles from VilaWeb newswire text. **[ViquiQuAD](https://huggingface.co/datasets/projecte-aina/viquiquad)**: consisting of more than 15,000 questions outsourced from Catalan Wikipedia randomly chosen from a set of 596 articles that were originally written in Catalan. **[CatalanQA](https://huggingface.co/datasets/projecte-aina/catalanqa)**: an aggregation of 2 previous datasets (VilaQuAD and ViquiQuAD), 21,427 pairs of Q/A balanced by type of question, containing one question and one answer per context, although the contexts can repeat multiple times. **[XQuAD-ca](https://huggingface.co/datasets/projecte-aina/xquad-ca)**: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a _test set_. 
Here are the train/dev/test splits of the datasets: | Task (Dataset) | Total | Train | Dev | Test | |:--|:--|:--|:--|:--| | NER (Ancora) |13,581 | 10,628 | 1,427 | 1,526 | | POS (Ancora)| 16,678 | 13,123 | 1,709 | 1,846 | | STS (STS-ca) | 3,073 | 2,073 | 500 | 500 | | TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786| | TE (TE-ca) | 21,163 | 16,930 | 2,116 | 2,117 | QA (VilaQuAD) | 6,282 | 3,882 | 1,200 | 1,200 | | QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 | | QA (CatalanQA) | 21,427 | 17,135 | 2,157 | 2,135 | ### Evaluation results | Task | NER (F1) | POS (F1) | STS-ca (Comb) | TeCla (Acc.) | TEca (Acc.) | VilaQuAD (F1/EM)| ViquiQuAD (F1/EM) | CatalanQA (F1/EM) | XQuAD-ca <sup>1</sup> (F1/EM) | | ------------|:-------------:| -----:|:------|:------|:-------|:------|:----|:----|:----| | RoBERTa-large-ca-v2 | **89.82** | **99.02** | **83.41** | **75.46** | **83.61** | **89.34/75.50** | **89.20**/75.77 | **90.72/79.06** | **73.79**/55.34 | | RoBERTa-base-ca-v2 | 89.29 | 98.96 | 79.07 | 74.26 | 83.14 | 87.74/72.58 | 88.72/**75.91** | 89.50/76.63 | 73.64/**55.42** | | BERTa | 89.76 | 98.96 | 80.19 | 73.65 | 79.26 | 85.93/70.58 | 87.12/73.11 | 89.17/77.14 | 69.20/51.47 | | mBERT | 86.87 | 98.83 | 74.26 | 69.90 | 74.63 | 82.78/67.33 | 86.89/73.53 | 86.90/74.19 | 68.79/50.80 | | XLM-RoBERTa | 86.31 | 98.89 | 61.61 | 70.14 | 33.30 | 86.29/71.83 | 86.88/73.11 | 88.17/75.93 | 72.55/54.16 | <sup>1</sup> : Trained on CatalanQA, tested on XQuAD-ca. ## Additional information ### Author Text Mining Unit (TeMU) at the Barcelona Supercomputing Center ([email protected]) ### Contact information For further information, send an email to [email protected] ### Copyright Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center ### Licensing information [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0) ### Funding This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/index.html#googtrans(ca|en) within the framework of [Projecte AINA](https://politiquesdigitals.gencat.cat/ca/economia/catalonia-ai/aina). ### Citation information If you use any of these resources (datasets or models) in your work, please cite our latest paper: ```bibtex @inproceedings{armengol-estape-etal-2021-multilingual, title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan", author = "Armengol-Estap{\'e}, Jordi and Carrino, Casimiro Pio and Rodriguez-Penagos, Carlos and de Gibert Bonet, Ona and Armentano-Oller, Carme and Gonzalez-Agirre, Aitor and Melero, Maite and Villegas, Marta", booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021", month = aug, year = "2021", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-acl.437", doi = "10.18653/v1/2021.findings-acl.437", pages = "4933--4946", } ``` ### Disclaimer <details> <summary>Click to expand</summary> The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions. 
When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models) or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models. </details>
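Since the card stresses that the checkpoint is intended to be fine-tuned on downstream tasks (QA, text classification, NER) rather than used beyond fill-mask, here is a minimal sketch of attaching a classification head to the published checkpoint with the standard `transformers` API. The number of labels, the toy sentences and the single optimization step are illustrative assumptions, not part of the released model or of the CLUB fine-tuning setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the released Catalan checkpoint and add a randomly initialised
# classification head on top (num_labels=3 is an illustrative assumption).
model_id = "projecte-aina/roberta-base-ca-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=3)

# Toy batch; in a real fine-tuning run these would come from a labelled
# dataset such as TeCla or TE-ca.
texts = ["El Barça guanya el partit.", "El govern aprova els pressupostos."]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([0, 1])

# One forward/backward step to check that the setup trains end to end.
outputs = model(**batch, labels=labels)
outputs.loss.backward()
print("loss:", outputs.loss.item())
```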
Anthos23/test_trainer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-19 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-19 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6305 - Wer: 0.4499 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.4816 | 2.74 | 400 | 1.0717 | 0.8927 | | 0.751 | 5.48 | 800 | 0.7155 | 0.7533 | | 0.517 | 8.22 | 1200 | 0.7039 | 0.6675 | | 0.3988 | 10.96 | 1600 | 0.5935 | 0.6149 | | 0.3179 | 13.7 | 2000 | 0.6477 | 0.5999 | | 0.2755 | 16.44 | 2400 | 0.5549 | 0.5798 | | 0.2343 | 19.18 | 2800 | 0.6626 | 0.5798 | | 0.2103 | 21.92 | 3200 | 0.6488 | 0.5674 | | 0.1877 | 24.66 | 3600 | 0.5874 | 0.5339 | | 0.1719 | 27.4 | 4000 | 0.6354 | 0.5389 | | 0.1603 | 30.14 | 4400 | 0.6612 | 0.5210 | | 0.1401 | 32.88 | 4800 | 0.6676 | 0.5131 | | 0.1286 | 35.62 | 5200 | 0.6366 | 0.5075 | | 0.1159 | 38.36 | 5600 | 0.6064 | 0.4977 | | 0.1084 | 41.1 | 6000 | 0.6530 | 0.4835 | | 0.0974 | 43.84 | 6400 | 0.6118 | 0.4853 | | 0.0879 | 46.58 | 6800 | 0.6316 | 0.4770 | | 0.0815 | 49.32 | 7200 | 0.6125 | 0.4664 | | 0.0708 | 52.05 | 7600 | 0.6449 | 0.4683 | | 0.0651 | 54.79 | 8000 | 0.6068 | 0.4571 | | 0.0555 | 57.53 | 8400 | 0.6305 | 0.4499 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
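The card gives training details but no usage snippet; below is a minimal inference sketch under two assumptions: the checkpoint path is a placeholder (the card does not state the final repository id), and a `Wav2Vec2Processor` (feature extractor + tokenizer) was saved alongside the fine-tuned weights.

```python
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Placeholder path: point this at wherever the fine-tuned checkpoint was saved.
checkpoint = "path/to/wav2vec2-19"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)
model.eval()

# Audio must be a mono float array sampled at 16 kHz (e.g. loaded with
# torchaudio or the datasets Audio feature); one second of silence stands in here.
speech = np.zeros(16000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: most likely token per frame, then collapse repeats/blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```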
Anubhav23/indianlegal
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # LeViT LeViT-384 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference ](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT). Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Usage Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-384') model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-384') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ```
Anubhav23/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # LeViT LeViT-256 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference ](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT). Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Usage Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-256') model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-256') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ```
Anupam/QuestionClassifier
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # LeViT LeViT-192 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference ](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT). Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Usage Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-192') model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-192') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ```
Apisate/DialoGPT-small-jordan
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace --- # LeViT LeViT-128S model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference ](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT). Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Usage Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import LevitFeatureExtractor, LevitForImageClassificationWithTeacher from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-128S') model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-128S') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits # model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ```
Apisate/Discord-Ai-Bot
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-06-01T11:30:01Z
--- license: wtfpl language: es tags: - gpt-j - spanish - LLM - gpt-j-6b --- # BERTIN-GPT-J-6B with 8-bit weights (Quantized) ### Go [here](https://huggingface.co/mrm8488/bertin-gpt-j-6B-ES-v1-8bit) to use the latest checkpoint. This model (and model card) is an adaptation of [hivemind/gpt-j-6B-8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit), so all credits to him/her. This is a version of **[bertin-project/bertin-gpt-j-6B](https://huggingface.co/bertin-project/bertin-gpt-j-6B)** that is modified so you can generate **and fine-tune the model in colab or equivalent desktop GPU (e.g. single 1080Ti)**. Here's how to run it: [![colab](https://camo.githubusercontent.com/84f0493939e0c4de4e6dbe113251b4bfb5353e57134ffd9fcab6b8714514d4d1/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667)](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es) __The [original GPT-J](https://huggingface.co/EleutherAI/gpt-j-6B/tree/main)__ takes 22+ GB memory for float32 parameters alone, and that's before you account for gradients & optimizer. Even if you cast everything to 16-bit, it will still not fit onto most single-GPU setups short of A6000 and A100. You can run inference [on TPU](https://colab.research.google.com/github/kingoflolz/mesh-transformer-jax/blob/master/colab_demo.ipynb) or CPUs, but fine-tuning is way more expensive. Here, we apply several techniques to make GPT-J usable and fine-tunable on a single GPU with ~11 GB memory: - large weight tensors are quantized using dynamic 8-bit quantization and de-quantized just-in-time for multiplication - gradient checkpointing stores only one activation per layer, using dramatically less memory at the cost of 30% slower training - scalable fine-tuning with [LoRA](https://arxiv.org/abs/2106.09685) and [8-bit Adam](https://arxiv.org/abs/2110.02861) In other words, all of the large weight matrices are frozen in 8-bit, and you only train small adapters and optionally 1d tensors (layernorm scales, biases). ![img](https://i.imgur.com/n4XXo1x.png) __Does 8-bit affect model quality?__ Technically yes, but the effect is negligible in practice. [This notebook measures wikitext test perplexity](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/check_perplexity.ipynb) and it is nigh indistinguishable from the original GPT-J. The quantized model is even slightly better, but that is not statistically significant. Our code differs from other 8-bit methods in that we use **8-bit only for storage, and all computations are performed in float16 or float32**. As a result, we can take advantage of nonlinear quantization that fits each individual weight distribution. Such nonlinear quantization does not accelerate inference, but it allows for much smaller error. __What about performance?__ Both checkpointing and de-quantization have some overhead, but it's surprisingly manageable. Depending on GPU and batch size, the quantized model is 1-10% slower than the original model on top of using gradient checkpoints (which is 30% overhead). In short, this is because block-wise quantization from bitsandbytes is really fast on GPU. ### How should I fine-tune the model? We recommend starting with the original hyperparameters from [the LoRA paper](https://arxiv.org/pdf/2106.09685.pdf). On top of that, there is one more trick to consider: the overhead from de-quantizing weights does not depend on batch size. 
As a result, the larger the batch size you can fit, the more efficiently you will train (a minimal parameter-freezing sketch is included after the usage example below). ### Where can I train for free? You can train fine in colab, but if you get a K80, it's probably best to switch to other free gpu providers: [kaggle](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a), [aws sagemaker](https://towardsdatascience.com/amazon-sagemaker-studio-lab-a-great-alternative-to-google-colab-7194de6ef69a) or [paperspace](https://docs.paperspace.com/gradient/more/instance-types/free-instances). For instance, this is the same notebook [running in kaggle](https://www.kaggle.com/justheuristic/dmazur-converted) using a more powerful P100 instance. ### Can I use this technique with other models? The model was converted using [this notebook](https://nbviewer.org/urls/huggingface.co/hivemind/gpt-j-6B-8bit/raw/main/convert-gpt-j.ipynb). It can be adapted to work with other model types. However, please bear in mind that some models replace Linear and Embedding with custom alternatives that require their own BNBWhateverWithAdapters. ### How to use ```sh wget https://huggingface.co/mrm8488/bertin-gpt-j-6B-ES-8bit/resolve/main/utils.py -O Utils.py pip install transformers pip install bitsandbytes-cuda111==0.26.0 ``` ```py import transformers import torch from Utils import GPTJBlock, GPTJForCausalLM device = "cuda" if torch.cuda.is_available() else "cpu" transformers.models.gptj.modeling_gptj.GPTJBlock = GPTJBlock # monkey-patch GPT-J ckpt = "mrm8488/bertin-gpt-j-6B-ES-8bit" tokenizer = transformers.AutoTokenizer.from_pretrained(ckpt) model = GPTJForCausalLM.from_pretrained(ckpt, pad_token_id=tokenizer.eos_token_id, low_cpu_mem_usage=True).to(device) prompt = tokenizer("El sentido de la vida es", return_tensors='pt') prompt = {key: value.to(device) for key, value in prompt.items()} out = model.generate(**prompt, max_length=64, do_sample=True) print(tokenizer.decode(out[0])) ```
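To make the fine-tuning recipe described above concrete, here is a minimal sketch of the parameter-freezing step the card describes: keep the 8-bit weight matrices frozen and mark only adapter weights and 1-D tensors (biases, layernorm scales) as trainable. The name-matching heuristic and the choice of optimizer are illustrative assumptions, not the exact code used to train the released checkpoint.

```python
import torch
from torch import nn

def mark_trainable(model: nn.Module, lr: float = 1e-4) -> torch.optim.Optimizer:
    """Freeze the 8-bit base weights and train only adapters + 1-D tensors."""
    # Freeze everything by default; the big 8-bit matrices stay untouched.
    for param in model.parameters():
        param.requires_grad = False

    trainable = []
    for name, param in model.named_parameters():
        # LoRA-style adapters plus biases / layernorm scales, per the card.
        if "adapter" in name or param.ndim == 1:
            param.requires_grad = True
            trainable.append(param)

    # Plain Adam here; bitsandbytes' 8-bit Adam can be swapped in to save
    # optimizer memory, as the card recommends.
    optimizer = torch.optim.Adam(trainable, lr=lr)
    print(f"training {sum(p.numel() for p in trainable):,} / "
          f"{sum(p.numel() for p in model.parameters()):,} parameters")
    return optimizer

# Usage (continuing from the `model` loaded in the snippet above):
# optimizer = mark_trainable(model)
```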
ArBert/albert-base-v2-finetuned-ner-agglo-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2022-06-01T11:35:03Z
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: En-Tn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # En-Tn This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-tn](https://huggingface.co/Helsinki-NLP/opus-mt-en-tn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.6174 - Bleu: 32.2889 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0+cu113 - Datasets 2.2.2 - Tokenizers 0.12.1
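For completeness, here is a minimal inference sketch. The checkpoint path is a placeholder since the card does not give the repository id of the fine-tuned model; the base model `Helsinki-NLP/opus-mt-en-tn` loads the same way if you want a comparison point.

```python
from transformers import pipeline

# Placeholder path: replace with the repository id of the fine-tuned En-Tn checkpoint.
translator = pipeline("translation", model="path/to/En-Tn")

result = translator("Good morning, how are you today?", max_length=64)
print(result[0]["translation_text"])
```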
ArBert/albert-base-v2-finetuned-ner-gmm-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice model-index: - name: wav2vec2-large-xls-r-300m-Hindi-colab-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-Hindi-colab-v4 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.11.3 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.10.3
ArBert/albert-base-v2-finetuned-ner-kmeans-twitter
[ "pytorch", "tensorboard", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="jayeshgar/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
ArBert/bert-base-uncased-finetuned-ner-kmeans-twitter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
ArBert/roberta-base-finetuned-ner-agglo-twitter
[ "pytorch", "tensorboard", "roberta", "token-classification", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- language: - it metrics: - type squad datasets: - squad_it tags: - Q&A widget: - text: "Come si chiama il primo re di Roma?" context: "Roma è una delle più belle ed antiche città del mondo. Il più famoso monumento di Roma è il Colosseo. Un altro monumento molto bello è la Colonna Traiana. Il primo re di Roma è stato Romolo. Roma ha avuto tanti re: Numa Pompilio, Tullio Ostilio." - text: "Qual è il più famoso monumento di Roma?" context: "Roma è una delle più belle ed antiche città del mondo. Il più famoso monumento di Roma è il Colosseo. Un altro monumento molto bello è la Colonna Traiana. Il primo re di Roma è stato Romolo. Roma ha avuto tanti re: Numa Pompilio, Tullio Ostilio." model-index: - name: squad_it_xxl_cased_hub1 results: [] --- # squad_it_xxl_cased This is a **BERT**-based model, trained on cased Italian text, that can be used for [Extractive Q&A](https://huggingface.co/tasks/question-answering) on Italian texts. ## Model description This model has been trained on the **squad_it** dataset starting from the pre-trained model [dbmdz/bert-base-italian-xxl-cased](https://huggingface.co/dbmdz/bert-base-italian-xxl-cased). These are the metrics computed on the evaluation set: - EM: 63.95 - F1: 75.27 #### How to use ```python from transformers import pipeline pipe_qa = pipeline('question-answering', model='luigisaetta/squad_it_xxl_cased_hub1') pipe_qa(context="Io sono nato a Napoli. Il mare bagna Napoli. Napoli è la più bella città del mondo", question="Qual è la più bella città del mondo?") ``` ## Intended uses & limitations This model can be used for Extractive Q&A on Italian text. ## Training and evaluation data [squad_it](https://huggingface.co/datasets/squad_it) ## Training procedure see code in this [NoteBook](https://github.com/luigisaetta/nlp-qa-italian/blob/main/train_squad_it_final1.ipynb) ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 1234 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 - mixed_precision_training: Native AMP ### Framework versions - Transformers 4.20.0.dev0 - Pytorch 1.9.0 - Datasets 1.11.0 - Tokenizers 0.12.1
ArBert/roberta-base-finetuned-ner-gmm-twitter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: autotrain language: unk widget: - text: "الكل ينتقد الرئيس على إخفاقاته" datasets: - cjbarrie/autotrain-data-masress-medcrit-binary-5 co2_eq_emissions: 0.01017487638098474 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 937130980 - CO2 Emissions (in grams): 0.01017487638098474 ## Validation Metrics - Loss: 0.757265031337738 - Accuracy: 0.7551020408163265 - Macro F1: 0.7202470830473576 - Micro F1: 0.7551020408163265 - Weighted F1: 0.7594301962377263 - Macro Precision: 0.718716577540107 - Micro Precision: 0.7551020408163265 - Weighted Precision: 0.7711448215649895 - Macro Recall: 0.7285714285714286 - Micro Recall: 0.7551020408163265 - Weighted Recall: 0.7551020408163265 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/cjbarrie/autotrain-masress-medcrit-binary-5-937130980 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("cjbarrie/autotrain-masress-medcrit-binary-5-937130980", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("cjbarrie/autotrain-masress-medcrit-binary-5-937130980", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
ArBert/roberta-base-finetuned-ner-kmeans
[ "pytorch", "tensorboard", "roberta", "token-classification", "dataset:conll2003", "transformers", "generated_from_trainer", "license:mit", "model-index", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.3206 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.2156 | 1.0 | 8235 | 1.1791 | | 0.9413 | 2.0 | 16470 | 1.2182 | | 0.7514 | 3.0 | 24705 | 1.3206 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
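The card reports squad_v2 results but no usage snippet; here is a minimal question-answering sketch. The checkpoint path is a placeholder (the card does not state the final repository id), and the example context is illustrative.

```python
from transformers import pipeline

# Placeholder path: replace with the repository id of the fine-tuned checkpoint.
qa = pipeline("question-answering",
              model="path/to/distilbert-base-uncased-finetuned-squad")

context = "The Eiffel Tower was completed in 1889 and is located in Paris."
print(qa(question="When was the Eiffel Tower completed?", context=context))

# Because the model was tuned on squad_v2, which contains unanswerable questions,
# the pipeline can also surface an empty answer for them.
print(qa(question="Who designed the Sydney Opera House?", context=context,
         handle_impossible_answer=True))
```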
Araby/Arabic-TTS
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-01T13:42:50Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="bishmoy/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Aracatto/Catto
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-01T13:45:08Z
--- library_name: stable-baselines3 tags: - CartPoleNoVel-v1 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: RecurrentPPO results: - metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPoleNoVel-v1 type: CartPoleNoVel-v1 --- # **RecurrentPPO** Agent playing **CartPoleNoVel-v1** This is a trained model of a **RecurrentPPO** agent playing **CartPoleNoVel-v1** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo ppo_lstm --env CartPoleNoVel-v1 -orga sb3 -f logs/ python enjoy.py --algo ppo_lstm --env CartPoleNoVel-v1 -f logs/ ``` ## Training (with the RL Zoo) ``` python train.py --algo ppo_lstm --env CartPoleNoVel-v1 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo ppo_lstm --env CartPoleNoVel-v1 -f logs/ -orga sb3 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 256), ('clip_range', 'lin_0.2'), ('ent_coef', 0.0), ('gae_lambda', 0.8), ('gamma', 0.98), ('learning_rate', 'lin_0.001'), ('n_envs', 8), ('n_epochs', 20), ('n_steps', 32), ('n_timesteps', 100000.0), ('normalize', True), ('policy', 'MlpLstmPolicy'), ('policy_kwargs', 'dict( ortho_init=False, activation_fn=nn.ReLU, ' 'lstm_hidden_size=64, enable_critic_lstm=True, ' 'net_arch=[dict(pi=[64], vf=[64])] )'), ('normalize_kwargs', {'norm_obs': True, 'norm_reward': False})]) ```
Araf/Ummah
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python import gym # load_from_hub and evaluate_agent are helper functions assumed to be defined alongside this snippet (e.g., in the training notebook) model = load_from_hub(repo_id="bishmoy/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Aran/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-base-uncased-finetuned-filtered-0602 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-filtered-0602 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1959 - Accuracy: 0.9783 - F1: 0.9783 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 0.1777 | 1.0 | 3180 | 0.2118 | 0.9563 | 0.9566 | | 0.1409 | 2.0 | 6360 | 0.1417 | 0.9736 | 0.9736 | | 0.1035 | 3.0 | 9540 | 0.1454 | 0.9739 | 0.9739 | | 0.0921 | 4.0 | 12720 | 0.1399 | 0.9755 | 0.9755 | | 0.0607 | 5.0 | 15900 | 0.1150 | 0.9792 | 0.9792 | | 0.0331 | 6.0 | 19080 | 0.1770 | 0.9758 | 0.9758 | | 0.0289 | 7.0 | 22260 | 0.1782 | 0.9767 | 0.9767 | | 0.0058 | 8.0 | 25440 | 0.1877 | 0.9796 | 0.9796 | | 0.008 | 9.0 | 28620 | 0.2034 | 0.9764 | 0.9764 | | 0.0017 | 10.0 | 31800 | 0.1959 | 0.9783 | 0.9783 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.12.1
Arina/Erine
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: bert-large-uncased-finetuned-filtered-0602 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-large-uncased-finetuned-filtered-0602 This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.8409 - Accuracy: 0.1667 - F1: 0.0476 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 1.8331 | 1.0 | 3180 | 1.8054 | 0.1667 | 0.0476 | | 1.8158 | 2.0 | 6360 | 1.8196 | 0.1667 | 0.0476 | | 1.8088 | 3.0 | 9540 | 1.8059 | 0.1667 | 0.0476 | | 1.8072 | 4.0 | 12720 | 1.7996 | 0.1667 | 0.0476 | | 1.8182 | 5.0 | 15900 | 1.7962 | 0.1667 | 0.0476 | | 1.7993 | 6.0 | 19080 | 1.8622 | 0.1667 | 0.0476 | | 1.7963 | 7.0 | 22260 | 1.8378 | 0.1667 | 0.0476 | | 1.7956 | 8.0 | 25440 | 1.8419 | 0.1667 | 0.0476 | | 1.7913 | 9.0 | 28620 | 1.8406 | 0.1667 | 0.0476 | | 1.7948 | 10.0 | 31800 | 1.8409 | 0.1667 | 0.0476 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.9.0 - Datasets 1.16.1 - Tokenizers 0.12.1
Ashl3y/model_name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-06-01T19:44:40Z
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1258515252163022848/_O1bOXBQ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1530279378332041220/1ysZA-S8_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Justin Moran & ToxicAct 🇺🇸 ⚽️</div> <div style="text-align: center; font-size: 14px;">@disgustingact84-kickswish</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Justin Moran & ToxicAct 🇺🇸 ⚽️. | Data | Justin Moran | ToxicAct 🇺🇸 ⚽️ | | --- | --- | --- | | Tweets downloaded | 3237 | 3247 | | Retweets | 286 | 260 | | Short tweets | 81 | 333 | | Tweets kept | 2870 | 2654 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3vwd4eeo/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @disgustingact84-kickswish's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/24jluur0) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/24jluur0/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/disgustingact84-kickswish') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Aspect11/DialoGPT-Medium-LiSBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-06-01T20:06:54Z
--- language: en thumbnail: http://www.huggingtweets.com/disgustingact84-kickswish-managertactical/1654115021712/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1530279378332041220/1ysZA-S8_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1258515252163022848/_O1bOXBQ_400x400.jpg&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1360389551336865797/6RERF_Gg_400x400.jpg&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">ToxicAct 🇺🇸 ⚽️ & Justin Moran & Tactical Manager</div> <div style="text-align: center; font-size: 14px;">@disgustingact84-kickswish-managertactical</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from ToxicAct 🇺🇸 ⚽️ & Justin Moran & Tactical Manager. | Data | ToxicAct 🇺🇸 ⚽️ | Justin Moran | Tactical Manager | | --- | --- | --- | --- | | Tweets downloaded | 3247 | 3237 | 3250 | | Retweets | 260 | 286 | 47 | | Short tweets | 333 | 81 | 302 | | Tweets kept | 2654 | 2870 | 2901 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/3rtzdst3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @disgustingact84-kickswish-managertactical's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3lhxffhi) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3lhxffhi/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/disgustingact84-kickswish-managertactical') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. 
## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Atchuth/MBOT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_subjqa pipeline_tag: text2text-generation tags: - question generation widget: - text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/t5-large-subjqa-books-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: books args: books metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.0 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 23.68 - name: METEOR (Question Generation) type: meteor_question_generation value: 20.83 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 92.89 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 62.51 --- # Model Card of `lmqg/t5-large-subjqa-books-qg` This model is fine-tuned version of [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: books) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (books) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-large-subjqa-books-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-large-subjqa-books-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-subjqa-books-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.books.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:-----------------------------------------------------------------| | BERTScore | 92.89 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 22.66 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 13.78 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 4.31 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 0 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 20.83 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 62.51 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 23.68 | books | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: books - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: lmqg/t5-large-squad - max_length: 512 - max_length_output: 32 - epoch: 4 - batch: 16 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-subjqa-books-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Ateeb/QA
[ "pytorch", "distilbert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForQuestionAnswering" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_subjqa pipeline_tag: text2text-generation tags: - question generation widget: - text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/t5-large-subjqa-tripadvisor-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: tripadvisor args: tripadvisor metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 5.35 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 27.69 - name: METEOR (Question Generation) type: meteor_question_generation value: 27.45 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 94.46 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 67.76 --- # Model Card of `lmqg/t5-large-subjqa-tripadvisor-qg` This model is fine-tuned version of [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: tripadvisor) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (tripadvisor) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-large-subjqa-tripadvisor-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-large-subjqa-tripadvisor-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-subjqa-tripadvisor-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.tripadvisor.json) | | Score | Type | Dataset | |:-----------|--------:|:------------|:-----------------------------------------------------------------| | BERTScore | 94.46 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 26.44 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 17.84 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 9.13 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 5.35 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 27.45 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 67.76 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 27.69 | tripadvisor | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: tripadvisor - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: lmqg/t5-large-squad - max_length: 512 - max_length_output: 32 - epoch: 1 - batch: 16 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-subjqa-tripadvisor-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Ateeb/SquadQA
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_subjqa pipeline_tag: text2text-generation tags: - question generation widget: - text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/t5-large-subjqa-grocery-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: grocery args: grocery metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 1.13 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 17.4 - name: METEOR (Question Generation) type: meteor_question_generation value: 20.64 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 91.39 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 63.41 --- # Model Card of `lmqg/t5-large-subjqa-grocery-qg` This model is fine-tuned version of [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: grocery) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (grocery) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-large-subjqa-grocery-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-large-subjqa-grocery-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-subjqa-grocery-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 91.39 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 14.13 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 7.78 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 2.94 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 1.13 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 20.64 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 63.41 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 17.4 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: grocery - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: lmqg/t5-large-squad - max_length: 512 - max_length_output: 32 - epoch: 3 - batch: 16 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 32 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-subjqa-grocery-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Ateeb/asd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_subjqa pipeline_tag: text2text-generation tags: - question generation widget: - text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/t5-large-subjqa-movies-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: movies args: movies metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 0.0 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 25.06 - name: METEOR (Question Generation) type: meteor_question_generation value: 21.7 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 93.64 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 63.88 --- # Model Card of `lmqg/t5-large-subjqa-movies-qg` This model is fine-tuned version of [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: movies) via [`lmqg`](https://github.com/asahi417/lm-question-generation). 
### Overview - **Language model:** [lmqg/t5-large-squad](https://huggingface.co/lmqg/t5-large-squad) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (movies) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/t5-large-subjqa-movies-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/t5-large-subjqa-movies-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-subjqa-movies-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.movies.json) | | Score | Type | Dataset | |:-----------|--------:|:-------|:-----------------------------------------------------------------| | BERTScore | 93.64 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 24.15 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 15.44 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 5 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 0 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 21.7 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 63.88 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 25.06 | movies | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: movies - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: ['qg'] - model: lmqg/t5-large-squad - max_length: 512 - max_length_output: 32 - epoch: 6 - batch: 16 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.0 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-subjqa-movies-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Augustab/distilbert-base-uncased-finetuned-cola
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - image_folder metrics: - accuracy model-index: - name: swin-tiny-patch4-window7-224-finetuned-eurosat results: - task: name: Image Classification type: image-classification dataset: name: image_folder type: image_folder args: default metrics: - name: Accuracy type: accuracy value: 0.9618518518518518 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # swin-tiny-patch4-window7-224-finetuned-eurosat This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.1199 - Accuracy: 0.9619 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3627 | 0.99 | 47 | 0.1988 | 0.9389 | | 0.2202 | 1.99 | 94 | 0.1280 | 0.9604 | | 0.1948 | 2.99 | 141 | 0.1199 | 0.9619 | ### Framework versions - Transformers 4.19.2 - Pytorch 1.11.0 - Datasets 2.2.2 - Tokenizers 0.12.1
Augustvember/WokkaBot2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_subjqa pipeline_tag: text2text-generation tags: - question generation widget: - text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/bart-large-subjqa-restaurants-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: restaurants args: restaurants metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 5.54 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 24.77 - name: METEOR (Question Generation) type: meteor_question_generation value: 22.46 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 93.23 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 63.57 --- # Model Card of `lmqg/bart-large-subjqa-restaurants-qg` This model is fine-tuned version of [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: restaurants) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (restaurants) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/bart-large-subjqa-restaurants-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-large-subjqa-restaurants-qg") output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-subjqa-restaurants-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.restaurants.json) | | Score | Type | Dataset | |:-----------|--------:|:------------|:-----------------------------------------------------------------| | BERTScore | 93.23 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 23.25 | restaurants | 
[lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 15.35 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 8.41 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 5.54 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 22.46 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 63.57 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 24.77 | restaurants | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: restaurants - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: lmqg/bart-large-squad - max_length: 512 - max_length_output: 32 - epoch: 1 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-subjqa-restaurants-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Augustvember/WokkaBot3
[ "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_subjqa pipeline_tag: text2text-generation tags: - question generation widget: - text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/bart-large-subjqa-electronics-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: electronics args: electronics metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 5.18 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 28.87 - name: METEOR (Question Generation) type: meteor_question_generation value: 25.17 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 93.51 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 65.68 --- # Model Card of `lmqg/bart-large-subjqa-electronics-qg` This model is fine-tuned version of [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: electronics) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [lmqg/bart-large-squad](https://huggingface.co/lmqg/bart-large-squad) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (electronics) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/bart-large-subjqa-electronics-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-large-subjqa-electronics-qg") output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-large-subjqa-electronics-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.electronics.json) | | Score | Type | Dataset | |:-----------|--------:|:------------|:-----------------------------------------------------------------| | BERTScore | 93.51 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 28.11 | electronics | 
[lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 19.75 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 9.66 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 5.18 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 25.17 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 65.68 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 28.87 | electronics | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: electronics - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: lmqg/bart-large-squad - max_length: 512 - max_length_output: 32 - epoch: 4 - batch: 8 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-large-subjqa-electronics-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Augustvember/WokkaBotF
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: en datasets: - lmqg/qg_subjqa pipeline_tag: text2text-generation tags: - question generation widget: - text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 1" - text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records." example_title: "Question Generation Example 2" - text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ." example_title: "Question Generation Example 3" model-index: - name: lmqg/bart-base-subjqa-grocery-qg results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_subjqa type: grocery args: grocery metrics: - name: BLEU4 (Question Generation) type: bleu4_question_generation value: 1.82 - name: ROUGE-L (Question Generation) type: rouge_l_question_generation value: 24.54 - name: METEOR (Question Generation) type: meteor_question_generation value: 20.8 - name: BERTScore (Question Generation) type: bertscore_question_generation value: 94.09 - name: MoverScore (Question Generation) type: moverscore_question_generation value: 65.76 --- # Model Card of `lmqg/bart-base-subjqa-grocery-qg` This model is fine-tuned version of [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad) for question generation task on the [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (dataset_name: grocery) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [lmqg/bart-base-squad](https://huggingface.co/lmqg/bart-base-squad) - **Language:** en - **Training data:** [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) (grocery) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="en", model="lmqg/bart-base-subjqa-grocery-qg") # model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/bart-base-subjqa-grocery-qg") output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` ## Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/bart-base-subjqa-grocery-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_subjqa.grocery.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:-----------------------------------------------------------------| | BERTScore | 94.09 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_1 | 23.71 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_2 | 15.47 | grocery | 
[lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_3 | 4.58 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | Bleu_4 | 1.82 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | METEOR | 20.8 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | MoverScore | 65.76 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | | ROUGE_L | 24.54 | grocery | [lmqg/qg_subjqa](https://huggingface.co/datasets/lmqg/qg_subjqa) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_subjqa - dataset_name: grocery - input_types: ['paragraph_answer'] - output_types: ['question'] - prefix_types: None - model: lmqg/bart-base-squad - max_length: 512 - max_length_output: 32 - epoch: 2 - batch: 32 - lr: 5e-05 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 4 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/bart-base-subjqa-grocery-qg/raw/main/trainer_config.json). ## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```