modelId: string (length 4 to 81)
tags: list
pipeline_tag: string (17 classes)
config: dict
downloads: int64 (0 to 59.7M)
first_commit: timestamp[ns, tz=UTC]
card: string (length 51 to 438k)
Chungu424/repodata
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-08-30T00:56:01Z
--- license: mit tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: clinical-finetuned-data3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clinical-finetuned-data3 This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5058 - Accuracy: 0.86 - Precision: 0.875 - Recall: 0.9265 - F1: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
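The card above reports classification metrics for a Bio_ClinicalBERT fine-tune but gives no usage snippet; below is a minimal inference sketch with the 🤗 Transformers pipeline. The repo id is a placeholder (the card omits the namespace) and the label names depend on the fine-tuning data.
```python
# Minimal sketch: inference with a fine-tuned clinical sequence classifier.
# "your-namespace/clinical-finetuned-data3" is a placeholder repo id; the card
# does not state the actual hub path or the label set.
from transformers import pipeline

classifier = pipeline("text-classification", model="your-namespace/clinical-finetuned-data3")
result = classifier("Patient reports chest pain radiating to the left arm.")
print(result)  # e.g. [{"label": "LABEL_1", "score": 0.93}]
```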
Cinnamon/electra-small-japanese-discriminator
[ "pytorch", "electra", "pretraining", "ja", "transformers", "license:apache-2.0" ]
null
{ "architectures": [ "ElectraForPreTraining" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
419
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: clinical-finetunedNew results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clinical-finetunedNew This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.0423 - Accuracy: 0.84 - Precision: 0.8562 - Recall: 0.9191 - F1: 0.8865 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.0707 | 1.0 | 50 | 0.9997 | 0.86 | 0.86 | 0.9485 | 0.9021 | | 0.0593 | 2.0 | 100 | 0.9293 | 0.845 | 0.8777 | 0.8971 | 0.8873 | | 0.0273 | 3.0 | 150 | 0.9836 | 0.83 | 0.8643 | 0.8897 | 0.8768 | | 0.039 | 4.0 | 200 | 1.0028 | 0.85 | 0.8732 | 0.9118 | 0.8921 | | 0.0121 | 5.0 | 250 | 1.0423 | 0.84 | 0.8562 | 0.9191 | 0.8865 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
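The Cinnamon/electra-small-japanese-discriminator record above lists `ElectraForPreTraining` in its config; a rough sketch of scoring tokens with the discriminator head follows. It assumes the bundled Japanese tokenizer loads via `AutoTokenizer` (Japanese tokenizers often require extra dependencies such as MeCab/fugashi).
```python
# Sketch: token-level replaced-token detection with the ELECTRA discriminator.
# Assumes the bundled Japanese tokenizer loads via AutoTokenizer (it may need
# MeCab/fugashi installed).
import torch
from transformers import AutoTokenizer, ElectraForPreTraining

name = "Cinnamon/electra-small-japanese-discriminator"
tokenizer = AutoTokenizer.from_pretrained(name)
model = ElectraForPreTraining.from_pretrained(name)

inputs = tokenizer("走れメロス", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # positive logits suggest "replaced" tokens
print(logits)
```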
Ciruzzo/DialoGPT-small-hattypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Pong-PLE-v0 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-Pong results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Pong-PLE-v0 type: Pong-PLE-v0 metrics: - type: mean_reward value: -16.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **Pong-PLE-v0** This is a trained model of a **Reinforce** agent playing **Pong-PLE-v0** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
Clarianliz30/Caitlyn
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - conversational --- # Basil DialoGPT Model
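The conversational card above has no usage example; below is a sketch of the standard DialoGPT-style chat loop. It assumes the repo id listed for this record holds an ordinary GPT-2 causal-LM checkpoint with an EOS token.
```python
# Sketch of the usual DialoGPT-style chat loop (single turn shown).
# Assumes a standard GPT-2 causal-LM checkpoint with an EOS token.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "Ciruzzo/DialoGPT-small-hattypotter"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

user_input = "Hello, how are you?"
input_ids = tokenizer.encode(user_input + tokenizer.eos_token, return_tensors="pt")
reply_ids = model.generate(input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(reply_ids[:, input_ids.shape[-1]:][0], skip_special_tokens=True))
```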
ClaudeCOULOMBE/RickBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
Access to model assayw119/etners-nlp is restricted and you are not in the authorized list. Visit https://huggingface.co/assayw119/etners-nlp to ask for access.
CleveGreen/FieldClassifier_v2
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
46
null
--- widget: - text: "How can I protect myself against covid-19?" context: "Preventative measures consist of recommendations to wear a mask in public, maintain social distancing of at least six feet, wash hands regularly, and use hand sanitizer. To facilitate this aim, we adapt the conceptual model and measures of Liao et al. " - text: "What are the risk factors for covid-19?" context: "To identify risk factors for hospital deaths from COVID-19, the OpenSAFELY platform examined electronic health records from 17.4 million UK adults. The authors used multivariable Cox proportional hazards model to identify the association of risk of death with older age, lower socio-economic status, being male, non-white ethnic background and certain clinical conditions (diabetes, obesity, cancer, respiratory diseases, heart, kidney, liver, neurological and autoimmune conditions). Notably, asthma was identified as a risk factor, despite prior suggestion of a potential protective role. Interestingly, higher risks due to ethnicity or lower socio-economic status could not be completely attributed to pre-existing health conditions." ---
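The CleveGreen/FieldClassifier_v2 record above is tagged `text-classification` with a `BertForSequenceClassification` config; here is a brief sketch of running it without the pipeline helper, reading class probabilities from the logits. The example input is illustrative, and the label names are whatever the checkpoint's config defines.
```python
# Sketch: explicit tokenizer/model inference for the BERT field classifier.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "CleveGreen/FieldClassifier_v2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("Senior machine learning engineer with NLP experience", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
label_id = int(probs.argmax())
print(model.config.id2label[label_id], float(probs[label_id]))
```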
CleveGreen/FieldClassifier_v2_gpt
[ "pytorch", "gpt2", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "GPT2ForSequenceClassification" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
2022-08-30T02:28:33Z
--- language: - en tags: - stable-diffusion - text-to-image license: creativeml-openrail-m inference: true --- # waifu-diffusion v1.4 - Diffusion for Weebs waifu-diffusion is a latent text-to-image diffusion model that has been conditioned on high-quality anime images through fine-tuning. ![image](https://user-images.githubusercontent.com/26317155/210155933-db3a5f1a-1ec3-4777-915c-6deff2841ce9.png) <sub>masterpiece, best quality, 1girl, green hair, sweater, looking at viewer, upper body, beanie, outdoors, watercolor, night, turtleneck</sub> [Original Weights](https://huggingface.co/hakurei/waifu-diffusion-v1-4) # Gradio & Colab We also support a [Gradio](https://github.com/gradio-app/gradio) Web UI and Colab with Diffusers to run Waifu Diffusion: [![Open In Spaces](https://camo.githubusercontent.com/00380c35e60d6b04be65d3d94a58332be5cc93779f630bcdfc18ab9a3a7d3388/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f25463025394625413425393725323048756767696e67253230466163652d5370616365732d626c7565)](https://huggingface.co/spaces/hakurei/waifu-diffusion-demo) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1_8wPN7dJO746QXsFnB09Uq2VGgSRFuYE#scrollTo=1HaCauSq546O) ## Model Description [See here for a full model overview.](https://gist.github.com/harubaru/f727cedacae336d1f7877c4bbe2196e1) ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) ## Downstream Uses This model can be used for entertainment purposes and as a generative art assistant. ## Example Code ```python import torch from torch import autocast from diffusers import StableDiffusionPipeline pipe = StableDiffusionPipeline.from_pretrained( 'hakurei/waifu-diffusion', torch_dtype=torch.float32 ).to('cuda') prompt = "1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt" with autocast("cuda"): image = pipe(prompt, guidance_scale=6)["sample"][0] image.save("test.png") ``` ## Team Members and Acknowledgements This project would not have been possible without the incredible work by Stability AI and Novel AI. - [Haru](https://github.com/harubaru) - [Salt](https://github.com/sALTaccount/) - [Sta @ Bit192](https://twitter.com/naclbbr) In order to reach us, you can join our [Discord server](https://discord.gg/touhouai). [![Discord Server](https://discordapp.com/api/guilds/930499730843250783/widget.png?style=banner2)](https://discord.gg/touhouai)
CleveGreen/JobClassifier
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
2022-08-30T02:32:05Z
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="freeagh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
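The usage block in the card above references `load_from_hub`, `gym`, and `evaluate_agent` without defining or importing them; the sketch below fills in the surrounding setup as assumed from the Deep RL course's custom Q-learning implementation (`evaluate_agent` itself is course code and is not reproduced here).
```python
# Rough sketch of the setup the usage block above assumes (Deep RL class,
# custom Q-learning implementation). load_from_hub here is a guessed helper
# that downloads the pickled Q-table dictionary saved by the course.
import pickle
import gym
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)

model = load_from_hub(repo_id="freeagh/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
env = gym.make(model["env_id"], is_slippery=False)  # non-slippery variant, per the card
```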
CleveGreen/JobClassifier_v2
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.52 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="freeagh/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
CoShin/XLM-roberta-large_ko_en_nil_sts
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_trainer datasets: - squad_bn metrics: - sacrebleu model-index: - name: squad-bn-qgen-banglat5 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: squad_bn type: squad_bn args: squad_bn metrics: - name: Sacrebleu type: sacrebleu value: 8.0898 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad-bn-qgen-banglat5 This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on the squad_bn dataset. It achieves the following results on the evaluation set: - Loss: 0.4808 - Rouge1 Precision: 37.7366 - Rouge1 Recall: 34.2712 - Rouge1 Fmeasure: 34.8738 - Rouge2 Precision: 16.2055 - Rouge2 Recall: 14.568 - Rouge2 Fmeasure: 14.852 - Rougel Precision: 35.4241 - Rougel Recall: 32.2011 - Rougel Fmeasure: 32.7617 - Rougelsum Precision: 35.4167 - Rougelsum Recall: 32.1978 - Rougelsum Fmeasure: 32.7572 - Sacrebleu: 8.0898 - Meteor: 0.1782 - Gen Len: 9.8299 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | Sacrebleu | Meteor | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|:---------:|:------:|:-------:| | 0.5208 | 1.0 | 16396 | 0.4683 | 38.566 | 35.5094 | 35.9216 | 17.0701 | 15.3916 | 15.6829 | 36.4433 | 33.5298 | 33.958 | 36.4637 | 33.5496 | 33.9913 | 8.6055 | 0.1799 | 9.8340 | | 0.479 | 2.0 | 32792 | 0.4815 | 40.7475 | 35.8163 | 37.0498 | 17.9002 | 15.2742 | 15.9601 | 38.6977 | 33.8607 | 35.1258 | 38.7261 | 33.8717 | 35.1537 | 9.0561 | 0.1835 | 9.4338 | | 0.4577 | 3.0 | 49188 | 0.4879 | 40.6712 | 36.2763 | 37.2775 | 18.5942 | 16.0689 | 16.7206 | 38.8546 | 34.5013 | 35.5491 | 38.8633 | 34.5255 | 35.5682 | 9.7947 | 0.1879 | 9.6324 | | 0.4389 | 4.0 | 65584 | 0.4881 | 41.4251 | 36.2873 | 37.6272 | 18.561 | 15.7067 | 16.5358 | 39.434 | 34.3496 | 35.7457 | 39.533 | 34.4702 | 35.8347 | 9.7612 | 0.1881 | 9.3944 | | 0.4321 | 5.0 | 81980 | 0.4937 | 41.1197 | 36.0568 | 37.4121 | 18.7179 | 15.8348 | 16.6644 | 39.3386 | 34.3177 | 35.7088 | 39.3171 | 34.3015 | 35.6748 | 9.8263 | 0.1887 | 9.4040 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
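The card above describes a BanglaT5 sequence-to-sequence fine-tune for Bengali question generation but shows no inference code; a hedged sketch with the text2text pipeline follows. The repo id is a placeholder (the card omits the namespace), and the exact input format expected by the checkpoint is an assumption.
```python
# Hedged sketch: text2text inference with a BanglaT5 question-generation
# fine-tune. Repo id and input formatting are assumptions, not from the card.
from transformers import pipeline

qgen = pipeline("text2text-generation", model="your-namespace/squad-bn-qgen-banglat5")
context = "ঢাকা বাংলাদেশের রাজধানী।"  # "Dhaka is the capital of Bangladesh."
print(qgen(context, max_length=32)[0]["generated_text"])
```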
CodeMonkey98/distilroberta-base-finetuned-wikitext2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: tfranklin/bert-a-saurus results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # tfranklin/bert-a-saurus This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0003 - Validation Loss: 0.0004 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1202, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.2424 | 0.0004 | 0 | | 0.0004 | 0.0004 | 1 | | 0.0003 | 0.0004 | 2 | ### Framework versions - Transformers 4.22.0.dev0 - TensorFlow 2.9.2 - Datasets 2.4.0 - Tokenizers 0.12.1
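The record id above (distilroberta-base-finetuned-wikitext2) follows the usual masked-LM fine-tuning naming; assuming the repository actually contains a standard RoBERTa-style fill-mask checkpoint (its tags and config fields here are empty), a minimal sketch:
```python
# Sketch: masked-token prediction, assuming a standard RoBERTa-style MLM head.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="CodeMonkey98/distilroberta-base-finetuned-wikitext2")
for pred in unmasker("The quick brown <mask> jumps over the lazy dog."):
    print(pred["token_str"], round(pred["score"], 3))
```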
CoffeeAddict93/gpt2-medium-call-of-the-wild
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.8871031746031746 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6871657754010695 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6913946587537092 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.8148971650917176 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.958 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6359649122807017 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6458333333333334 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9153231881874341 - name: F1 (macro) type: f1_macro value: 0.909786964934943 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8577464788732394 - name: F1 (macro) type: f1_macro value: 0.6952254602767576 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6847237269772481 - name: F1 (macro) type: f1_macro value: 0.6742659270266346 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9634137859080476 - name: F1 (macro) type: f1_macro value: 0.8926357349234371 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9106863052334692 - name: F1 (macro) type: f1_macro value: 0.9093125585829993 --- # relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob/raw/main/analogy.json)): - Accuracy on SAT (full): 0.6871657754010695 - Accuracy on SAT: 0.6913946587537092 - Accuracy on BATS: 0.8148971650917176 - Accuracy on U2: 0.6359649122807017 - Accuracy on U4: 0.6458333333333334 - Accuracy on Google: 0.958 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9153231881874341 - Micro F1 score on CogALexV: 0.8577464788732394 - Micro F1 score on EVALution: 0.6847237269772481 - Micro F1 score on K&H+N: 0.9634137859080476 - Micro F1 score on ROOT09: 0.9106863052334692 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.8871031746031746 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: average_no_mask - data: relbert/semeval2012_relational_similarity - template_mode: manual - template: I wasn’t aware of this relationship, but I just read in the encyclopedia that <subj> is the <mask> of <obj> - loss_function: info_loob - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 21 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-average-no-mask-prompt-d-loob/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
CoffeeAddict93/gpt2-modest-proposal
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: nllb-200-distilled-600M-finetuned-pan_Guru-to-eng_Latn results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb-200-distilled-600M-finetuned-pan_Guru-to-eng_Latn This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8728 - Bleu: 42.5453 - Gen Len: 32.376 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 6 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | 1.5153 | 0.72 | 500 | 1.0531 | 34.9696 | 32.548 | | 1.1282 | 1.45 | 1000 | 0.9580 | 38.3648 | 31.832 | | 1.0299 | 2.18 | 1500 | 0.9235 | 40.1212 | 31.964 | | 0.942 | 2.9 | 2000 | 0.8963 | 41.2737 | 31.884 | | 0.8869 | 3.63 | 2500 | 0.8847 | 41.4381 | 31.82 | | 0.8553 | 4.35 | 3000 | 0.8780 | 42.1548 | 32.136 | | 0.8306 | 5.08 | 3500 | 0.8733 | 42.3333 | 32.64 | | 0.8063 | 5.8 | 4000 | 0.8728 | 42.5453 | 32.376 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
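The NLLB fine-tune above reports BLEU but no inference snippet; the sketch below follows the usual NLLB recipe (source language set on the tokenizer, target language forced as the first generated token). The repo id is a placeholder since the card omits the namespace.
```python
# Sketch: Punjabi (Gurmukhi) -> English inference with an NLLB-200 fine-tune.
# Repo id is a placeholder; language codes follow the NLLB convention.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "your-namespace/nllb-200-distilled-600M-finetuned-pan_Guru-to-eng_Latn"
tokenizer = AutoTokenizer.from_pretrained(name, src_lang="pan_Guru")
model = AutoModelForSeq2SeqLM.from_pretrained(name)

inputs = tokenizer("ਸਤ ਸ੍ਰੀ ਅਕਾਲ, ਤੁਸੀਂ ਕਿਵੇਂ ਹੋ?", return_tensors="pt")
out = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("eng_Latn"),
    max_length=64,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```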
CogComp/bart-faithful-summary-detector
[ "pytorch", "jax", "bart", "text-classification", "en", "dataset:xsum", "transformers", "xsum", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BartForSequenceClassification" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": 1, "max_length": 128, "min_length": 12, "no_repeat_ngram_size": null, "num_beams": 4, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
234
null
--- license: apache-2.0 tags: - translation - generated_from_trainer datasets: - kde4 metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-ja results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: kde4 type: kde4 config: en-ja split: train args: en-ja metrics: - name: Bleu type: bleu value: 37.10979592471087 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-ja This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on the kde4 dataset. It achieves the following results on the evaluation set: - Loss: 0.9825 - Bleu: 37.1098 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
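The CogComp/bart-faithful-summary-detector record above is a BART sequence classifier; the sketch below feeds it a (summary, article) pair, the natural input for a faithfulness detector. The pairing order and label semantics are assumptions here, not taken from this card.
```python
# Sketch: scoring a summary against its source article with the BART
# sequence classifier. Input order and label semantics are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "CogComp/bart-faithful-summary-detector"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

article = "The city council approved the new budget on Tuesday after a long debate."
summary = "The council rejected the budget."
inputs = tokenizer(summary, article, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```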
Contrastive-Tension/BERT-Base-NLI-CT
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- languages: - la - grc - he --- This model builds upon [an existing language detection model](https://huggingface.co/papluca/xlm-roberta-base-language-detection). It uses the same dataset, extended with Latin, Ancient Greek and (modern) Hebrew texts.
Contrastive-Tension/BERT-Base-Swe-CT-STSb
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
126
2022-08-30T09:47:44Z
--- tags: - generated_from_trainer metrics: - bleu model-index: - name: lg-en-test-version results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lg-en-test-version This model is a fine-tuned version of [AI-Lab-Makerere/lg_en](https://huggingface.co/AI-Lab-Makerere/lg_en) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5803 - Bleu: 31.3111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.687717341785184e-05 - train_batch_size: 15 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | |:-------------:|:-----:|:----:|:---------------:|:-------:| | No log | 1.0 | 24 | 1.0100 | 28.5722 | | No log | 2.0 | 48 | 0.7758 | 27.7506 | | No log | 3.0 | 72 | 0.6459 | 40.3866 | | No log | 4.0 | 96 | 0.5803 | 31.3111 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Contrastive-Tension/BERT-Distil-CT
[ "pytorch", "tf", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - conll2003 metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: - task: name: Token Classification type: token-classification dataset: name: conll2003 type: conll2003 config: conll2003 split: train args: conll2003 metrics: - name: Precision type: precision value: 0.9512644448166137 - name: Recall type: recall value: 0.9559071019858634 - name: F1 type: f1 value: 0.9535801225551919 - name: Accuracy type: accuracy value: 0.9921732019781161 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0399 - Precision: 0.9513 - Recall: 0.9559 - F1: 0.9536 - Accuracy: 0.9922 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0548 | 1.0 | 1756 | 0.0438 | 0.9368 | 0.9411 | 0.9390 | 0.9900 | | 0.021 | 2.0 | 3512 | 0.0395 | 0.9446 | 0.9519 | 0.9482 | 0.9914 | | 0.0108 | 3.0 | 5268 | 0.0399 | 0.9513 | 0.9559 | 0.9536 | 0.9922 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
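The NER card above reports token-classification metrics on CoNLL-2003 but includes no usage code; a minimal sketch with the token-classification pipeline follows. The repo id is a placeholder, since the card omits the namespace.
```python
# Sketch: named-entity recognition with a BERT CoNLL-2003 fine-tune.
# "your-namespace/bert-finetuned-ner" is a placeholder repo id.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="your-namespace/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)
print(ner("Hugging Face is based in New York City."))
```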
Craak/GJ0001
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1745 - F1: 0.8505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3055 | 1.0 | 835 | 0.1842 | 0.8099 | | 0.1561 | 2.0 | 1670 | 0.1711 | 0.8452 | | 0.1016 | 3.0 | 2505 | 0.1745 | 0.8505 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
Craftified/Bob
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- widget: - text: "define the method i with an argument self." - text: "substitute asvar for self.asvar." - text: "convert host to lowercase." - text: "for every var in self.vars," - text: "call the method parser.delete_first_token." --- [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/mariancg-a-code-generation-transformer-model/code-generation-on-django)](https://paperswithcode.com/sota/code-generation-on-django?p=mariancg-a-code-generation-transformer-model) # MarianCG: a code generation transformer model inspired by machine translation MarianCG is a transformer model that generates code from natural language descriptions. This work shows that the Marian machine translation model can be repurposed for the code generation problem, i.e. that a machine translation model can operate as an accurate code generation model. MarianCG sets a new state of the art on the CoNaLa code generation benchmark, reaching a BLEU score of 30.92 and an exact match accuracy of 6.2. The MarianCG model, its implementation, the training code, and the generated outputs are available at this repository: https://github.com/AhmedSSoliman/MarianCG-NL-to-Code The DJANGO dataset is available at https://huggingface.co/datasets/AhmedSSoliman/DJANGO This model is available on the Hugging Face hub at https://huggingface.co/AhmedSSoliman/MarianCG-DJANGO ```python # Model and Tokenizer from transformers import AutoTokenizer, AutoModelForSeq2SeqLM # model_name = "AhmedSSoliman/MarianCG-NL-to-Code" model = AutoModelForSeq2SeqLM.from_pretrained("AhmedSSoliman/MarianCG-DJANGO") tokenizer = AutoTokenizer.from_pretrained("AhmedSSoliman/MarianCG-DJANGO") # Input (Natural Language) and Output (Python Code) NL_input = "define the method i with an argument self." output = model.generate(**tokenizer(NL_input, padding="max_length", truncation=True, max_length=512, return_tensors="pt")) output_code = tokenizer.decode(output[0], skip_special_tokens=True) ``` This model is available in Spaces using Gradio at: https://huggingface.co/spaces/AhmedSSoliman/MarianCG-DJANGO --- Tasks: - Translation - Code Generation - Text2Text Generation - Text Generation --- # Citation We now have a [paper](https://doi.org/10.1186/s44147-022-00159-4) for this work and you can cite: ``` @article{soliman2022mariancg, title={MarianCG: a code generation transformer model inspired by machine translation}, author={Soliman, Ahmed S and Hadhoud, Mayada M and Shaheen, Samir I}, journal={Journal of Engineering and Applied Science}, volume={69}, number={1}, pages={1--23}, year={2022}, publisher={SpringerOpen}, url={https://doi.org/10.1186/s44147-022-00159-4} } ```
Crispy/dialopt-small-kratos
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1619.40 +/- 156.98 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of an **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
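The usage block in the card above is left as a TODO; a rough completion with stable-baselines3 and huggingface_sb3 is sketched below. The repo id and filename are placeholders (the course convention is `<algo>-<env>.zip`), and evaluation-time normalization (VecNormalize statistics) is skipped for brevity.
```python
# Rough sketch: load an A2C checkpoint from the Hub and roll it out.
# Repo id / filename are placeholders; VecNormalize stats (if any) are skipped.
import gym
import pybullet_envs  # noqa: F401  (registers AntBulletEnv-v0)
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

checkpoint = load_from_hub(repo_id="your-username/a2c-AntBulletEnv-v0",
                           filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```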
Crumped/imdb-simpleRNN
[ "keras" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/VioletaMG/ddpm-butterflies-128/tensorboard?#scalars)
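The diffusers card above leaves its example snippet as a TODO; a minimal sketch of unconditional sampling is shown below, using the repo id that appears in the card's TensorBoard link. The pipeline class is assumed to be `DDPMPipeline`, as is typical for this training script.
```python
# Sketch: unconditional image sampling from a DDPM checkpoint.
# Repo id taken from the card's TensorBoard link; DDPMPipeline is assumed.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("VioletaMG/ddpm-butterflies-128")
image = pipeline(num_inference_steps=1000).images[0]  # a PIL.Image
image.save("butterfly.png")
```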
Cryptikdw/DialoGPT-small-rick
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Electra-base-squad-adversarialqa-epoch-1 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Electra-base-squad-adversarialqa-epoch-1 This model is a fine-tuned version of [google/electra-base-discriminator](https://huggingface.co/google/electra-base-discriminator) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.4884 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 43062, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1104, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Epoch | |:----------:|:-----:| | 1.4884 | 0 | ### Framework versions - Transformers 4.21.2 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
DKpro000/DialoGPT-small-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec2-base-timit-demo-google-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-timit-demo-google-colab This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5079 - Wer: 0.3365 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 3.4933 | 1.0 | 500 | 1.7711 | 0.9978 | | 0.8658 | 2.01 | 1000 | 0.6262 | 0.5295 | | 0.4405 | 3.01 | 1500 | 0.4841 | 0.4845 | | 0.3062 | 4.02 | 2000 | 0.4897 | 0.4215 | | 0.233 | 5.02 | 2500 | 0.4326 | 0.4101 | | 0.1896 | 6.02 | 3000 | 0.4924 | 0.4078 | | 0.1589 | 7.03 | 3500 | 0.4430 | 0.3896 | | 0.1391 | 8.03 | 4000 | 0.4334 | 0.3889 | | 0.1216 | 9.04 | 4500 | 0.4691 | 0.3828 | | 0.1063 | 10.04 | 5000 | 0.4726 | 0.3705 | | 0.0992 | 11.04 | 5500 | 0.4333 | 0.3690 | | 0.0872 | 12.05 | 6000 | 0.4986 | 0.3771 | | 0.0829 | 13.05 | 6500 | 0.4903 | 0.3685 | | 0.0713 | 14.06 | 7000 | 0.5293 | 0.3655 | | 0.068 | 15.06 | 7500 | 0.5039 | 0.3612 | | 0.0621 | 16.06 | 8000 | 0.5314 | 0.3665 | | 0.0571 | 17.07 | 8500 | 0.5038 | 0.3572 | | 0.0585 | 18.07 | 9000 | 0.4718 | 0.3550 | | 0.0487 | 19.08 | 9500 | 0.5482 | 0.3626 | | 0.0459 | 20.08 | 10000 | 0.5239 | 0.3545 | | 0.0419 | 21.08 | 10500 | 0.5096 | 0.3473 | | 0.0362 | 22.09 | 11000 | 0.5222 | 0.3500 | | 0.0331 | 23.09 | 11500 | 0.5062 | 0.3489 | | 0.0352 | 24.1 | 12000 | 0.4913 | 0.3459 | | 0.0315 | 25.1 | 12500 | 0.4701 | 0.3412 | | 0.028 | 26.1 | 13000 | 0.5178 | 0.3402 | | 0.0255 | 27.11 | 13500 | 0.5168 | 0.3405 | | 0.0228 | 28.11 | 14000 | 0.5154 | 0.3368 | | 0.0232 | 29.12 | 14500 | 0.5079 | 0.3365 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.1+cu113 - Datasets 1.18.3 - Tokenizers 0.12.1
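The wav2vec2 card above reports WER but no inference example; a minimal speech-recognition sketch follows. The repo id is a placeholder (the card omits the namespace) and the audio path is illustrative.
```python
# Sketch: speech-to-text with a fine-tuned wav2vec2 CTC checkpoint.
# Repo id and audio path are placeholders.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="your-namespace/wav2vec2-base-timit-demo-google-colab")
print(asr("sample.wav")["text"])  # expects 16 kHz mono audio
```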
DTAI-KULeuven/robbertje-1-gb-bort
[ "pytorch", "roberta", "fill-mask", "nl", "dataset:oscar", "dataset:oscar (NL)", "dataset:dbrd", "dataset:lassy-ud", "dataset:europarl-mono", "dataset:conll2002", "arxiv:2101.05716", "transformers", "Dutch", "Flemish", "RoBERTa", "RobBERT", "RobBERTje", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/nawage/ddpm-butterflies-128/tensorboard?#scalars)
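The "How to use" section above is still a TODO. A minimal sketch of the usual unconditional-sampling call is shown below; it assumes the repository linked in the TensorBoard logs (`nawage/ddpm-butterflies-128`) contains a standard `DDPMPipeline` checkpoint saved by the Diffusers training script.

```python
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("nawage/ddpm-butterflies-128")

image = pipeline().images[0]   # draw one 128x128 butterfly sample
image.save("butterfly.png")
```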
alexandrainst/da-hatespeech-classification-base
[ "pytorch", "tf", "safetensors", "bert", "text-classification", "da", "transformers", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
866
2022-08-30T20:51:56Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: t5-base-mse-summarization results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-base-mse-summarization This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8743 - Rouge1: 45.9597 - Rouge2: 26.8086 - Rougel: 39.935 - Rougelsum: 43.8897 - Bleurt: -0.7132 - Gen Len: 18.464 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Bleurt | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|:-------:| | 1.2568 | 1.0 | 267 | 1.0472 | 41.6829 | 21.9654 | 35.4264 | 39.5556 | -0.8231 | 18.522 | | 1.1085 | 2.0 | 534 | 0.9840 | 43.1479 | 23.3351 | 36.9244 | 40.886 | -0.7843 | 18.534 | | 1.0548 | 3.0 | 801 | 0.9515 | 44.1511 | 24.4912 | 37.9549 | 41.9984 | -0.7702 | 18.528 | | 1.0251 | 4.0 | 1068 | 0.9331 | 44.426 | 24.9439 | 38.2978 | 42.1731 | -0.7633 | 18.619 | | 0.9888 | 5.0 | 1335 | 0.9201 | 45.0385 | 25.524 | 38.8681 | 42.8998 | -0.7497 | 18.523 | | 0.9623 | 6.0 | 1602 | 0.9119 | 44.8648 | 25.469 | 38.9281 | 42.7798 | -0.7496 | 18.537 | | 0.9502 | 7.0 | 1869 | 0.9015 | 44.9668 | 25.5041 | 38.9463 | 42.9368 | -0.7412 | 18.48 | | 0.9316 | 8.0 | 2136 | 0.8973 | 45.3028 | 25.7232 | 39.1533 | 43.277 | -0.7318 | 18.523 | | 0.9191 | 9.0 | 2403 | 0.8921 | 45.2901 | 25.916 | 39.2909 | 43.3022 | -0.7296 | 18.529 | | 0.9122 | 10.0 | 2670 | 0.8889 | 45.3535 | 26.1369 | 39.4861 | 43.28 | -0.7271 | 18.545 | | 0.8993 | 11.0 | 2937 | 0.8857 | 45.5345 | 26.1669 | 39.5656 | 43.4664 | -0.7269 | 18.474 | | 0.8905 | 12.0 | 3204 | 0.8816 | 45.7796 | 26.4145 | 39.8117 | 43.734 | -0.7185 | 18.503 | | 0.8821 | 13.0 | 3471 | 0.8794 | 45.7163 | 26.4314 | 39.719 | 43.6407 | -0.7211 | 18.496 | | 0.8789 | 14.0 | 3738 | 0.8784 | 45.9097 | 26.7281 | 39.9071 | 43.8105 | -0.7127 | 18.452 | | 0.8665 | 15.0 | 4005 | 0.8765 | 46.1148 | 26.8882 | 40.1006 | 43.988 | -0.711 | 18.443 | | 0.8676 | 16.0 | 4272 | 0.8766 | 45.9119 | 26.7674 | 39.9001 | 43.8237 | -0.718 | 18.491 | | 0.8637 | 17.0 | 4539 | 0.8758 | 45.9158 | 26.7153 | 39.9463 | 43.8323 | -0.7183 | 18.492 | | 0.8622 | 18.0 | 4806 | 0.8752 | 45.9508 | 26.75 | 39.9533 | 43.8795 | -0.7144 | 18.465 | | 0.8588 | 19.0 | 5073 | 0.8744 | 45.9192 | 26.7352 | 39.8921 | 43.8204 | -0.7148 | 18.462 | | 0.8554 | 20.0 | 5340 | 0.8743 | 45.9597 | 26.8086 | 39.935 | 43.8897 | -0.7132 | 18.464 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
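A minimal usage sketch for the summarization checkpoint above. The namespace is not stated in the card, so the repo id is a placeholder, and whether a `summarize:` prefix is required depends on how the fine-tune saved its config.

```python
from transformers import pipeline

# Hypothetical repo id -- the card gives only the model name.
summarizer = pipeline("summarization", model="<namespace>/t5-base-mse-summarization")

post = ("I have a long forum post asking how to compute the variance of a weighted "
        "sample and whether Bessel's correction still applies ...")
print(summarizer(post, max_length=64, min_length=10, do_sample=False)[0]["summary_text"])
```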
DaisyMak/bert-finetuned-squad-transformerfrozen-testtoken
[ "pytorch", "tensorboard", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: - apache-2.0 - bsd-3-clause tags: - summarization - summary - booksum - long-document - long-form datasets: - kmfoda/booksum metrics: - rouge inference: false model-index: - name: pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13 results: - task: type: summarization name: Summarization dataset: name: samsum type: samsum config: samsum split: test metrics: - type: rouge value: 24.4101 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjhmM2NiMDQ1NjI3Zjk4YjkyMTVkMmUwZDU2YWMwZjc4ZmIzMjA1OGZiYzRmNjI3NDk3OWNmOTlkZDMxZmViMyIsInZlcnNpb24iOjF9.wS774e7vxQrf2gCcPhySsET3UaiUsj8E7mQmBS84wz86aT9j1yCqVX-8ozuj896K5wMygbL-TpUbydRIyyHTDw - type: rouge value: 5.003 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTYyNTFkNWZhOTgwNDg5ZWU5Y2U5NGI4Y2Y2YTMxNjUzOWI0ZWNlNDE1OGYzMjA1YTBmNDE4ZjcyOTZmODE4NiIsInZlcnNpb24iOjF9.AuqDkCgUgDWl8vMyrjTh59QW741UssGxdBqj3GZKy5e5gKadClUA709qgKbpxPIbMEyk38yvXYGplaJf5CnCCA - type: rouge value: 17.2544 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTBmODZmNWRhMzBhY2MzOGRkZWQzNjAzMGViOGMxYWYyZjNlZmM4YzgzMjkxNTk3M2E1ODAwZjY1M2I2MDZkYyIsInZlcnNpb24iOjF9.Md52aHjujvkxaW-ubJNquiHHHgi-OfRav0ZElVvYhIpU_k0iKEaQZRcw9JYjtG5vZJbQeiWbMzcCOJ999DhrAA - type: rouge value: 20.9183 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDJjNDc1OTZjY2VmNWRhNmYwZjRjY2JmNTAyNmIwZjRhYjMyMTdlNzY2M2Q4OGQwNTEyYTU0NGVhYWI2ZTk3NSIsInZlcnNpb24iOjF9.nlqol0HEeEjU7509-B9eyohf3CP3EZTibJ1lTvOx3wt8rU5LzEdwFazOTHjpWlcK_rik7jcySdUDe4fGjJtKAQ - type: loss value: 3.194674015045166 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzRiYmRiYjFkZDhlNGIwYTg3NDUwZTEzZjc5MjllNmJmODQ1YzBjNDM4MzQwNmMzNmNkMzk5N2M2MzZlOWY4MyIsInZlcnNpb24iOjF9._YJqPY9p_N2n7UxAkTeGenH1sVAkC_Z5HzZ6NbzlQoa8-RXTfbEPLw7fSKmlsGNyZxj7L_Bs4COIWzwAMxZSAA - type: gen_len value: 58.9951 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDJhYzU2Zjg4ZmIyOGRmNTU4MDM2NGZiNzc0NDk3YzZkOTQwMWMwNjMzZDQzZTZiZjk4ZDdmMmI2ODRkYjk3OCIsInZlcnNpb24iOjF9.MG1rcM_qpUhQmAYrsBxyNpcLUrPZw6V_uzYzDAo01kQyZEwJClWgMRVgpsSEnY93Mlu1445QLxkJEByUrfD3BQ - task: type: summarization name: Summarization dataset: name: billsum type: billsum config: default split: test metrics: - type: rouge value: 37.3648 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWU4ZmZmYzllMzQxM2I4YTUxMjkwYjEzNDk1NjRlYjJiZjYyYWNiNzM4ODMxMGJjMzdhYjFhMzhlNTE5YmYyMiIsInZlcnNpb24iOjF9.9NTlO_5zLC8Y3mkwstviPb9WmMqPmXfWfEN0yONA6WYhh1jPy0gECEb5uF0G6wBMhTPDTqGMWOYIAF2vMeNbDA - type: rouge value: 12.3316 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTJhZTcxMDc5ODljMDBjYzFmYWIzNTA4M2NiZDUwYTMwNTVjZTUyZTU2M2IwYWE2YjkzMzMzMjg1MDU1OWE1NSIsInZlcnNpb24iOjF9.FRsoRao8qj6A8W7OeIVAoZCEc1HCZEzmKOs0CPkUceF19pk1ngaXt5K6kcPJ-5fYJydtfSuSnuG3aqlOEJeYDQ - type: rouge value: 22.075 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2FjNTMxMGYyNjgyNjk2YTQwZjM4MTM4Yjg0MTkyN2RmNDE5YTU5ZDNkZDFhZDM2YWRlNDI4M2JlMWYxNDQ3ZCIsInZlcnNpb24iOjF9.wsLUEYGJyMSJPPclOzb1hcRdE-VrZex2Sd5er_XVbe6bY1cRO5DdIn69sE9hmAcltefu4ikpHu2ihbv7qvj4Aw - type: rouge value: 31.1679 name: ROUGE-LSUM verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTUyODVkZGIxYzMyZDczNzU5YjVkNTliZmM4ZTdiYWE2ZjJhNGM3ZDgzMWE3ZjA2MDBhZWQ1ZGY1YzNmZDMwNiIsInZlcnNpb24iOjF9.fPgMnnXY5oPdCn1STZ0HwUiil8OlLZ8ZWZZav_chDIQ7Kh1RKeLy0EG2vEhrB6IlyP7uZ3RmdT9VHM1_khrEAw - type: loss value: 2.745267391204834 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ2NDVmODI2ZTQyNmVjZjRkZDdlMTdiODBkZTlkNTFkODBjNjViMTZhMDVkYTkwYWIyNDFkZWZhZmJhODEwMyIsInZlcnNpb24iOjF9.9JWTqdGEhztS--N8grHY6q2a8taVu65Lr17ocXgudp4imhqr9Bhau2X2G5SLN7c1oYieKtyKcWdDAmVzHyTbDw - type: gen_len value: 157.3126 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYWNiODFmMWQ1ZTkzZGNjNDkwM2ZiZjJlZmQ3N2ExNWJhYmUxYWM2ZGNiYzlhYTY5Y2RhOGVlZDhmN2ZmODQwYSIsInZlcnNpb24iOjF9.sRA9iBS4vzFDZtwM4Vs6Kevj3eiTkS5akApUWTZBCt58YSW8mpoKqsWcnQFEjDCCec-FfV_451OLIetcmDZiCA - task: type: summarization name: Summarization dataset: name: xsum type: xsum config: default split: test metrics: - type: rouge value: 18.2975 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjJhMjQ0Yzc4ZmNkOWI5YjhmOTlmOTA4MTE0NWM4NGRlNjE0NDIwOTY2ZmQyNjA0ZmE5MjM2YjAyZDZiNWFkNiIsInZlcnNpb24iOjF9.2UJ48OcezjnfMC0dGjksZpAiXRGNAOHniHdN-tQmQPo0vXwRYNTyPrVULnVoBZUvSdycTYvjl0jDKNhZmtGfCA - type: rouge value: 2.6806 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNTlkMmQwZTRmN2JlOTQ0N2I0YjdhOTBmYmU3MzEwNzE2ZjFiOTM4OWMyMWRhNmZjNTBkZWY5OGMwYTZhZDRhYSIsInZlcnNpb24iOjF9.7D-IR1aBxx1goOkbeA3Tzd1Wu0Zfi0yQVSG8HWSboM7J67TBHblFsFCVJE7Z2wZRbBW4WtuDIGAcl1d1_Wu_Aw - type: rouge value: 11.9453 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGZjNmY5NmU5ODBmMDQyMjhhNzY3NzBlNDEyMTE3NjY1ZmRkZDZkZWI1YTA0ZTA0NzU1MjMzOTNjZDA3YWM1MCIsInZlcnNpb24iOjF9.SlI42pwrWc_OlcBKOPtrYNzvK_DUk6IJlzrrtjvkZX7k1S7bguekAV-_rWHfn_82k8rJ1FQAReasGHu1dZ0aBw - type: rouge value: 14.2121 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2E2MGE0MTQ1YmU0MTJkOWY3ZDhhODIwYWNhNTE3YWJkZTFhYzM1ZjBmNGExODIzYmU2YzE1ODg4ZjdhZWMwMiIsInZlcnNpb24iOjF9.K5FEsZtSph0FqF5zwetkE-X5AKOlj5g_02DPdl-kEe1azKrBBZy9sDiS0WfIGfwHiRdNvKGKi8t3PAGPsfQwCQ - type: loss value: 4.836681365966797 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzhlYjA0YzZmYjdmYWQwNDFhNzIzYWNkYzM4OGFlOWJiY2EzYTkxYjk3ZmJmNGQyMGE1ZmYzMDU2MzhhMmVkMiIsInZlcnNpb24iOjF9.uHYwqPBg6K63exBvqt__c82gKi52OhPTRSrcIKHOECCmoXJLJKgFJCuIXGWMJ7UP4HG375e9uqunJB0XwC20DA - type: gen_len value: 96.2584 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjNjYzQzNmM5NTY2YzVhNzRkZjMxMzhiYTU1MDBiOGZkYjA4YTg0MmQzYzQ3YTk3N2YwMDA5MWNlM2Y4YTFmZiIsInZlcnNpb24iOjF9.dirG9kG6OdNi-YEMWHv0UMrHTjEt6VS9i6fRbbUeZd1OoP2fl6XcKoDIk6Us-cdiyVnCyyhWsMNsUufMAqLtDA - task: type: summarization name: Summarization dataset: name: launch/gov_report type: launch/gov_report config: plain_text split: test metrics: - type: rouge value: 37.3609 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMGExYjM5ODRiNThlZTU4ZTdhM2ZlZWRlNTgzNzc3N2ZjODk2ZjdlOGZlMDkzNmU2Yjk1NzQzZjQ5YzkwODllMCIsInZlcnNpb24iOjF9.JQIeaQkG-IlinWoyc6FKJZUgpWfqOsDhludqm5MgVsw68gsjo0nSPp_Y_1q26Y4dulZOLlQLyBAm3mlCA8s5Ag - type: rouge value: 8.6943 name: ROUGE-2 verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWFjNzJkMzViOGM5YWQ0OGQ4ZTg3NTE5MzU1MjZkZjZiZmVkYTk0ZDhkYjAxMjZiZDVkZTYyYjk4MzRjNTQ3YiIsInZlcnNpb24iOjF9.9XJZ2UF6XyZNNrtp-XOEXC6etoDOFLq1xlIoMFEM9Jinisq3kWguXBiqPQWImLKra5WBm7jU_QIX-Fvn8sP-DA - type: rouge value: 17.9106 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ1MTdmNThiM2FiOGRmZWRmOTNlYWMwYTU1YjRiNTRlMGEwYjBmMmQ0YjQ4MDBhNzMzZmZkNjk3NjU0YzRhMSIsInZlcnNpb24iOjF9.040nGV6pig0Rzq9vkN83ZVWQzyjcVi13L36v0QF-Nhziol_dPPhuvghTlGWXWHwj6amsKzyh8M7rNfwL2TcsAQ - type: rouge value: 33.8022 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDYwOGRmYzg4ODc2NDExNjhhMjI5MDg3MjI0YTQ5ZDRhM2NjN2Q2ZjM5YTIwZDIxNmY3Y2JlMmMxYTE5MDE4ZiIsInZlcnNpb24iOjF9.S1nynUjLz7z4gf-0WFfPs-ZuZubhN9kXyVSrYNzOdT2gTJmByQWasKreZkVSWus-HNAHR8DhzL6UUWxuDMmAAQ - type: loss value: 3.4974069595336914 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzkyNmU5NTRhMTkxNjA1M2E1MjdiMTE0MzQyMDc4ODBkNmM1NDg1ZDk4OTNjODk2MThlZGZiYzQxOGE1YzgwMiIsInZlcnNpb24iOjF9.H9Oo0VKvcqAHcVNvjeEPEhQe5HP0v614suyCv75tfFGaPSKTIe3UlBNDdGOtqfUxb2zUNaBQ8MkA66C_Fkq6CA - type: gen_len value: 243.3453 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZWE1MGQzMDc2NDViOGM5ZmVkZjk0NmY0NzliOTBhMmE3NmY5MmUxMTI3NGE2OTQzM2Y1NjdmN2NlZGFlODFlYiIsInZlcnNpb24iOjF9.635fcTp_czTabJUVR_dwpzdkntb4cxEbODAC9MMTKrLKEf9NHqDBJXQ-nBOieW05iCSYzw_tEi8O-QW-sRxDAw - task: type: summarization name: Summarization dataset: name: kmfoda/booksum type: kmfoda/booksum config: kmfoda--booksum split: test metrics: - type: rouge value: 35.2043 name: ROUGE-1 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYTRlZTdjZDRlZGMxYzA2NmRkYjBiMzZkY2Q1ODUyYjJkM2QwOTRmMzA3ZmU5MDI5ZmM1MmZkZDUwNzc0NjhmNyIsInZlcnNpb24iOjF9.zrskApkmkhbfQLtlgjf_n6i3WmZcmkDH7Sd-JTzOYAU3yk1_Zl4paGdmpXvyQY48M71qWsBYtEKkhnzrkvCGBA - type: rouge value: 5.746 name: ROUGE-2 verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2FlMjU2MzU1MTljZjM0ZmFhMmJlZDAxMTcwZDk3YWE5NjVjYjE0YmEyMTgzY2UyMTVmZDY5ZWM1YmM1ZDA5NSIsInZlcnNpb24iOjF9.5nDuOwa98pon3VW1TazB2Vw1uJgh6pfFMorzgLMJFvhgwYz6_MvLR1dDUeffP4eyw7rGZjBmf039AM7CyKEgCg - type: rouge value: 15.6794 name: ROUGE-L verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYjRmYzk3NWFhZDVlODA4YWRiMDU1ZWFhZmMwMWE4MmNkNmNjZWM3ZjUwYzI3MWIxM2Y4MTlhZDk2ZTg5YjkyYSIsInZlcnNpb24iOjF9.TLflM2CYNgz4DNt-TwjgdkTL8ebKckTNnlPVsGLUUGqNI1CvSswzsPedqmntCfKVsH2YAsKsR4ZUb1HtJFsSAw - type: rouge value: 32.1129 name: ROUGE-LSUM verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzdhNWE1YjRjNGUzYWYyNzM4MjIyYThiODJhODU2OGVlOTYxOGNhZmQ4Mjk2ZDUwNmU0MGQwNjQ5NTk2MzU4ZiIsInZlcnNpb24iOjF9.5yvTmPktBuyzoVNHn7UHcci3OrZLTm7e9d_lQkJq8UwzUuso1wHoy_gdvnvpn2DvUfdcBi5sXgG4mtFnVnGgBw - type: loss value: 2.945225238800049 name: loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTgxNGRiN2RkMzQ5MjI2OGI0MTljZTY5ZDQyMzc5MjhmNzdhZWQ2NmJhYTgzOTRlMGY2YzkzZWE2NzVkYzVmNCIsInZlcnNpb24iOjF9.VkkP4-S6ZoozLj-iuY7tdsrSR0q1JLQXfgPv_0u2sJuv6x9RYMdCpfJHbqYbirV63b9w28USSwaAAMnz-LoJAA - type: gen_len value: 307.5493 name: gen_len verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMmQ1YTgxYmRhYWViYjhhNmYzNjdlYzVhMTNmZTBkY2RiOTRlMTUzNTIzY2RjOTNhMjRmNGRmYjQyNTBmZWRiMiIsInZlcnNpb24iOjF9.7ItU-AQXB4EEj9U9kJceteBQbA5MkZoegeLhCdpZepEaXzqr6Zg3yHLCD9zL_6Svb9uxuin678KOT5Zf-2YWCQ --- # 
long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13 > Evaluating some metric results before merging with the "main" wip version This model is a fine-tuned version of [pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP12) on the `kmfoda/booksum` dataset. The "base" checkpoint that I update when a training session is productive is [here](https://huggingface.co/pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP). ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0006 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 64 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.01 - num_epochs: 1.1 ### Framework versions - Transformers 4.21.2 - Pytorch 1.10.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
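The model-index above gives the full repo id, so a usage sketch can reference it directly; the 16384 in the name suggests the checkpoint accepts very long inputs, which is the point of a BookSum fine-tune. The input file below is hypothetical.

```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="pszemraj/long-t5-tglobal-large-pubmed-3k-booksum-16384-WIP13",
)

with open("chapter.txt") as f:      # hypothetical long-document input
    chapter = f.read()

print(summarizer(chapter, max_length=256, min_length=64)[0]["summary_text"])
```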
Daltcamalea01/Camaleaodalt
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-08-30T23:46:07Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_bn metrics: - sacrebleu model-index: - name: squad-bn-qgen-mt5-all-metric results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: squad_bn type: squad_bn args: squad_bn metrics: - name: Sacrebleu type: sacrebleu value: 6.4143 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # squad-bn-qgen-mt5-all-metric This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the squad_bn dataset. It achieves the following results on the evaluation set: - Loss: 0.7273 - Rouge1 Precision: 35.8589 - Rouge1 Recall: 29.7041 - Rouge1 Fmeasure: 31.6373 - Rouge2 Precision: 15.4203 - Rouge2 Recall: 12.5155 - Rouge2 Fmeasure: 13.3978 - Rougel Precision: 34.4684 - Rougel Recall: 28.5887 - Rougel Fmeasure: 30.4627 - Rougelsum Precision: 34.4252 - Rougelsum Recall: 28.5362 - Rougelsum Fmeasure: 30.4053 - Sacrebleu: 6.4143 - Meteor: 0.1416 - Gen Len: 16.7199 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rougelsum Precision | Rougelsum Recall | Rougelsum Fmeasure | Sacrebleu | Meteor | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:-------------------:|:----------------:|:------------------:|:---------:|:------:|:-------:| | 0.8449 | 1.0 | 16396 | 0.7340 | 31.6476 | 26.8901 | 28.2871 | 13.621 | 11.3545 | 11.958 | 30.3276 | 25.7754 | 27.1048 | 30.3426 | 25.7489 | 27.0991 | 5.9655 | 0.1336 | 16.8685 | | 0.7607 | 2.0 | 32792 | 0.7182 | 33.7173 | 28.6115 | 30.1049 | 14.8227 | 12.2059 | 12.9453 | 32.149 | 27.2036 | 28.6617 | 32.2479 | 27.2261 | 28.7272 | 6.6093 | 0.138 | 16.8522 | | 0.7422 | 3.0 | 49188 | 0.7083 | 34.6128 | 29.0223 | 30.7248 | 14.9888 | 12.3092 | 13.1021 | 33.2507 | 27.8154 | 29.4599 | 33.2848 | 27.812 | 29.5064 | 6.2407 | 0.1416 | 16.5806 | | 0.705 | 4.0 | 65584 | 0.7035 | 34.156 | 29.0012 | 30.546 | 14.72 | 12.0251 | 12.8161 | 32.7527 | 27.6511 | 29.1955 | 32.7692 | 27.6627 | 29.231 | 6.1784 | 0.1393 | 16.7793 | | 0.6859 | 5.0 | 81980 | 0.7038 | 35.1405 | 29.6033 | 31.2614 | 15.5108 | 12.6414 | 13.5059 | 33.8335 | 28.4264 | 30.0745 | 33.8782 | 28.4349 | 30.0901 | 6.5896 | 0.144 | 16.6651 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
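A hedged usage sketch for the Bengali question-generation checkpoint above. The card states neither the namespace nor the exact input format, so the repo id is a placeholder and the plain-passage prompt is an assumption; SQuAD-style question-generation fine-tunes often expect the passage (sometimes with the answer highlighted) as input.

```python
from transformers import pipeline

# Hypothetical repo id and input format -- neither is documented in the card.
qgen = pipeline("text2text-generation", model="<namespace>/squad-bn-qgen-mt5-all-metric")

passage = "বাংলাদেশের রাজধানী ঢাকা।"   # "The capital of Bangladesh is Dhaka."
print(qgen(passage, max_length=32)[0]["generated_text"])
```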
Danih1502/t5-small-finetuned-en-to-de
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/Tsubame/ddpm-butterflies-128/tensorboard?#scalars)
Darya/layoutlmv2-finetuned-funsd-test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8648740833380706 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1365 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
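A minimal inference sketch for the German NER fine-tune above (placeholder namespace, since the card only names the model):

```python
from transformers import pipeline

# Hypothetical repo id -- the card gives only the model name.
ner = pipeline(
    "token-classification",
    model="<namespace>/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",   # merge sub-word pieces into whole entities
)

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```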
DataikuNLP/average_word_embeddings_glove.6B.300d
[ "arxiv:1908.10084", "sentence-transformers", "feature-extraction", "sentence-similarity", "license:apache-2.0" ]
sentence-similarity
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
language: - "List of ISO 639-1 code for your language" - lang1 - lang2 thumbnail: "url to a thumbnail used in social sharing" tags: - tag1 - tag2 license: "any valid license identifier" datasets: - dataset1 - dataset2 metrics: - metric1 - metric2
DataikuNLP/distiluse-base-multilingual-cased-v1
[ "pytorch", "distilbert", "arxiv:1908.10084", "sentence-transformers", "feature-extraction", "sentence-similarity", "transformers", "license:apache-2.0" ]
sentence-similarity
{ "architectures": [ "DistilBertModel" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_10_0 model-index: - name: wav2vec2-large-xls-r-300m-j-kana-colab results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-j-kana-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.7188 - Wer: 0.1285 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 397 | 3.8381 | 0.9571 | | No log | 2.0 | 794 | 0.8909 | 0.2265 | | 4.0962 | 3.0 | 1191 | 0.8076 | 0.2054 | | 4.0962 | 4.0 | 1588 | 0.7300 | 0.1809 | | 4.0962 | 5.0 | 1985 | 0.7322 | 0.1761 | | 0.6325 | 6.0 | 2382 | 0.6478 | 0.1524 | | 0.6325 | 7.0 | 2779 | 0.6559 | 0.1472 | | 0.408 | 8.0 | 3176 | 0.6925 | 0.1500 | | 0.408 | 9.0 | 3573 | 0.7567 | 0.1582 | | 0.408 | 10.0 | 3970 | 0.6687 | 0.1358 | | 0.29 | 11.0 | 4367 | 0.7223 | 0.1418 | | 0.29 | 12.0 | 4764 | 0.7082 | 0.1328 | | 0.2152 | 13.0 | 5161 | 0.7114 | 0.1340 | | 0.2152 | 14.0 | 5558 | 0.7082 | 0.1280 | | 0.2152 | 15.0 | 5955 | 0.7188 | 0.1285 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.10.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
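The card above reports WER as its headline metric but does not show how it is computed; a sketch with the `evaluate` library (one common choice, not necessarily what the author used) looks like this:

```python
import evaluate

wer_metric = evaluate.load("wer")

# Hypothetical transcripts -- replace with real model outputs and references.
predictions = ["きょう は いい てんき です"]
references  = ["きょう は いい てんき です ね"]

print(wer_metric.compute(predictions=predictions, references=references))
```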
Davlan/bert-base-multilingual-cased-finetuned-igbo
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
2022-08-31T03:20:05Z
--- license: afl-3.0 --- <p align="center"> <br> <img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/> <br> </p> # reStructured Pre-training (RST) official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html) #### RST is a new paradigm for language pre-training, which * unifies **26** different types of signal from **10** data sources (Rotten Tomatoes, Dailymail, Wikipedia, Wikidata, Wikihow, Wordnet, arXiv, etc.) in the world structurally, being pre-trained with a monolithic model, * surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.) * achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**, scoring **40** points higher than the average score achieved by students and 15 points higher than GPT3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) in the 2018 English exam. In such a pre-training paradigm, * Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing * Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access. ## Model Description We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters. | Model | Description | Recommended Application | ----------- | ----------- |----------- | | **rst-all-11b** | **Trained with all the signals below except signals that are used to train Gaokao models** | **All applications below (specialized models are recommended first if high performance is preferred)** | | rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge intensive tasks, information extraction tasks, factual checker | | rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) | | rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction | | rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains| | rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction | | rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification | | rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging,
general IE tasks, common sense reasoning | | rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning | | rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification | | rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering| | rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling| | rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks | ## Have a try? ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b") model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b") inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)) ``` ## Data for reStructure Pre-training This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research. We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals. 
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush: | Mine | Signal | #Sample | Use in DataLab | Some Applications | | --- | --- | --- | --- | --- | | [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion| | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification | | [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning| | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion | | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning | | [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation| | [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation | | [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference| |[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension| | [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | 
`load_dataset("rst", "qa_logiqa")` | Reading comprehension| | [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension | | [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension| | [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension| | [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension | | [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification| | [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion| | [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition| | [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion| ## Bibtext for Citation Info ``` @article{yuan2022restructured, title={reStructured Pre-training}, author={Yuan, Weizhe and Liu, Pengfei}, journal={arXiv preprint arXiv:2206.11147}, year={2022} } ```
Davlan/byt5-base-eng-yor-mt
[ "pytorch", "t5", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/js05212/ddpm-butterflies-128/tensorboard?#scalars)
Davlan/byt5-base-yor-eng-mt
[ "pytorch", "t5", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.2999 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4977 | 1.0 | 782 | 2.3318 | | 2.4232 | 2.0 | 1564 | 2.3005 | | 2.386 | 3.0 | 2346 | 2.2721 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.12.1+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
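A minimal fill-mask sketch for the domain-adapted checkpoint above (placeholder namespace; DistilBERT keeps the BERT-style `[MASK]` token):

```python
from transformers import pipeline

# Hypothetical repo id -- the card gives only the model name.
fill = pipeline("fill-mask", model="<namespace>/distilbert-base-uncased-finetuned-imdb")

for candidate in fill("This movie was an absolute [MASK]."):
    print(f"{candidate['token_str']:>12}  {candidate['score']:.3f}")
```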
Davlan/m2m100_418M-eng-yor-mt
[ "pytorch", "m2m_100", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "M2M100ForConditionalGeneration" ], "model_type": "m2m_100", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: bert-base-uncased-issues-128 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-issues-128 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2456 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 16 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.0986 | 1.0 | 291 | 1.6929 | | 1.6401 | 2.0 | 582 | 1.4304 | | 1.4881 | 3.0 | 873 | 1.3916 | | 1.4 | 4.0 | 1164 | 1.3796 | | 1.3416 | 5.0 | 1455 | 1.2012 | | 1.2807 | 6.0 | 1746 | 1.2733 | | 1.2396 | 7.0 | 2037 | 1.2646 | | 1.1993 | 8.0 | 2328 | 1.2098 | | 1.1661 | 9.0 | 2619 | 1.1862 | | 1.1406 | 10.0 | 2910 | 1.2223 | | 1.1294 | 11.0 | 3201 | 1.2056 | | 1.1042 | 12.0 | 3492 | 1.1655 | | 1.0827 | 13.0 | 3783 | 1.2525 | | 1.0738 | 14.0 | 4074 | 1.1685 | | 1.0626 | 15.0 | 4365 | 1.1182 | | 1.0629 | 16.0 | 4656 | 1.2456 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
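The eval loss of 1.2456 reported above is a cross-entropy in nats; for masked-LM fine-tunes it is commonly converted to a (pseudo-)perplexity by exponentiation:

```python
import math

eval_loss = 1.2456             # validation loss from the card
print(math.exp(eval_loss))     # ~3.48 -- lower means the MLM is less "surprised"
```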
Davlan/m2m100_418M-yor-eng-mt
[ "pytorch", "m2m_100", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "M2M100ForConditionalGeneration" ], "model_type": "m2m_100", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - precision - recall - f1 model-index: - name: clinical-finetuned-AgitationModel results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # clinical-finetuned-AgitationModel This model is a fine-tuned version of [emilyalsentzer/Bio_ClinicalBERT](https://huggingface.co/emilyalsentzer/Bio_ClinicalBERT) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9746 - Accuracy: 0.88 - Precision: 0.9178 - Recall: 0.9178 - F1: 0.9178 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.0949 | 1.0 | 50 | 1.0393 | 0.85 | 0.8816 | 0.9178 | 0.8993 | | 0.0475 | 2.0 | 100 | 1.0619 | 0.85 | 0.8816 | 0.9178 | 0.8993 | | 0.0149 | 3.0 | 150 | 0.9746 | 0.88 | 0.9178 | 0.9178 | 0.9178 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Tokenizers 0.12.1
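A usage sketch for the clinical-note classifier above (placeholder namespace; the example note is illustrative, and the card does not document the label set):

```python
from transformers import pipeline

# Hypothetical repo id -- the card gives only the model name.
classifier = pipeline("text-classification",
                      model="<namespace>/clinical-finetuned-AgitationModel")

note = "Patient was restless overnight and repeatedly tried to remove the IV line."
print(classifier(note))   # returned label names depend on the fine-tune's config
```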
Davlan/mT5_base_yoruba_adr
[ "pytorch", "mt5", "text2text-generation", "arxiv:2003.10564", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 2.4721 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4898 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | ### Framework versions - Transformers 4.15.0 - Pytorch 1.12.1+cu113 - Datasets 1.17.0 - Tokenizers 0.10.3
Davlan/mbart50-large-eng-yor-mt
[ "pytorch", "mbart", "text2text-generation", "arxiv:2103.08647", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MBartForConditionalGeneration" ], "model_type": "mbart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
# Pre-trained-Language-Model-For-Chinese-Patent ZL-RoBERTa-wwm: MLM with Whole Word Masking. Trained on Chinese invention patents; the MLM task uses the whole-word-masking (wwm) strategy.
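A hedged fill-mask sketch for the patent-domain checkpoint just described. The card gives neither a repo id nor tokenizer details, so the id is a placeholder and the `[MASK]` token is an assumption (Chinese RoBERTa-wwm models typically reuse a BERT-style vocabulary):

```python
from transformers import pipeline

# Hypothetical repo id -- the card does not provide one.
fill = pipeline("fill-mask", model="<namespace>/ZL-RoBERTa-wwm")

# "The present invention discloses a method for preparing a [MASK] battery."
print(fill("本发明公开了一种[MASK]电池的制备方法。"))
```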
Davlan/xlm-roberta-base-finetuned-swahili
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
40
2022-08-31T07:32:00Z
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
Davlan/xlm-roberta-base-finetuned-wolof
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: huggan/smithsonian_butterflies_subset metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `huggan/smithsonian_butterflies_subset` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/livingmagic/ddpm-butterflies-128/tensorboard?#scalars)
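The TODO placeholder in the card above asks for an inference snippet; a minimal sketch is given below. The repo id is taken from the TensorBoard link (`livingmagic/ddpm-butterflies-128`) and is therefore an assumption, as is the availability of the `.images` output field in the installed diffusers version.

```python
from diffusers import DDPMPipeline

# Repo id inferred from the TensorBoard link in the card -- treat it as an assumption.
pipeline = DDPMPipeline.from_pretrained("livingmagic/ddpm-butterflies-128")

# Sample one 128x128 butterfly image and save it to disk.
image = pipeline(batch_size=1).images[0]
image.save("butterfly_sample.png")
```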
Davlan/xlm-roberta-base-finetuned-zulu
[ "pytorch", "xlm-roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "XLMRobertaForMaskedLM" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: fintuning-sentiment-model-3000-samples results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # fintuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3079 - Accuracy: 0.88 - F1: 0.8808 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Tokenizers 0.12.1
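The card above gives metrics but no usage example; a hedged sketch for running the classifier follows. The repo id is a placeholder, since the card does not name the owning account.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual owner of this fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/fintuning-sentiment-model-3000-samples",
)

# Returns a list like [{"label": ..., "score": ...}] for each input string.
print(classifier("This product exceeded my expectations."))
```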
Dazai/Ko
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - flair - token-classification - sequence-tagger-model --- ### Demo: How to use in Flair Requires: - **[Flair](https://github.com/flairNLP/flair/)** (`pip install flair`) ```python from flair.data import Sentence from flair.models import SequenceTagger # load tagger tagger = SequenceTagger.load("osanseviero/flair_test5") # make example sentence sentence = Sentence("On September 1st George won 1 dollar while watching Game of Thrones.") # predict NER tags tagger.predict(sentence) # print sentence print(sentence) # print predicted NER spans print('The following NER tags are found:') # iterate over entities and print for entity in sentence.get_spans('ner'): print(entity) ```
DeBERTa/deberta-v2-xxlarge
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-08-31T11:13:59Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1612.90 +/- 407.25 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
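One way to complete the TODO in the card above is the sketch below; the repo id and filename are placeholders (the card does not state them), and the checkpoint is assumed to be a standard `.zip` produced by `model.save()`.

```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0 with gym
from stable_baselines3 import A2C
from huggingface_sb3 import load_from_hub

# Placeholder repo id and filename -- replace with the actual checkpoint location.
checkpoint = load_from_hub(
    repo_id="your-username/a2c-AntBulletEnv-v0",
    filename="a2c-AntBulletEnv-v0.zip",
)
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
action, _states = model.predict(obs, deterministic=True)
```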
DeadBeast/marathi-roberta-base
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - HalfCheetahBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1647.65 +/- 21.63 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: HalfCheetahBulletEnv-v0 type: HalfCheetahBulletEnv-v0 --- # **A2C** Agent playing **HalfCheetahBulletEnv-v0** This is a trained model of a **A2C** agent playing **HalfCheetahBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
DeadBeast/mbert-base-cased-finetuned-bengali-fakenews
[ "pytorch", "bert", "text-classification", "bengali", "dataset:BanFakeNews", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- license: bigscience-bloom-rail-1.0 language: - zh pipeline_tag: text-generation widget: - text: "中国的首都是" --- This model is based on [bigscience/bloom-3b](https://huggingface.co/bigscience/bloom-3b). We pruned its vocabulary from 250880 to 46145 with a Chinese corpus to reduce GPU memory usage, so the total parameter count is now 2b5 (2.5 billion). # How to use ```python from transformers import BloomTokenizerFast, BloomForCausalLM tokenizer = BloomTokenizerFast.from_pretrained('Langboat/bloom-2b5-zh') model = BloomForCausalLM.from_pretrained('Langboat/bloom-2b5-zh') print(tokenizer.batch_decode(model.generate(tokenizer.encode('中国的首都是', return_tensors='pt')))) ```
DeadBeast/roberta-base-pretrained-mr-2
[ "pytorch", "jax", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: bigscience-bloom-rail-1.0 language: - zh pipeline_tag: text-generation widget: - text: "中国的首都是" --- This model is based on [bigscience/bloom-7b1](https://huggingface.co/bigscience/bloom-7b1). We pruned its vocabulary from 250880 to 46145 with a Chinese corpus to reduce GPU memory usage, so the total parameter count is now 6b4 (6.4 billion). # How to use ```python from transformers import BloomTokenizerFast, BloomForCausalLM tokenizer = BloomTokenizerFast.from_pretrained('Langboat/bloom-6b4-zh') model = BloomForCausalLM.from_pretrained('Langboat/bloom-6b4-zh') print(tokenizer.batch_decode(model.generate(tokenizer.encode('中国的首都是', return_tensors='pt')))) ```
Declan/Breitbart_model_v7
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-08-31T13:31:39Z
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 822.42 +/- 48.82 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Declan/CNN_model_v2
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-08-31T14:24:00Z
--- license: apache-2.0 language: - hy pipeline_tag: text-generation tags: - multilingual - PyTorch - Transformers - gpt3 - gpt2 - Deepspeed - Megatron datasets: - mc4 - wikipedia thumbnail: "https://github.com/sberbank-ai/mgpt" --- # Multilingual GPT model, Armenian language finetune We introduce a monolingual GPT-3-based model for the Armenian language. The model is based on [mGPT](https://huggingface.co/sberbank-ai/mGPT/), a family of autoregressive GPT-like models with 1.3 billion parameters trained on 60 languages from 25 language families using Wikipedia and the Colossal Clean Crawled Corpus. We reproduce the GPT-3 architecture using GPT-2 sources and the sparse attention mechanism; the [Deepspeed](https://github.com/microsoft/DeepSpeed) and [Megatron](https://github.com/NVIDIA/Megatron-LM) frameworks allow us to effectively parallelize the training and inference steps. The resulting models show performance on par with the recently released [XGLM](https://arxiv.org/pdf/2112.10668.pdf) models while covering more languages and enhancing NLP possibilities for low-resource languages. ## Code The source code for the mGPT XL model is available on [Github](https://github.com/sberbank-ai/mgpt) ## Paper mGPT: Few-Shot Learners Go Multilingual [Abstract](https://arxiv.org/abs/2204.07580) [PDF](https://arxiv.org/pdf/2204.07580.pdf) ![](https://habrastorage.org/webt/1q/ru/yt/1qruytul6m2m-upyk9frq3pgrds.png) ``` @misc{https://doi.org/10.48550/arxiv.2204.07580, doi = {10.48550/ARXIV.2204.07580}, url = {https://arxiv.org/abs/2204.07580}, author = {Shliazhko, Oleh and Fenogenova, Alena and Tikhonova, Maria and Mikhailov, Vladislav and Kozlova, Anastasia and Shavrina, Tatiana}, keywords = {Computation and Language (cs.CL), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences, I.2; I.2.7, 68-06, 68-04, 68T50, 68T01}, title = {mGPT: Few-Shot Learners Go Multilingual}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` ## Training The model was fine-tuned on 170GB of Armenian texts, including MC4, Archive.org fiction, EANC public data, OpenSubtitles, the OSCAR corpus and blog texts. Validation perplexity is 2.046. The mGPT model was pre-trained for 12 days x 256 GPUs (Tesla NVidia V100) for 4 epochs, then 9 days x 64 GPUs for 1 epoch. The Armenian fine-tuning took around 7 days on 4 Tesla NVidia V100 GPUs and ran for 160k steps. ![](https://habrastorage.org/webt/4h/pp/tq/4hpptqkgytnoi9ax58wdrymsxx4.png) What happens in this image? The model was originally trained with sparse attention masks, then fine-tuned with no sparsity on the last steps (hence the perplexity and loss peak). Getting rid of the sparsity at the end of training helps to integrate the model into the GPT2 HF class. A hedged generation sketch is given below.
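The card above describes the finetune but gives no inference snippet; a hedged sketch follows. The repo id is a placeholder (the card does not name the Hub repository), and the Armenian prompt is only an illustrative example.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Placeholder repo id -- the card does not state where the Armenian finetune is hosted.
repo_id = "your-namespace/mGPT-armenian"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# "Երևան" (Yerevan) is just an example prompt; any Armenian text works.
inputs = tokenizer("Երևան", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```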
Declan/CNN_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-08-31T14:33:40Z
--- library_name: stable-baselines3 tags: - HalfCheetahBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1967.35 +/- 44.90 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: HalfCheetahBulletEnv-v0 type: HalfCheetahBulletEnv-v0 --- # **A2C** Agent playing **HalfCheetahBulletEnv-v0** This is a trained model of a **A2C** agent playing **HalfCheetahBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Declan/ChicagoTribune_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-08-31T15:41:14Z
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: imagefolder metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # dress-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `imagefolder` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/iramshiv/dress-128/tensorboard?#scalars)
Declan/Independent__model
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-08-31T17:04:29Z
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: cartpole results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 139.50 +/- 32.14 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 5 of the Deep Reinforcement Learning Class: https://github.com/huggingface/deep-rl-class/tree/main/unit5
Declan/NPR_model_v3
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 2.9615 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 97 | 3.2690 | | No log | 2.0 | 194 | 3.0873 | | No log | 3.0 | 291 | 2.9615 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
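The card above reports only the loss; a minimal extractive-QA sketch is shown below. The repo id is a placeholder, since the card does not state the owning namespace.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual namespace of this fine-tuned checkpoint.
qa = pipeline(
    "question-answering",
    model="your-username/distilbert-base-uncased-finetuned-squad",
)

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This checkpoint is a DistilBERT model fine-tuned on the SQuAD dataset.",
)
print(result["answer"], round(result["score"], 3))
```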
Declan/NPR_model_v5
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: - ur - en license: apache-2.0 datasets: - iwslt2017 metrics: - bleu library_name: tensorflowtts pipeline_tag: translation --- ### urd-eng * source group: Urdu * target group: English * OPUS readme: [urd-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md) * model: transformer-align * source language(s): urd * target language(s): eng * model: transformer-align * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.eval.txt) ## Benchmarks | testset | BLEU | chr-F | |-----------------------|-------|-------| | Tatoeba-test.urd.eng | 23.2 | 0.435 | ### System Info: - hf_name: urd-eng - source_languages: urd - target_languages: eng - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/urd-eng/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ur', 'en'] - src_constituents: {'urd'} - tgt_constituents: {'eng'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/urd-eng/opus-2020-06-17.test.txt - src_alpha3: urd - tgt_alpha3: eng - short_pair: ur-en - chrF2_score: 0.435 - bleu: 23.2 - brevity_penalty: 0.975 - ref_len: 12029.0 - src_name: Urdu - tgt_name: English - train_date: 2020-06-17 - src_alpha2: ur - tgt_alpha2: en - prefer_old: False - long_pair: urd-eng - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
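The card above documents the OPUS urd-eng checkpoint but shows no inference code; a hedged sketch follows. The repo id `Helsinki-NLP/opus-mt-ur-en` is the usual Hub location for this model but is an assumption here, as is the example Urdu sentence.

```python
from transformers import MarianMTModel, MarianTokenizer

# Assumed Hub id for the OPUS urd-eng checkpoint -- verify before relying on it.
repo_id = "Helsinki-NLP/opus-mt-ur-en"
tokenizer = MarianTokenizer.from_pretrained(repo_id)
model = MarianMTModel.from_pretrained(repo_id)

# Example Urdu input: "How are you?"
batch = tokenizer(["آپ کیسے ہیں؟"], return_tensors="pt", padding=True)
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```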
Declan/NPR_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2022-08-31T17:57:28Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad_v2 model-index: - name: ClimateBertQA results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ClimateBertQA This model is a fine-tuned version of [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.3251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.1604 | 1.0 | 4081 | 1.1894 | | 0.8577 | 2.0 | 8162 | 1.1763 | | 0.6395 | 3.0 | 12243 | 1.1118 | | 0.5015 | 4.0 | 16324 | 1.3251 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
Declan/NewYorkTimes_model_v6
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3** . ## Usage ```python model = load_from_hub(repo_id="curt-tigges/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Declan/Politico_model_v8
[ "pytorch", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - AaronCU/autotrain-data-attribute-classification co2_eq_emissions: emissions: 0.002847008943614719 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1343651539 - CO2 Emissions (in grams): 0.0028 ## Validation Metrics - Loss: 0.163 - Accuracy: 0.949 - Macro F1: 0.947 - Micro F1: 0.949 - Weighted F1: 0.949 - Macro Precision: 0.943 - Micro Precision: 0.949 - Weighted Precision: 0.951 - Macro Recall: 0.952 - Micro Recall: 0.949 - Weighted Recall: 0.949 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/AaronCU/autotrain-attribute-classification-1343651539 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("AaronCU/autotrain-attribute-classification-1343651539", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("AaronCU/autotrain-attribute-classification-1343651539", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Denver/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 language: hi --- ## HindBERT-Scratch HindBERT is a Hindi BERT model. It is a base-BERT model trained from scratch on publicly available Hindi monolingual datasets. [Project link](https://github.com/l3cube-pune/MarathiNLP) More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418). The best version of the model is shared <a href='https://huggingface.co/l3cube-pune/hindi-bert-v2'>here</a>. Citing: ``` @article{joshi2022l3cubehind, author = {Joshi, Raviraj}, year = {2022}, month = {09}, pages = {}, title = {L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages}, doi = {10.13140/RG.2.2.14606.84809} } ```
DeskDown/MarianMixFT_en-my
[ "pytorch", "marian", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - kensho/spgispeech widget: - example_title: Finance Speech src: https://drive.google.com/uc?id=151bzDnN_f0Dfjjrg36nI97tXM39t5Ka8 model-index: - name: wav2vec2-base-finetuned-spgispeech-dev results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-spgispeech-dev This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [kensho/spgispeech](https://huggingface.co/datasets/kensho/spgispeech) dev dataset. It achieves the following results on the evaluation set: - Loss: 0.2897 - Wer: 0.1508 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 1.8285 | 2.22 | 1500 | 0.3361 | 0.2754 | | 0.2582 | 4.44 | 3000 | 0.2643 | 0.2205 | | 0.1697 | 6.66 | 4500 | 0.2467 | 0.2006 | | 0.1314 | 8.88 | 6000 | 0.2711 | 0.1927 | | 0.1084 | 11.09 | 7500 | 0.2521 | 0.1872 | | 0.0922 | 13.31 | 9000 | 0.2588 | 0.1827 | | 0.0818 | 15.53 | 10500 | 0.2572 | 0.1783 | | 0.0712 | 17.75 | 12000 | 0.2720 | 0.1766 | | 0.067 | 19.97 | 13500 | 0.2873 | 0.1751 | | 0.0594 | 22.19 | 15000 | 0.2753 | 0.1704 | | 0.0546 | 24.41 | 16500 | 0.2794 | 0.1694 | | 0.0505 | 26.63 | 18000 | 0.2811 | 0.1665 | | 0.0467 | 28.85 | 19500 | 0.2906 | 0.1657 | | 0.0417 | 31.07 | 21000 | 0.3043 | 0.1661 | | 0.0395 | 33.28 | 22500 | 0.3068 | 0.1627 | | 0.0368 | 35.5 | 24000 | 0.3096 | 0.1617 | | 0.0334 | 37.72 | 25500 | 0.3036 | 0.1581 | | 0.0322 | 39.94 | 27000 | 0.2819 | 0.1564 | | 0.0286 | 42.16 | 28500 | 0.2936 | 0.1544 | | 0.0279 | 44.38 | 30000 | 0.2914 | 0.1534 | | 0.0264 | 46.6 | 31500 | 0.2957 | 0.1519 | | 0.0241 | 48.82 | 33000 | 0.2897 | 0.1508 | ### Framework versions - Transformers 4.17.0 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
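The card above lists WER but no transcription example; a hedged sketch follows. The repo id is a placeholder, and the audio path is illustrative.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual owner of this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-base-finetuned-spgispeech-dev",
)

# Accepts a path to an audio file; 16 kHz mono input matches the wav2vec2-base training setup.
print(asr("earnings_call_snippet.wav")["text"])
```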
DimaOrekhov/transformer-method-name
[ "pytorch", "encoder-decoder", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - it tags: - Text Classification datasets: - TAG-IT --- Write an Italian sentence with the prefix "Classifica Argomento: " to get a topic classification of the sentence, as in the sketch below. The dataset used for the task is [TAG-IT](https://sites.google.com/view/tag-it-2020/). The model is a fine-tuned version of [IT5-base](https://huggingface.co/gsarti/it5-base) by Sarti and Nissim.
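A hedged sketch of the prefix-based usage described above follows; the repo id and the Italian example sentence are placeholders, since the card does not state where the fine-tuned checkpoint is hosted.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id -- the card does not name the Hub repository of this fine-tuned IT5 model.
repo_id = "your-namespace/it5-base-tag-it"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Prepend the prefix described in the card to the Italian sentence to classify.
text = "Classifica Argomento: La squadra ha vinto la partita con un gol nel finale."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```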
Donghyun/L2_BERT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - image-classification - generated_from_trainer metrics: - accuracy model-index: - name: finetuned-ViT-human-action-recognition-v1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-ViT-human-action-recognition-v1 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the Human_Action_Recognition dataset. It achieves the following results on the evaluation set: - Loss: 3.1427 - Accuracy: 0.0791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.4986 | 0.13 | 100 | 3.1427 | 0.0791 | | 1.1929 | 0.25 | 200 | 3.4083 | 0.0726 | | 1.2673 | 0.38 | 300 | 3.4615 | 0.0769 | | 0.9805 | 0.51 | 400 | 3.9192 | 0.0824 | | 1.158 | 0.63 | 500 | 4.2648 | 0.0698 | | 1.2544 | 0.76 | 600 | 4.5536 | 0.0574 | | 1.0073 | 0.89 | 700 | 4.0310 | 0.0819 | | 0.9315 | 1.02 | 800 | 4.5154 | 0.0702 | | 0.9063 | 1.14 | 900 | 4.7162 | 0.0633 | | 0.6756 | 1.27 | 1000 | 4.6482 | 0.0626 | | 1.0239 | 1.4 | 1100 | 4.6437 | 0.0635 | | 0.7634 | 1.52 | 1200 | 4.5625 | 0.0752 | | 0.8365 | 1.65 | 1300 | 4.9912 | 0.0561 | | 0.8979 | 1.78 | 1400 | 5.1739 | 0.0356 | | 0.9448 | 1.9 | 1500 | 4.8946 | 0.0541 | | 0.697 | 2.03 | 1600 | 4.9516 | 0.0741 | | 0.7861 | 2.16 | 1700 | 5.0090 | 0.0776 | | 0.6404 | 2.28 | 1800 | 5.3905 | 0.0643 | | 0.7939 | 2.41 | 1900 | 4.9159 | 0.1015 | | 0.6331 | 2.54 | 2000 | 5.3083 | 0.0589 | | 0.6082 | 2.66 | 2100 | 4.8538 | 0.0857 | | 0.6229 | 2.79 | 2200 | 5.3086 | 0.0689 | | 0.6964 | 2.92 | 2300 | 5.3745 | 0.0713 | | 0.5246 | 3.05 | 2400 | 5.0369 | 0.0796 | | 0.6097 | 3.17 | 2500 | 5.2935 | 0.0743 | | 0.5778 | 3.3 | 2600 | 5.5431 | 0.0709 | | 0.4196 | 3.43 | 2700 | 5.5508 | 0.0759 | | 0.5495 | 3.55 | 2800 | 5.5728 | 0.0813 | | 0.5932 | 3.68 | 2900 | 5.7992 | 0.0663 | | 0.4382 | 3.81 | 3000 | 5.8010 | 0.0643 | | 0.4827 | 3.93 | 3100 | 5.7529 | 0.0680 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
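The card above tracks accuracy only; a hedged classification sketch follows. The repo id and the image path are placeholders.

```python
from PIL import Image
from transformers import pipeline

# Placeholder repo id -- replace with the actual namespace of this fine-tuned ViT checkpoint.
classifier = pipeline(
    "image-classification",
    model="your-username/finetuned-ViT-human-action-recognition-v1",
)

# Any RGB image works; the path here is illustrative.
image = Image.open("person_running.jpg")
for prediction in classifier(image, top_k=3):
    print(prediction["label"], round(prediction["score"], 3))
```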
Doohae/q_encoder
[ "pytorch" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-finetuned-mbti-0901 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-finetuned-mbti-0901 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9470 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 4.1073 | 1.0 | 9906 | 4.0111 | | 4.0302 | 2.0 | 19812 | 3.9761 | | 3.9757 | 3.0 | 29718 | 3.9578 | | 3.9471 | 4.0 | 39624 | 3.9495 | | 3.9187 | 5.0 | 49530 | 3.9470 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
Doquey/DialoGPT-small-Luisbot1
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - generated_from_trainer metrics: - f1 model-index: - name: twitter-roberta-base-stance-abortionV3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # twitter-roberta-base-stance-abortionV3 This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-stance-abortion](https://huggingface.co/cardiffnlp/twitter-roberta-base-stance-abortion) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5095 - F1: 0.7917 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.8492 | 1.0 | 12 | 0.4862 | 0.7917 | | 0.7291 | 2.0 | 24 | 0.4264 | 0.7917 | | 0.5465 | 3.0 | 36 | 0.6450 | 0.7917 | | 0.5905 | 4.0 | 48 | 0.5857 | 0.7917 | | 0.4556 | 5.0 | 60 | 0.5095 | 0.7917 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
DoyyingFace/bert-asian-hate-tweets-asian-clean-with-unclean-valid
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Imene/vit-base-patch16-224-in21k-wwwwwi results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Imene/vit-base-patch16-224-in21k-wwwwwi This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.2187 - Train Accuracy: 0.5652 - Train Top-3-accuracy: 0.7611 - Validation Loss: 3.8221 - Validation Accuracy: 0.2540 - Validation Top-3-accuracy: 0.4409 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 4920, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000} - training_precision: mixed_float16 ### Training results | Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch | |:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:| | 5.3476 | 0.0283 | 0.0716 | 5.1306 | 0.0483 | 0.1240 | 0 | | 4.9357 | 0.0914 | 0.2057 | 4.7998 | 0.1158 | 0.2385 | 1 | | 4.6155 | 0.1641 | 0.3230 | 4.5616 | 0.1430 | 0.2891 | 2 | | 4.3325 | 0.2269 | 0.4188 | 4.3480 | 0.1722 | 0.3391 | 3 | | 4.0702 | 0.2915 | 0.4984 | 4.1662 | 0.2042 | 0.3886 | 4 | | 3.8262 | 0.3638 | 0.5758 | 4.0416 | 0.2296 | 0.4067 | 5 | | 3.6117 | 0.4258 | 0.6415 | 3.9451 | 0.2329 | 0.4234 | 6 | | 3.4324 | 0.4855 | 0.6956 | 3.8690 | 0.2499 | 0.4397 | 7 | | 3.2991 | 0.5320 | 0.7376 | 3.8351 | 0.2553 | 0.4359 | 8 | | 3.2187 | 0.5652 | 0.7611 | 3.8221 | 0.2540 | 0.4409 | 9 | ### Framework versions - Transformers 4.21.2 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
DoyyingFace/bert-asian-hate-tweets-asian-unclean-freeze-12
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- language: en license: apache-2.0 datasets: climatebert/environmental_claims tags: - ClimateBERT - climate --- # Model Card for environmental-claims ## Model Description The environmental-claims model is fine-tuned on the [EnvironmentalClaims](https://huggingface.co/datasets/climatebert/environmental_claims) dataset by using the [climatebert/distilroberta-base-climate-f](https://huggingface.co/climatebert/distilroberta-base-climate-f) model as pre-trained language model. The underlying methodology can be found in our [research paper](https://arxiv.org/abs/2209.00507). ## Climate Performance Model Card | environmental-claims | | |--------------------------------------------------------------------------|----------------| | 1. Is the resulting model publicly available? | Yes | | 2. How much time does the training of the final model take? | < 5 min | | 3. How much time did all experiments take (incl. hyperparameter search)? | 60 hours | | 4. What was the power of GPU and CPU? | 0.3 kW | | 5. At which geo location were the computations performed? | Switzerland | | 6. What was the energy mix at the geo location? | 89 gCO2eq/kWh | | 7. How much CO2eq was emitted to train the final model? | 2.2 g | | 8. How much CO2eq was emitted for all experiments? | 1.6 kg | | 9. What is the average CO2eq emission for the inference of one sample? | 0.0067 mg | | 10. Which positive environmental impact can be expected from this work? | This work can help detect and evaluate environmental claims and thus have a positive impact on the environment in the future. | | 11. Comments | - | ## Citation Information ```bibtex @misc{stammbach2022environmentalclaims, title = {A Dataset for Detecting Real-World Environmental Claims}, author = {Stammbach, Dominik and Webersinke, Nicolas and Bingler, Julia Anna and Kraus, Mathias and Leippold, Markus}, year = {2022}, doi = {10.48550/ARXIV.2209.00507}, url = {https://arxiv.org/abs/2209.00507}, publisher = {arXiv}, } ```
albert-large-v1
[ "pytorch", "tf", "albert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1909.11942", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
687
2022-09-01T15:43:56Z
--- license: apache-2.0 --- TRACER with EfficientNet v1 b7 encoder.
bert-large-cased-whole-word-masking
[ "pytorch", "tf", "jax", "bert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1810.04805", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2,316
2022-09-01T16:54:26Z
--- license: afl-3.0 --- <p align="center"> <br> <img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/> <br> </p> # reStructured Pre-training (RST) official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html) #### RST is a new paradigm for language pre-training, which * unifies **26** different types of signal from **10** data sources (Rotten Tomatoes, DailyMail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in the world structurally, being pre-trained with a monolithic model, * surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.) * achieves superior performance in the National College Entrance Examination **(Gaokao-English, 高考-英语)**, scoring **40** points higher than the average student score and 15 points higher than GPT3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) in the 2018 English exam. In such a pre-training paradigm, * Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing * Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access. ## Model Description We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters. | Model | Description | Recommended Application | ----------- | ----------- |----------- | | rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) | | rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge intensive tasks, information extraction tasks, factual checker | | rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) | | rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction | | rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains| | rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction | | rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | general text classification | | rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks,
common sense reasoning | | rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning | | rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification | | rst-gaokao-rc-11b | Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model | General multiple-choice question answering| | **rst-gaokao-cloze-11b** | **Trained with manually crafted cloze datasets** | **General cloze filling**| | rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks | ## Have a try? ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b") model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b") inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)) ``` ## Data for reStructure Pre-training This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research. We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals. 
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush: | Mine | Signal | #Sample | Use in DataLab | Some Applications | | --- | --- | --- | --- | --- | | [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion| | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification | | [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning| | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion | | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning | | [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation| | [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation | | [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference| |[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension| | [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | 
`load_dataset("rst", "qa_logiqa")` | Reading comprehension| | [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension | | [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension| | [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension| | [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension | | [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification| | [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion| | [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition| | [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion| ## Bibtext for Citation Info ``` @article{yuan2022restructured, title={reStructured Pre-training}, author={Yuan, Weizhe and Liu, Pengfei}, journal={arXiv preprint arXiv:2206.11147}, year={2022} } ```
distilbert-base-german-cased
[ "pytorch", "safetensors", "distilbert", "fill-mask", "de", "transformers", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
43,667
2022-09-01T18:23:04Z
--- license: afl-3.0 --- <p align="center"> <br> <img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/> <br> </p> # reStructured Pre-training (RST) official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai.co/peripherals/emoji-eng.html) #### RST is a new paradigm for language pre-training, which * unifies **26** different types of signals from **10** data sources (Rotten Tomatoes, DailyMail, Wikipedia, Wikidata, wikiHow, WordNet, arXiv, etc.) in the world structurally, being pre-trained with a monolithic model, * surpasses strong competitors (e.g., T0) on **52/55** popular datasets from a variety of NLP tasks (classification, IE, retrieval, generation, etc.) * achieves superior performance on the National College Entrance Examination **(Gaokao-English, 高考-英语)**, scoring **40** points higher than the average score achieved by students and 15 points higher than GPT-3 with **1/16** of the parameters. In particular, Qin gets a high score of **138.5** (the full mark is 150) on the 2018 English exam. In such a pre-training paradigm, * Data-centric Pre-training: the role of data will be re-emphasized, and model pre-training and fine-tuning of downstream tasks are viewed as a process of data storing and accessing * Pre-training over JSON instead of TEXT: a good storage mechanism should not only have the ability to cache a large amount of data but also consider the ease of access. ## Model Description We release all models introduced in our [paper](https://arxiv.org/pdf/2206.11147.pdf), covering 13 different application scenarios. Each model contains 11 billion parameters. | Model | Description | Recommended Application | ----------- | ----------- |----------- | | rst-all-11b | Trained with all the signals below except signals that are used to train Gaokao models | All applications below (specialized models are recommended first if high performance is preferred) | | rst-fact-retrieval-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym, wikiHow category hierarchy, Wikidata relation, Wikidata entity typing, Paperswithcode entity typing | Knowledge-intensive tasks, information extraction tasks, factual checking | | rst-summarization-11b | Trained with the following signals: DailyMail summary, Paperswithcode summary, arXiv summary, wikiHow summary | Summarization or other general generation tasks, meta-evaluation (e.g., BARTScore) | | rst-temporal-reasoning-11b | Trained with the following signals: DailyMail temporal information, wikiHow procedure | Temporal reasoning, relation extraction, event-based extraction | | rst-information-extraction-11b | Trained with the following signals: Paperswithcode entity, Paperswithcode entity typing, Wikidata entity typing, Wikidata relation, Wikipedia entity | Named entity recognition, relation extraction and other general IE tasks in the news, scientific or other domains | | rst-intent-detection-11b | Trained with the following signals: wikiHow goal-step relation | Intent prediction, event prediction | | rst-topic-classification-11b | Trained with the following signals: DailyMail category, arXiv category, wikiHow text category, Wikipedia section title | General text classification | | rst-word-sense-disambiguation-11b | Trained with the following signals: WordNet meaning, WordNet part-of-speech, WordNet synonym, WordNet antonym | Word sense disambiguation, part-of-speech tagging, general IE tasks, 
common sense reasoning | | rst-natural-language-inference-11b | Trained with the following signals: ConTRoL dataset, DREAM dataset, LogiQA dataset, RACE & RACE-C dataset, ReClor dataset, DailyMail temporal information | Natural language inference, multiple-choice question answering, reasoning | | rst-sentiment-classification-11b | Trained with the following signals: Rotten Tomatoes sentiment, Wikipedia sentiment | Sentiment classification, emotion classification | | **rst-gaokao-rc-11b** | **Trained with multiple-choice QA datasets that are used to train the [T0pp](https://huggingface.co/bigscience/T0pp) model** | **General multiple-choice question answering**| | rst-gaokao-cloze-11b | Trained with manually crafted cloze datasets | General cloze filling| | rst-gaokao-writing-11b | Trained with example essays from past Gaokao-English exams and grammar error correction signals | Essay writing, story generation, grammar error correction and other text generation tasks | ## Have a try? ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("XLab/rst-all-11b") model = AutoModelForSeq2SeqLM.from_pretrained("XLab/rst-all-11b") inputs = tokenizer.encode("TEXT: this is the best cast iron skillet you will ever buy. QUERY: Is this review \"positive\" or \"negative\"", return_tensors="pt") outputs = model.generate(inputs) print(tokenizer.decode(outputs[0], skip_special_tokens=True, clean_up_tokenization_spaces=True)) ``` ## Data for reStructure Pre-training This dataset is a precious treasure, containing a variety of naturally occurring signals. Any downstream task you can think of (e.g., the college entrance exam mentioned in the RST paper) can benefit from being pre-trained on some of our provided signals. We spent several months collecting the following 29 signal types, accounting for a total of 46,926,447 data samples. We hope this dataset will be a valuable asset for everyone in natural language processing research. We provide collected signals through [DataLab](https://github.com/ExpressAI/DataLab). For efficiency, we only provide 50,000 samples at most for each signal type. If you want all the samples we collected, please fill this [form](https://docs.google.com/forms/d/e/1FAIpQLSdPO50vSdfwoO3D7DQDVlupQnHgrXrwfF3ePE4X1H6BwgTn5g/viewform?usp=sf_link). More specifically, we collected the following signals. 
###### We will be happy :smiley: to know if the resource is helpful for your work, and please cite our [work](https://github.com/ExpressAI/reStructured-Pretraining/blob/main/README.md#Bib) :blush: | Mine | Signal | #Sample | Use in DataLab | Some Applications | | --- | --- | --- | --- | --- | | [Rotten Tomatoes](https://www.rottentomatoes.com/) | (review, rating) | 5,311,109 | `load_dataset("rst", "rotten_tomatoes_sentiment")` | Sentiment classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, category) | 899,904 | `load_dataset("rst", "daily_mail_category")`| Topic classification | | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (title, text, summary) | 1,026,616 | `load_dataset("rst", "daily_mail_summary")` | Summarization; Sentence expansion| | [Daily Mail](https://www.dailymail.co.uk/home/index.html) | (text, events) | 1,006,412 | `load_dataset("rst", "daily_mail_temporal")` | Temporal reasoning| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (entity, entity_type, text) | 2,214,274 | `load_dataset("rst", "wikidata_entity")` | Entity typing| | [Wikidata](https://www.wikidata.org/wiki/Wikidata:Main_Page) | (subject, object, relation, text) | 1,526,674 | `load_dataset("rst", "wikidata_relation")` | Relation extraction; Fact retrieval| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, category) | 112,109 | `load_dataset("rst", "wikihow_text_category")` | Topic classification | | [wikiHow](https://www.wikihow.com/Main-Page) | (low_category, high_category) | 4,868 | `load_dataset("rst", "wikihow_category_hierarchy")` | Relation extraction; Commonsense reasoning| | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, steps) | 47,956 | `load_dataset("rst", "wikihow_goal_step")` | Intent detection| | [wikiHow](https://www.wikihow.com/Main-Page) | (text, summary) | 703,278 | `load_dataset("rst", "wikihow_summary")` | Summarization; Sentence expansion | | [wikiHow](https://www.wikihow.com/Main-Page) | (goal, first_step, second_step) | 47,787 | `load_dataset("rst", "wikihow_procedure")` | Temporal reasoning | | [wikiHow](https://www.wikihow.com/Main-Page) | (question, description, answer, related_questions) | 47,705 | `load_dataset("rst", "wikihow_question")` | Question generation| | [Wikipedia](https://www.wikipedia.org/) | (text, entities) |22,231,011 | `load_dataset("rst", "wikipedia_entities")` | Entity recognition| [Wikipedia](https://www.wikipedia.org/) | (texts, titles) | 3,296,225 | `load_dataset("rst", "wikipedia_sections")` | Summarization| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, pos) | 27,123 | `load_dataset("rst", "wordnet_pos")` | Part-of-speech tagging| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, meaning, possible_meanings) | 27,123 | `load_dataset("rst", "wordnet_meaning")` | Word sense disambiguation| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, synonyms) | 17,804 | `load_dataset("rst", "wordnet_synonym")`| Paraphrasing| | [WordNet](https://wordnet.princeton.edu/) | (word, sentence, antonyms) | 6,408 | `load_dataset("rst", "wordnet_antonym")` |Negation | | [ConTRoL]() | (premise, hypothesis, label) | 8,323 | `load_dataset("rst", "qa_control")` | Natural language inference| |[DREAM](https://transacl.org/ojs/index.php/tacl/article/view/1534)| (context, question, options, answer) | 9,164 | `load_dataset("rst", "qa_dream")` | Reading comprehension| | [LogiQA](https://doi.org/10.24963/ijcai.2020/501) | (context, question, options, answer) | 7,974 | 
`load_dataset("rst", "qa_logiqa")` | Reading comprehension| | [ReClor](https://openreview.net/forum?id=HJgJtT4tvB) | (context, question, options, answer) | 5,138 | `load_dataset("rst", "qa_reclor")` |Reading comprehension | | [RACE](https://doi.org/10.18653/v1/d17-1082) | (context, question, options, answer) | 44,880 | `load_dataset("rst", "qa_race")` | Reading comprehension| | [RACE-C](http://proceedings.mlr.press/v101/liang19a.html) | (context, question, options, answer) | 5,093 | `load_dataset("rst", "qa_race_c")` | Reading comprehension| | [TriviaQA](https://doi.org/10.18653/v1/P17-1147) | (context, question, answer) | 46,636 | `load_dataset("rst", "qa_triviaqa")` |Reading comprehension | | [Arxiv](https://arxiv.org/) | (text, category) | 1,696,348 | `load_dataset("rst", "arxiv_category")` |Topic classification| | [Arxiv](https://arxiv.org/) | (text, summary) | 1,696,348 | `load_dataset("rst", "arxiv_summary")` | Summarization; Sentence expansion| | [Paperswithcode](https://paperswithcode.com/) | (text, entities, datasets, methods, tasks, metrics) | 4,731,233 | `load_dataset("rst", "paperswithcode_entity")` | Entity recognition| | [Paperswithcode](https://paperswithcode.com/) | (text, summary) | 120,924 | `load_dataset("rst", "paperswithcode_summary")` | Summarization; Sentence expansion| ## Bibtext for Citation Info ``` @article{yuan2022restructured, title={reStructured Pre-training}, author={Yuan, Weizhe and Liu, Pengfei}, journal={arXiv preprint arXiv:2206.11147}, year={2022} } ```
distilbert-base-uncased
[ "pytorch", "tf", "jax", "rust", "safetensors", "distilbert", "fill-mask", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:1910.01108", "transformers", "exbert", "license:apache-2.0", "autotrain_compatible", "has_space" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10,887,471
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8748965566869354 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1216 - F1: 0.8749 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2247 | 1.0 | 834 | 0.1429 | 0.8432 | | 0.1127 | 2.0 | 1668 | 0.1270 | 0.8653 | | 0.0712 | 3.0 | 2502 | 0.1216 | 0.8749 | ### Framework versions - Transformers 4.20.1 - Pytorch 1.12.0+cu116 - Datasets 2.3.2 - Tokenizers 0.12.1
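The card above stops at the framework versions; a minimal inference sketch for this PAN-X German NER checkpoint with the `transformers` pipeline is shown below. The repository path is a placeholder assumption, since the card does not state the owning namespace.

```python
from transformers import pipeline

# Hypothetical Hub path: the card does not state the owning namespace.
model_id = "<namespace>/xlm-roberta-base-finetuned-panx-de"

# Group sub-word predictions back into full entity spans.
ner = pipeline("token-classification", model=model_id, aggregation_strategy="simple")

print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
# -> list of dicts with entity_group, score, word, start, end
```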
openai-gpt
[ "pytorch", "tf", "rust", "safetensors", "openai-gpt", "text-generation", "en", "arxiv:1705.11168", "arxiv:1803.02324", "arxiv:1910.09700", "transformers", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "OpenAIGPTLMHeadModel" ], "model_type": "openai-gpt", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
65,432
null
Access to model deseipel/medium-LucyClarke_ is restricted and you are not in the authorized list. Visit https://huggingface.co/deseipel/medium-LucyClarke_ to ask for access.
ARTeLab/mbart-summarization-mlsum
[ "pytorch", "mbart", "text2text-generation", "it", "dataset:ARTeLab/mlsum-it", "transformers", "summarization", "autotrain_compatible", "has_space" ]
summarization
{ "architectures": [ "MBartForConditionalGeneration" ], "model_type": "mbart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
111
2022-09-02T23:16:29Z
--- tags: - generated_from_keras_callback model-index: - name: monday-custom-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # monday-custom-model This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.21.2 - TensorFlow 2.8.2 - Datasets 2.4.0 - Tokenizers 0.12.1
AdapterHub/roberta-base-pf-emo
[ "roberta", "en", "dataset:emo", "arxiv:2104.08247", "adapter-transformers", "text-classification" ]
text-classification
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
2022-09-03T15:56:16Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # Pavankalyan/Sentence_embedding_fine-tuned This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('Pavankalyan/Sentence_embedding_fine-tuned') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch def cls_pooling(model_output, attention_mask): return model_output[0][:,0] # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('Pavankalyan/Sentence_embedding_fine-tuned') model = AutoModel.from_pretrained('Pavankalyan/Sentence_embedding_fine-tuned') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, cls pooling. sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Pavankalyan/Sentence_embedding_fine-tuned) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 706 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.DenoisingAutoEncoderLoss.DenoisingAutoEncoderLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 3e-05 }, "scheduler": "constantlr", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
AdapterHub/roberta-base-pf-winogrande
[ "roberta", "en", "dataset:winogrande", "arxiv:2104.08247", "adapter-transformers", "adapterhub:comsense/winogrande" ]
null
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-03T20:07:17Z
--- license: mit widget: - text: "Bu sene eriğin kilosu kaç lira olacak?" example_title: "Question" - text: "Evlilik mükemmel bir kurum ama kim bir kurumda yaşamak ister?" example_title: "Not Question" --- # Question Detection Model Fine-Tuned with a Tweet Dataset You can find a detailed explanation of the dataset [here](https://github.com/izzetkalic/botcuk-dataset-analyze/tree/main/datasets/qd-tweet). The labels are: * RQ: Rhetorical Questions * FK: Factual Knowledge * OQ: Other Questions * NQ: Not Question
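A minimal usage sketch for this question-detection model is given below, reusing the widget examples from the card. The repository path is a placeholder assumption, and the exact label strings are assumed to follow the RQ/FK/OQ/NQ scheme listed above.

```python
from transformers import pipeline

# Hypothetical Hub path; the card does not state where the checkpoint lives.
classifier = pipeline("text-classification", model="<namespace>/qd-tweet")

# Widget examples from the card; the exact label strings (RQ/FK/OQ/NQ) are assumed
# to match the scheme described above.
for text in [
    "Bu sene eriğin kilosu kaç lira olacak?",
    "Evlilik mükemmel bir kurum ama kim bir kurumda yaşamak ister?",
]:
    print(text, "->", classifier(text))
```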
AdapterHub/roberta-base-pf-wnut_17
[ "roberta", "en", "dataset:wnut_17", "arxiv:2104.08247", "adapter-transformers", "token-classification" ]
token-classification
{ "architectures": null, "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-09-03T20:11:18Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: distilr2-lr2e05-wd0.1-bs64 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilr2-lr2e05-wd0.1-bs64 This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2722 - Rmse: 0.5218 - Mse: 0.2722 - Mae: 0.4090 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 512 - eval_batch_size: 512 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | 0.2771 | 1.0 | 312 | 0.2742 | 0.5237 | 0.2742 | 0.4241 | | 0.2737 | 2.0 | 624 | 0.2726 | 0.5221 | 0.2726 | 0.4079 | | 0.2718 | 3.0 | 936 | 0.2727 | 0.5222 | 0.2727 | 0.4149 | | 0.2696 | 4.0 | 1248 | 0.2722 | 0.5218 | 0.2722 | 0.4090 | ### Framework versions - Transformers 4.19.0.dev0 - Pytorch 1.9.0+cu111 - Datasets 2.4.0 - Tokenizers 0.12.1
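Since the card reports RMSE/MSE/MAE, the checkpoint appears to be a regression fine-tune of DistilRoBERTa. The sketch below assumes it was saved as a sequence-classification model with a single-logit head; both the repository path and that assumption are hypothetical.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical Hub path; assumes the checkpoint keeps a single-logit regression head.
model_id = "<namespace>/distilr2-lr2e05-wd0.1-bs64"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("An example sentence to score.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # raw regression output
print(score)
```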
Aeskybunnie/Me
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-09-03T21:51:26Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - go_emotions metrics: - accuracy - f1 model-index: - name: roberta-large-bne-finetuned-go_emotions-es results: - task: name: Text Classification type: text-classification dataset: name: go_emotions type: go_emotions config: simplified split: train args: simplified metrics: - name: Accuracy type: accuracy value: 0.5668425681618294 - name: F1 type: f1 value: 0.5572049178848779 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-large-bne-finetuned-go_emotions-es This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-large-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-large-bne) on the go_emotions dataset. It achieves the following results on the evaluation set: - Loss: 3.2457 - Accuracy: 0.5668 - F1: 0.5572 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 1.5678 | 1.0 | 9077 | 1.5649 | 0.5671 | 0.5197 | | 1.3898 | 2.0 | 18154 | 1.5005 | 0.5776 | 0.5492 | | 0.915 | 3.0 | 27231 | 1.8045 | 0.5891 | 0.5692 | | 0.5424 | 4.0 | 36308 | 2.8463 | 0.5646 | 0.5519 | | 0.2018 | 5.0 | 45385 | 3.2457 | 0.5668 | 0.5572 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
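A minimal Spanish emotion-classification sketch for this checkpoint is shown below; the repository path is a placeholder assumption, since the card only gives the short model name.

```python
from transformers import pipeline

# Hypothetical Hub path; the card only gives the short model name.
classifier = pipeline(
    "text-classification",
    model="<namespace>/roberta-large-bne-finetuned-go_emotions-es",
)

print(classifier("¡Estoy muy contento con los resultados!"))
# -> [{'label': ..., 'score': ...}] with the most likely emotion label
```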
Akashpb13/xlsr_hungarian_new
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "hu", "dataset:mozilla-foundation/common_voice_8_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "model_for_talk", "mozilla-foundation/common_voice_8_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_8_0 model-index: - name: Fine_Tunning_on_CV_Urdu_dataset results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Fine_Tunning_on_CV_Urdu_dataset This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_8_0 dataset. It achieves the following results on the evaluation set: - Loss: 1.2389 - Wer: 0.7380 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 30 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 15.2352 | 1.69 | 100 | 4.0555 | 1.0 | | 3.3873 | 3.39 | 200 | 3.2521 | 1.0 | | 3.2387 | 5.08 | 300 | 3.2304 | 1.0 | | 3.1983 | 6.78 | 400 | 3.1712 | 1.0 | | 3.1224 | 8.47 | 500 | 3.0883 | 1.0 | | 3.0782 | 10.17 | 600 | 3.0767 | 0.9996 | | 3.0618 | 11.86 | 700 | 3.0280 | 1.0 | | 2.9929 | 13.56 | 800 | 2.8994 | 1.0 | | 2.785 | 15.25 | 900 | 2.4330 | 1.0 | | 2.1276 | 16.95 | 1000 | 1.7795 | 0.9517 | | 1.5544 | 18.64 | 1100 | 1.5101 | 0.8266 | | 1.2651 | 20.34 | 1200 | 1.4037 | 0.7993 | | 1.0816 | 22.03 | 1300 | 1.3101 | 0.7638 | | 0.9817 | 23.73 | 1400 | 1.2855 | 0.7542 | | 0.9019 | 25.42 | 1500 | 1.2737 | 0.7421 | | 0.8688 | 27.12 | 1600 | 1.2457 | 0.7435 | | 0.8293 | 28.81 | 1700 | 1.2389 | 0.7380 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.11.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
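For completeness, a minimal transcription sketch for this Urdu wav2vec2 checkpoint is given below; the repository path and audio file name are placeholder assumptions.

```python
from transformers import pipeline

# Hypothetical Hub path; the card only gives the short model name.
asr = pipeline(
    "automatic-speech-recognition",
    model="<namespace>/Fine_Tunning_on_CV_Urdu_dataset",
)

# The pipeline decodes and resamples the file to the 16 kHz rate the model expects.
print(asr("urdu_sample.wav")["text"])
```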
AlanDev/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-en results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.en metrics: - name: F1 type: f1 value: 0.6886160714285715 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-en This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.4043 - F1: 0.6886 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1347 | 1.0 | 50 | 0.5771 | 0.4880 | | 0.5066 | 2.0 | 100 | 0.4209 | 0.6582 | | 0.3631 | 3.0 | 150 | 0.4043 | 0.6886 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
AlbertHSU/BertTEST
[ "pytorch" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: en thumbnail: http://www.huggingtweets.com/reda_getachew/1662284943859/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1464982586443370501/jh6Dqife_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Getachew K Reda</div> <div style="text-align: center; font-size: 14px;">@reda_getachew</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Getachew K Reda. | Data | Getachew K Reda | | --- | --- | | Tweets downloaded | 605 | | Retweets | 73 | | Short tweets | 9 | | Tweets kept | 523 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1sf5r66e/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @reda_getachew's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/jlj5mw14) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/jlj5mw14/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/reda_getachew') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
AnjanBiswas/distilbert-base-uncased-finetuned-emotion
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- tags: - generated_from_trainer metrics: - accuracy model-index: - name: sagemaker-bert-mini-arabic results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # sagemaker-bert-mini-arabic This model is a fine-tuned version of [asafaya/bert-mini-arabic](https://huggingface.co/asafaya/bert-mini-arabic) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2531 - Accuracy: 0.8974 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.3385 | 1.0 | 1469 | 0.2707 | 0.8840 | | 0.2416 | 2.0 | 2938 | 0.2531 | 0.8974 | ### Framework versions - Transformers 4.12.3 - Pytorch 1.9.1 - Datasets 1.15.1 - Tokenizers 0.10.3
AnonymousSub/AR_SDR_HF_model_base
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2022-09-05T09:26:38Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: wav2vec2-xls-r-300m-arabic_speech_commands_10s_one_speaker_all_classes_TTS results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-xls-r-300m-arabic_speech_commands_10s_one_speaker_all_classes_TTS This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2062 - Accuracy: 0.9579 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 60 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.6865 | 0.99 | 34 | 3.6873 | 0.025 | | 3.6155 | 1.99 | 68 | 3.4150 | 0.2188 | | 2.6933 | 2.99 | 102 | 2.4527 | 0.4625 | | 1.789 | 3.99 | 136 | 1.5249 | 0.7246 | | 0.8812 | 4.99 | 170 | 0.7804 | 0.8708 | | 0.4054 | 5.99 | 204 | 0.6304 | 0.8558 | | 0.3481 | 6.99 | 238 | 0.5552 | 0.8667 | | 0.238 | 7.99 | 272 | 0.4142 | 0.9113 | | 0.1981 | 8.99 | 306 | 0.3007 | 0.9354 | | 0.1254 | 9.99 | 340 | 0.2556 | 0.9479 | | 0.1356 | 10.99 | 374 | 0.5148 | 0.8825 | | 0.1263 | 11.99 | 408 | 0.3228 | 0.9308 | | 0.1074 | 12.99 | 442 | 0.3085 | 0.9279 | | 0.0756 | 13.99 | 476 | 0.4546 | 0.9029 | | 0.0763 | 14.99 | 510 | 0.4045 | 0.9133 | | 0.0902 | 15.99 | 544 | 0.3123 | 0.9287 | | 0.1134 | 16.99 | 578 | 0.2054 | 0.9504 | | 0.0943 | 17.99 | 612 | 0.2871 | 0.93 | | 0.0511 | 18.99 | 646 | 0.3628 | 0.9292 | | 0.0525 | 19.99 | 680 | 0.2228 | 0.9471 | | 0.0769 | 20.99 | 714 | 0.3069 | 0.9329 | | 0.0564 | 21.99 | 748 | 0.2658 | 0.9358 | | 0.0319 | 22.99 | 782 | 0.2886 | 0.9387 | | 0.0485 | 23.99 | 816 | 0.2342 | 0.9467 | | 0.0542 | 24.99 | 850 | 0.3723 | 0.9287 | | 0.0478 | 25.99 | 884 | 0.2890 | 0.9396 | | 0.0373 | 26.99 | 918 | 0.2849 | 0.9383 | | 0.0437 | 27.99 | 952 | 0.3886 | 0.9237 | | 0.02 | 28.99 | 986 | 0.2672 | 0.9387 | | 0.0379 | 29.99 | 1020 | 0.2946 | 0.9363 | | 0.0253 | 30.99 | 1054 | 0.2499 | 0.9433 | | 0.0256 | 31.99 | 1088 | 0.2967 | 0.9337 | | 0.029 | 32.99 | 1122 | 0.2577 | 0.9458 | | 0.0427 | 33.99 | 1156 | 0.2899 | 0.9396 | | 0.0167 | 34.99 | 1190 | 0.2984 | 0.9437 | | 0.0334 | 35.99 | 1224 | 0.4822 | 0.9175 | | 0.0288 | 36.99 | 1258 | 0.2802 | 0.9417 | | 0.017 | 37.99 | 1292 | 0.2233 | 0.9504 | | 0.0064 | 38.99 | 1326 | 0.2657 | 0.9429 | | 0.0176 | 39.99 | 1360 | 0.2062 | 0.9579 | | 0.0307 | 40.99 | 1394 | 0.3633 | 0.9275 | | 0.0208 | 41.99 | 1428 | 0.3059 | 0.9421 | | 0.0091 | 42.99 | 1462 | 0.2488 | 0.9483 | | 0.0121 | 43.99 | 1496 | 0.2397 | 0.9496 | | 0.0106 | 44.99 | 1530 | 0.2958 | 0.9413 | | 0.0176 | 45.99 | 1564 | 0.2243 | 0.9525 | | 0.0153 | 46.99 | 1598 | 0.2293 | 0.9537 | | 0.011 | 47.99 | 1632 | 0.2654 | 0.9496 | | 0.0237 | 48.99 | 1666 | 0.2252 | 0.9533 | | 0.0053 | 49.99 | 1700 | 0.2380 | 0.9483 | | 0.0142 | 50.99 | 1734 | 
0.2590 | 0.9467 | | 0.0259 | 51.99 | 1768 | 0.2363 | 0.9508 | | 0.0062 | 52.99 | 1802 | 0.2451 | 0.9496 | | 0.0123 | 53.99 | 1836 | 0.2546 | 0.9479 | | 0.011 | 54.99 | 1870 | 0.2578 | 0.9487 | | 0.0143 | 55.99 | 1904 | 0.2770 | 0.945 | | 0.015 | 56.99 | 1938 | 0.2869 | 0.9421 | | 0.0099 | 57.99 | 1972 | 0.2922 | 0.9429 | | 0.0086 | 58.99 | 2006 | 0.2783 | 0.9437 | | 0.013 | 59.99 | 2040 | 0.2748 | 0.9433 | ### Framework versions - Transformers 4.21.1 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
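A minimal keyword-spotting sketch for this Arabic speech-commands classifier is shown below; the repository path and clip name are placeholder assumptions.

```python
from transformers import pipeline

# Hypothetical Hub path; the card only gives the short model name.
clf = pipeline(
    "audio-classification",
    model="<namespace>/wav2vec2-xls-r-300m-arabic_speech_commands_10s_one_speaker_all_classes_TTS",
)

# Top-3 predicted command labels for a short clip.
print(clf("command_clip.wav", top_k=3))
```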
AnonymousSub/SR_rule_based_roberta_hier_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
# Bert Online Discussions (bert-web-discussions-en) This model is a fine-tuned version of the [BERT base model](https://huggingface.co/bert-base-uncased). It was introduced in [this paper](https://aclanthology.org/2022.acl-long.379/). ## Model description The BERT base language model was fine-tuned on the [Webis-CMV-20 corpus](https://zenodo.org/record/3778298#.YxB-HC223RZ) and on the [args.me corpus](https://zenodo.org/record/3734893#.YxB-NC223RY). The model was trained on a sample of 2,469,026 sentences in total.
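Since this is a masked-language-model fine-tune, a minimal fill-mask sketch is given below; the owning namespace is not stated on the card, so the repository path is an assumption.

```python
from transformers import pipeline

# The owning namespace is not stated on the card, so this path is an assumption.
fill = pipeline("fill-mask", model="<namespace>/bert-web-discussions-en")

# BERT-style checkpoints use the [MASK] token.
for pred in fill("I think your argument about this policy is [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```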
Araf/Ummah
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: cards-demo-model3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # cards-demo-model3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.9271 - F1: 0.7505 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.301 | 1.0 | 41 | 0.9127 | 0.7477 | | 0.318 | 2.0 | 82 | 0.9173 | 0.7574 | | 0.2757 | 3.0 | 123 | 0.9271 | 0.7505 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Tokenizers 0.12.1
Aries/T5_question_answering
[ "pytorch", "jax", "t5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "T5ForConditionalGeneration" ], "model_type": "t5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } } }
5
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: tire-types results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.7230769395828247 --- # tire-types Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### all-terrain tire ![all-terrain tire](images/all-terrain_tire.jpg) #### competition tire ![competition tire](images/competition_tire.jpg) #### passenger tire ![passenger tire](images/passenger_tire.jpg)
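A minimal classification sketch for this HuggingPics image classifier is shown below; the repository path and image file name are placeholder assumptions.

```python
from transformers import pipeline

# Hypothetical Hub path; the card only gives the short model name.
clf = pipeline("image-classification", model="<namespace>/tire-types")

# Accepts a local path or an image URL; labels follow the classes shown above.
print(clf("some_tire_photo.jpg"))
```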
Augustvember/WokkaBot99
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-paraphrase-feedback results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-paraphrase-feedback This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3640 - Rouge1: 55.8307 - Rouge2: 49.7983 - Rougel: 51.7379 - Rougelsum: 55.0839 - Gen Len: 19.4385 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | 0.6009 | 1.0 | 521 | 0.3640 | 55.8307 | 49.7983 | 51.7379 | 55.0839 | 19.4385 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
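Since the card above only reports ROUGE scores and hyperparameters, a minimal generation sketch may help show how a BART paraphrase checkpoint like this is typically used. The `your-username/bart-paraphrase-feedback` repo id, the input sentence, and the generation settings are assumptions, not values taken from the card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder repo id - the card gives only the model name, not the namespace.
model_id = "your-username/bart-paraphrase-feedback"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "The project was delivered late because several dependencies arrived after the agreed deadline."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
# Beam search keeps the paraphrase close to the source wording; tune as needed.
outputs = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```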
Axon/resnet18-v1
[ "dataset:ImageNet", "arxiv:1512.03385", "Axon", "Elixir", "license:apache-2.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- datasets: - relbert/semeval2012_relational_similarity model-index: - name: relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification-conceptnet-validated results: - task: name: Relation Mapping type: sorting-task dataset: name: Relation Mapping args: relbert/relation_mapping type: relation-mapping metrics: - name: Accuracy type: accuracy value: 0.856984126984127 - task: name: Analogy Questions (SAT full) type: multiple-choice-qa dataset: name: SAT full args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5080213903743316 - task: name: Analogy Questions (SAT) type: multiple-choice-qa dataset: name: SAT args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5192878338278932 - task: name: Analogy Questions (BATS) type: multiple-choice-qa dataset: name: BATS args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.6653696498054474 - task: name: Analogy Questions (Google) type: multiple-choice-qa dataset: name: Google args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.84 - task: name: Analogy Questions (U2) type: multiple-choice-qa dataset: name: U2 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.45614035087719296 - task: name: Analogy Questions (U4) type: multiple-choice-qa dataset: name: U4 args: relbert/analogy_questions type: analogy-questions metrics: - name: Accuracy type: accuracy value: 0.5393518518518519 - task: name: Lexical Relation Classification (BLESS) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9132138014163026 - name: F1 (macro) type: f1_macro value: 0.9101733559621606 - task: name: Lexical Relation Classification (CogALexV) type: classification dataset: name: CogALexV args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.8502347417840377 - name: F1 (macro) type: f1_macro value: 0.6852576593859314 - task: name: Lexical Relation Classification (EVALution) type: classification dataset: name: BLESS args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.6852654387865655 - name: F1 (macro) type: f1_macro value: 0.6694360423727916 - task: name: Lexical Relation Classification (K&H+N) type: classification dataset: name: K&H+N args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9604228976838005 - name: F1 (macro) type: f1_macro value: 0.8826948107609662 - task: name: Lexical Relation Classification (ROOT09) type: classification dataset: name: ROOT09 args: relbert/lexical_relation_classification type: relation-classification metrics: - name: F1 type: f1 value: 0.9022250078345346 - name: F1 (macro) type: f1_macro value: 0.9002463330589072 --- # relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification-conceptnet-validated RelBERT fine-tuned from [roberta-large](https://huggingface.co/roberta-large) on [relbert/semeval2012_relational_similarity](https://huggingface.co/datasets/relbert/semeval2012_relational_similarity). Fine-tuning is done via [RelBERT](https://github.com/asahi417/relbert) library (see the repository for more detail). 
It achieves the following results on the relation understanding tasks: - Analogy Question ([dataset](https://huggingface.co/datasets/relbert/analogy_questions), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification-conceptnet-validated/raw/main/analogy.json)): - Accuracy on SAT (full): 0.5080213903743316 - Accuracy on SAT: 0.5192878338278932 - Accuracy on BATS: 0.6653696498054474 - Accuracy on U2: 0.45614035087719296 - Accuracy on U4: 0.5393518518518519 - Accuracy on Google: 0.84 - Lexical Relation Classification ([dataset](https://huggingface.co/datasets/relbert/lexical_relation_classification), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification-conceptnet-validated/raw/main/classification.json)): - Micro F1 score on BLESS: 0.9132138014163026 - Micro F1 score on CogALexV: 0.8502347417840377 - Micro F1 score on EVALution: 0.6852654387865655 - Micro F1 score on K&H+N: 0.9604228976838005 - Micro F1 score on ROOT09: 0.9022250078345346 - Relation Mapping ([dataset](https://huggingface.co/datasets/relbert/relation_mapping), [full result](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification-conceptnet-validated/raw/main/relation_mapping.json)): - Accuracy on Relation Mapping: 0.856984126984127 ### Usage This model can be used through the [relbert library](https://github.com/asahi417/relbert). Install the library via pip ```shell pip install relbert ``` and activate model as below. ```python from relbert import RelBERT model = RelBERT("relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification-conceptnet-validated") vector = model.get_embedding(['Tokyo', 'Japan']) # shape of (1024, ) ``` ### Training hyperparameters The following hyperparameters were used during training: - model: roberta-large - max_length: 64 - mode: mask - data: relbert/semeval2012_relational_similarity - split: train - data_eval: relbert/conceptnet_high_confidence - split_eval: full - template_mode: manual - template: Today, I finally discovered the relation between <subj> and <obj> : <obj> is <subj>'s <mask> - loss_function: nce_logout - classification_loss: True - temperature_nce_constant: 0.05 - temperature_nce_rank: {'min': 0.01, 'max': 0.05, 'type': 'linear'} - epoch: 27 - batch: 128 - lr: 5e-06 - lr_decay: False - lr_warmup: 1 - weight_decay: 0 - random_seed: 0 - exclude_relation: None - exclude_relation_eval: None - n_sample: 640 - gradient_accumulation: 8 The full configuration can be found at [fine-tuning parameter file](https://huggingface.co/relbert/roberta-large-semeval2012-mask-prompt-b-nce-classification-conceptnet-validated/raw/main/trainer_config.json). ### Reference If you use any resource from RelBERT, please consider to cite our [paper](https://aclanthology.org/2021.eacl-demos.7/). ``` @inproceedings{ushio-etal-2021-distilling-relation-embeddings, title = "{D}istilling {R}elation {E}mbeddings from {P}re-trained {L}anguage {M}odels", author = "Ushio, Asahi and Schockaert, Steven and Camacho-Collados, Jose", booktitle = "EMNLP 2021", year = "2021", address = "Online", publisher = "Association for Computational Linguistics", } ```
Axon/resnet34-v1
[ "dataset:ImageNet", "arxiv:1512.03385", "Axon", "Elixir", "license:apache-2.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - common_voice_10_0 model-index: - name: wav2vec2-large-xls-r-300m-j-phoneme-colab-new results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-large-xls-r-300m-j-phoneme-colab-new This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5498 - Wer: 0.3257 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 397 | 0.7976 | 0.7045 | | No log | 2.0 | 794 | 0.5777 | 0.5723 | | 1.7064 | 3.0 | 1191 | 0.4775 | 0.4706 | | 1.7064 | 4.0 | 1588 | 0.4755 | 0.4580 | | 1.7064 | 5.0 | 1985 | 0.4678 | 0.4250 | | 0.3823 | 6.0 | 2382 | 0.4742 | 0.4196 | | 0.3823 | 7.0 | 2779 | 0.4419 | 0.3817 | | 0.2485 | 8.0 | 3176 | 0.4402 | 0.3711 | | 0.2485 | 9.0 | 3573 | 0.4942 | 0.3703 | | 0.2485 | 10.0 | 3970 | 0.4877 | 0.3613 | | 0.1735 | 11.0 | 4367 | 0.5073 | 0.3453 | | 0.1735 | 12.0 | 4764 | 0.5127 | 0.3354 | | 0.1238 | 13.0 | 5161 | 0.5545 | 0.3392 | | 0.1238 | 14.0 | 5558 | 0.5419 | 0.3290 | | 0.1238 | 15.0 | 5955 | 0.5498 | 0.3257 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.10.0+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
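The card stops at the training log, so a short transcription sketch is added for illustration; the `your-username/wav2vec2-large-xls-r-300m-j-phoneme-colab-new` repo id and the audio file name are placeholders. Note that the checkpoint was trained on phoneme targets, so the output string is a phoneme sequence rather than ordinary text.

```python
from transformers import pipeline

# Placeholder repo id - adjust to the actual namespace on the Hub.
asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/wav2vec2-large-xls-r-300m-j-phoneme-colab-new",
)

# Expects 16 kHz mono audio; the pipeline resamples other rates when ffmpeg is available.
print(asr("sample.wav")["text"])
```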
Axon/resnet50-v1
[ "dataset:ImageNet", "arxiv:1512.03385", "Axon", "Elixir", "license:apache-2.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - coscan-speech metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-coscan-sex results: - task: name: Audio Classification type: audio-classification dataset: name: Coscan Speech type: NbAiLab/coscan-speech args: no metrics: - name: Test Accuracy type: accuracy value: 0.9993247805536799 - name: Validation Accuracy type: accuracy value: 0.9965283657917019 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-coscan-sex This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the coscan-speech dataset. It achieves the following results on the evaluation set: - Loss: 0.0229 - Accuracy: 0.9965 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.0034 | 1.0 | 6644 | 0.0229 | 0.9965 | ### Framework versions - Transformers 4.21.0 - Pytorch 1.10.1+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
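For completeness, here is a rough inference sketch for an audio-classification checkpoint like the one above; the `your-username/wav2vec2-base-finetuned-coscan-sex` repo id and the clip name are placeholder assumptions.

```python
from transformers import pipeline

# Placeholder repo id - adjust to the actual namespace on the Hub.
classifier = pipeline(
    "audio-classification",
    model="your-username/wav2vec2-base-finetuned-coscan-sex",
)

# Returns the predicted labels with scores for a 16 kHz clip.
for prediction in classifier("speaker_clip.wav"):
    print(prediction["label"], round(prediction["score"], 3))
```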
Aybars/XLM_Turkish
[ "pytorch", "xlm-roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "XLMRobertaForQuestionAnswering" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: afrodp95/distilbert-base-uncased-finetuned-job-skills-ner results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # afrodp95/distilbert-base-uncased-finetuned-job-skills-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0923 - Validation Loss: 0.1313 - Train Precision: 0.3601 - Train Recall: 0.4922 - Train F1: 0.4159 - Train Accuracy: 0.9522 - Epoch: 5 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1386, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.3257 | 0.1935 | 0.3122 | 0.2144 | 0.2542 | 0.9521 | 0 | | 0.1564 | 0.1464 | 0.3503 | 0.3423 | 0.3463 | 0.9546 | 1 | | 0.1257 | 0.1365 | 0.3593 | 0.4893 | 0.4143 | 0.9522 | 2 | | 0.1102 | 0.1318 | 0.3607 | 0.4692 | 0.4079 | 0.9521 | 3 | | 0.1002 | 0.1305 | 0.3504 | 0.4941 | 0.4100 | 0.9515 | 4 | | 0.0923 | 0.1313 | 0.3601 | 0.4922 | 0.4159 | 0.9522 | 5 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.2 - Datasets 2.6.1 - Tokenizers 0.13.2
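Because the card above only lists training metrics, a minimal inference sketch is included here; the repo id is taken from the model name in the card, while the example sentence and the `aggregation_strategy` choice are assumptions.

```python
from transformers import pipeline

# Repo id taken from the card's model name; adjust if the actual path differs.
ner = pipeline(
    "token-classification",
    model="afrodp95/distilbert-base-uncased-finetuned-job-skills-ner",
    aggregation_strategy="simple",  # merge word pieces into whole skill spans
)

text = "We are hiring an engineer with experience in Python, SQL and cloud infrastructure."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```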
Ayham/albert_bert_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: Dogz results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 1.0 --- # Dogz Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Golden Retriever ![Golden Retriever](images/Golden_Retriever.jpg) #### Jack Russell Terrier ![Jack Russell Terrier](images/Jack_Russell_Terrier.jpg) #### Pitbull Terrier ![Pitbull Terrier](images/Pitbull_Terrier.jpg)
Ayham/albert_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: mit tags: - generated_from_trainer model-index: - name: roberta-base-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-finetuned-squad This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0001 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 100 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 2 | 5.6504 | | No log | 2.0 | 4 | 5.0165 | | No log | 3.0 | 6 | 4.2438 | | No log | 4.0 | 8 | 3.2047 | | No log | 5.0 | 10 | 2.3533 | | No log | 6.0 | 12 | 2.1072 | | No log | 7.0 | 14 | 1.4145 | | No log | 8.0 | 16 | 1.0086 | | No log | 9.0 | 18 | 0.5869 | | No log | 10.0 | 20 | 0.2890 | | No log | 11.0 | 22 | 0.1551 | | No log | 12.0 | 24 | 0.0902 | | No log | 13.0 | 26 | 0.0503 | | No log | 14.0 | 28 | 0.0312 | | No log | 15.0 | 30 | 0.0173 | | No log | 16.0 | 32 | 0.0113 | | No log | 17.0 | 34 | 0.0085 | | No log | 18.0 | 36 | 0.0056 | | No log | 19.0 | 38 | 0.0035 | | No log | 20.0 | 40 | 0.0024 | | No log | 21.0 | 42 | 0.0018 | | No log | 22.0 | 44 | 0.0012 | | No log | 23.0 | 46 | 0.0011 | | No log | 24.0 | 48 | 0.0009 | | No log | 25.0 | 50 | 0.0007 | | No log | 26.0 | 52 | 0.0006 | | No log | 27.0 | 54 | 0.0006 | | No log | 28.0 | 56 | 0.0005 | | No log | 29.0 | 58 | 0.0004 | | No log | 30.0 | 60 | 0.0004 | | No log | 31.0 | 62 | 0.0004 | | No log | 32.0 | 64 | 0.0004 | | No log | 33.0 | 66 | 0.0004 | | No log | 34.0 | 68 | 0.0003 | | No log | 35.0 | 70 | 0.0003 | | No log | 36.0 | 72 | 0.0003 | | No log | 37.0 | 74 | 0.0003 | | No log | 38.0 | 76 | 0.0002 | | No log | 39.0 | 78 | 0.0002 | | No log | 40.0 | 80 | 0.0002 | | No log | 41.0 | 82 | 0.0002 | | No log | 42.0 | 84 | 0.0002 | | No log | 43.0 | 86 | 0.0002 | | No log | 44.0 | 88 | 0.0002 | | No log | 45.0 | 90 | 0.0002 | | No log | 46.0 | 92 | 0.0002 | | No log | 47.0 | 94 | 0.0002 | | No log | 48.0 | 96 | 0.0002 | | No log | 49.0 | 98 | 0.0002 | | No log | 50.0 | 100 | 0.0002 | | No log | 51.0 | 102 | 0.0002 | | No log | 52.0 | 104 | 0.0002 | | No log | 53.0 | 106 | 0.0002 | | No log | 54.0 | 108 | 0.0002 | | No log | 55.0 | 110 | 0.0002 | | No log | 56.0 | 112 | 0.0002 | | No log | 57.0 | 114 | 0.0002 | | No log | 58.0 | 116 | 0.0002 | | No log | 59.0 | 118 | 0.0002 | | No log | 60.0 | 120 | 0.0002 | | No log | 61.0 | 122 | 0.0001 | | No log | 62.0 | 124 | 0.0001 | | No log | 63.0 | 126 | 0.0001 | | No log | 64.0 | 128 | 0.0001 | | No log | 65.0 | 130 | 0.0001 | | No log | 66.0 | 132 | 0.0001 | | No log | 67.0 | 134 | 0.0001 | | No log | 68.0 | 136 | 0.0001 | | No log | 69.0 | 138 | 0.0001 | | No log | 70.0 | 140 | 0.0001 | | No log | 71.0 | 142 | 0.0001 | | No log | 72.0 | 144 | 0.0001 | | No log | 73.0 | 146 | 0.0001 | | No log | 74.0 | 148 | 0.0001 | | No log | 75.0 | 150 | 0.0001 | | No log | 76.0 | 152 | 0.0001 | | No log | 77.0 | 154 | 
0.0001 | | No log | 78.0 | 156 | 0.0001 | | No log | 79.0 | 158 | 0.0001 | | No log | 80.0 | 160 | 0.0001 | | No log | 81.0 | 162 | 0.0001 | | No log | 82.0 | 164 | 0.0001 | | No log | 83.0 | 166 | 0.0001 | | No log | 84.0 | 168 | 0.0001 | | No log | 85.0 | 170 | 0.0001 | | No log | 86.0 | 172 | 0.0001 | | No log | 87.0 | 174 | 0.0001 | | No log | 88.0 | 176 | 0.0001 | | No log | 89.0 | 178 | 0.0001 | | No log | 90.0 | 180 | 0.0001 | | No log | 91.0 | 182 | 0.0001 | | No log | 92.0 | 184 | 0.0001 | | No log | 93.0 | 186 | 0.0001 | | No log | 94.0 | 188 | 0.0001 | | No log | 95.0 | 190 | 0.0001 | | No log | 96.0 | 192 | 0.0001 | | No log | 97.0 | 194 | 0.0001 | | No log | 98.0 | 196 | 0.0001 | | No log | 99.0 | 198 | 0.0001 | | No log | 100.0 | 200 | 0.0001 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
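The long training log above never shows the model in use, so a minimal extractive question-answering sketch follows; the `your-username/roberta-base-finetuned-squad` repo id and the question/context pair are placeholders, since the card does not provide them.

```python
from transformers import pipeline

# Placeholder repo id - the card gives only the model name, not the namespace.
qa = pipeline("question-answering", model="your-username/roberta-base-finetuned-squad")

result = qa(
    question="Which base model was fine-tuned?",
    context="This checkpoint is a fine-tuned version of roberta-base trained with the Hugging Face Trainer on a SQuAD-style dataset.",
)
print(result["answer"], round(result["score"], 3))
```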
Ayham/albert_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** . ## Usage ```python model = load_from_hub(repo_id="slarionne/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"]) ```
Ayham/bert_bert_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
Access to model ONVS/Sporadicism is restricted and you are not in the authorized list. Visit https://huggingface.co/ONVS/Sporadicism to ask for access.
Ayham/bert_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: en license: apache-2.0 library_name: diffusers tags: [] datasets: /content/drive/Shareddrives/artGAN S2 2022/sugimori-artwork metrics: [] --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # ddpm-butterflies-128 ## Model description This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library on the `/content/drive/Shareddrives/artGAN S2 2022/sugimori-artwork` dataset. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training data [TODO: describe the data used to train the model] ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - ema_inv_gamma: None - ema_inv_gamma: None - mixed_precision: fp16 ### Training results 📈 [TensorBoard logs](https://huggingface.co/Tahahah/ddpm-butterflies-128/tensorboard?#scalars)
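The "How to use" section of the card above is still a TODO, so a minimal sampling sketch is offered here as a stand-in; it assumes the weights are published at `Tahahah/ddpm-butterflies-128` (the repo linked from the card's TensorBoard logs) and that the pipeline is an unconditional DDPM.

```python
from diffusers import DDPMPipeline

# Repo id inferred from the TensorBoard link in the card; adjust if it differs.
pipeline = DDPMPipeline.from_pretrained("Tahahah/ddpm-butterflies-128")
pipeline.to("cuda")  # optional: sampling on CPU works but is much slower

# Draw one sample; fewer inference steps trade quality for speed.
image = pipeline(batch_size=1, num_inference_steps=1000).images[0]
image.save("sample.png")
```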
Ayham/bert_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - clinc_oos metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: name: Text Classification type: text-classification dataset: name: clinc_oos type: clinc_oos config: plus split: train args: plus metrics: - name: Accuracy type: accuracy value: 0.9503225806451613 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. It achieves the following results on the evaluation set: - Loss: 0.2339 - Accuracy: 0.9503 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 3.2073 | 1.0 | 1271 | 1.3840 | 0.8542 | | 0.7452 | 2.0 | 2542 | 0.4053 | 0.9316 | | 0.1916 | 3.0 | 3813 | 0.2580 | 0.9452 | | 0.0768 | 4.0 | 5084 | 0.2371 | 0.9477 | | 0.0455 | 5.0 | 6355 | 0.2339 | 0.9503 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
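Since the card above ends with the training table, a short intent-classification sketch is added for illustration; the `your-username/distilbert-base-uncased-finetuned-clinc` repo id and the example utterance are assumptions.

```python
from transformers import pipeline

# Placeholder repo id - the card gives only the model name, not the namespace.
intent_classifier = pipeline(
    "text-classification",
    model="your-username/distilbert-base-uncased-finetuned-clinc",
)

# Returns the most likely clinc_oos intent label with its score.
print(intent_classifier("Please transfer 100 dollars to my savings account"))
```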
Ayham/bert_roberta_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: vi datasets: - cc100 tags: - summarization license: mit widget: - text: "VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam." --- # ViT5-Base Finetuned on `vietnews` Abstractive Summarization (No prefix needed) State-of-the-art pretrained Transformer-based encoder-decoder model for Vietnamese. [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/vit5-pretrained-text-to-text-transformer-for/abstractive-text-summarization-on-vietnews)](https://paperswithcode.com/sota/abstractive-text-summarization-on-vietnews?p=vit5-pretrained-text-to-text-transformer-for) ## How to use For more details, do check out [our Github repo](https://github.com/vietai/ViT5) and [eval script](https://github.com/vietai/ViT5/blob/main/eval/Eval_vietnews_sum.ipynb). ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM ​ tokenizer = AutoTokenizer.from_pretrained("VietAI/vit5-base-vietnews-summarization") model = AutoModelForSeq2SeqLM.from_pretrained("VietAI/vit5-base-vietnews-summarization") model.cuda() ​ sentence = "VietAI là tổ chức phi lợi nhuận với sứ mệnh ươm mầm tài năng về trí tuệ nhân tạo và xây dựng một cộng đồng các chuyên gia trong lĩnh vực trí tuệ nhân tạo đẳng cấp quốc tế tại Việt Nam." sentence = sentence + "</s>" encoding = tokenizer(sentence, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to("cuda"), encoding["attention_mask"].to("cuda") outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, max_length=256, early_stopping=True ) for output in outputs: line = tokenizer.decode(output, skip_special_tokens=True, clean_up_tokenization_spaces=True) print(line) ``` ## Citation ``` @inproceedings{phan-etal-2022-vit5, title = "{V}i{T}5: Pretrained Text-to-Text Transformer for {V}ietnamese Language Generation", author = "Phan, Long and Tran, Hieu and Nguyen, Hieu and Trinh, Trieu H.", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop", year = "2022", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-srw.18", pages = "136--142", } ```
Ayham/bertgpt2_cnn
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- language: en thumbnail: http://www.huggingtweets.com/funfacts/1662519173108/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1251284305185255425/TuAMzBHm_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Funfacts</div> <div style="text-align: center; font-size: 14px;">@funfacts</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Funfacts. | Data | Funfacts | | --- | --- | | Tweets downloaded | 2160 | | Retweets | 7 | | Short tweets | 3 | | Tweets kept | 2150 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/j0uu6ccx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @funfacts's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2r4x2tam) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2r4x2tam/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/funfacts') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Ayham/roberta_distilgpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: cc widget: - text: "User: Hey, how are you?" example_title: "How are you?" - text: "User: What did you do today?" example_title: "What did you do today?" - text: "User: What's your favorite movie?" example_title: "What's your favorite movie?" ---
Ayham/robertagpt2_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: bert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9454375 - name: F1 type: f1 value: 0.9458448428504193 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-emotion This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1476 - Accuracy: 0.9454 - F1: 0.9458 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8907 | 1.0 | 250 | 0.2625 | 0.9184 | 0.9157 | | 0.2315 | 2.0 | 500 | 0.1476 | 0.9454 | 0.9458 | ### Framework versions - Transformers 4.21.3 - Pytorch 1.12.1+cu113 - Datasets 2.4.0 - Tokenizers 0.12.1
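As with the other classification cards, an inference sketch may be useful here; this one goes through the model classes directly rather than the pipeline helper. The `your-username/bert-base-uncased-finetuned-emotion` repo id and the input sentence are placeholders.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id - adjust to the actual namespace on the Hub.
model_id = "your-username/bert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I can't believe how well the release went!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its emotion label.
probs = logits.softmax(dim=-1)[0]
predicted = probs.argmax().item()
print(model.config.id2label[predicted], round(probs[predicted].item(), 3))
```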
Ayham/robertagpt2_xsum4
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - autotrain - text-classification language: - bn widget: - text: "I love AutoTrain 🤗" datasets: - neuralspace/autotrain-data-citizen_nlu_bn co2_eq_emissions: emissions: 0.08431503532658222 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1370652766 - CO2 Emissions (in grams): 0.0843 ## Validation Metrics - Loss: 0.117 - Accuracy: 0.971 - Macro F1: 0.971 - Micro F1: 0.971 - Weighted F1: 0.971 - Macro Precision: 0.973 - Micro Precision: 0.971 - Weighted Precision: 0.972 - Macro Recall: 0.970 - Micro Recall: 0.971 - Weighted Recall: 0.971 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/neuralspace/autotrain-citizen_nlu_bn-1370652766 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("neuralspace/autotrain-citizen_nlu_bn-1370652766", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("neuralspace/autotrain-citizen_nlu_bn-1370652766", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Ayham/xlmroberta_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
Access to model Valkyries15/tf_demo is restricted and you are not in the authorized list. Visit https://huggingface.co/Valkyries15/tf_demo to ask for access.
Ayham/xlmroberta_large_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - autotrain - text-classification language: - hi widget: - text: "I love AutoTrain 🤗" datasets: - neuralspace/autotrain-data-citizen_nlu_hindi co2_eq_emissions: emissions: 0.06283545088764929 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 1370952776 - CO2 Emissions (in grams): 0.0628 ## Validation Metrics - Loss: 0.101 - Accuracy: 0.974 - Macro F1: 0.974 - Micro F1: 0.974 - Weighted F1: 0.974 - Macro Precision: 0.975 - Micro Precision: 0.974 - Weighted Precision: 0.975 - Macro Recall: 0.973 - Micro Recall: 0.974 - Weighted Recall: 0.974 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/neuralspace/autotrain-citizen_nlu_hindi-1370952776 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("neuralspace/autotrain-citizen_nlu_hindi-1370952776", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("neuralspace/autotrain-citizen_nlu_hindi-1370952776", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```