modelId: string (length 4-81)
tags: list
pipeline_tag: string (17 classes)
config: dict
downloads: int64 (0-59.7M)
first_commit: timestamp[ns, tz=UTC]
card: string (length 51-438k)
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.6566666666666666 - name: F1 type: f1 value: 0.6979472140762463 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.7339 - Accuracy: 0.6567 - F1: 0.6979 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
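The card above reports metrics and training hyperparameters but no usage snippet. A minimal inference sketch with the 🤗 Transformers pipeline follows; the repository id is a placeholder, since the card does not state where this checkpoint is published.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual Hub location of this fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="your-username/finetuning-sentiment-model-3000-samples",
)

print(classifier("A surprisingly moving film with a terrific lead performance."))
```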
CBreit00/DialoGPT_small_Rick
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-ta results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.ta metrics: - name: F1 type: f1 value: 0.8144578313253013 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-ta This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.2183 - F1: 0.8145 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.5477 | 1.0 | 209 | 0.2732 | 0.7305 | | 0.2506 | 2.0 | 418 | 0.2425 | 0.7626 | | 0.168 | 3.0 | 627 | 0.2183 | 0.8145 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.11.0 - Datasets 1.16.1 - Tokenizers 0.10.3
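Similarly, a hedged usage sketch for the token-classification card above; the repository id is assumed from the model name and the example sentence is illustrative only.

```python
from transformers import pipeline

# Assumed repo id based on the model name in the card above.
ner = pipeline(
    "token-classification",
    model="your-username/xlm-roberta-base-finetuned-panx-ta",
    aggregation_strategy="simple",  # merge word-piece predictions into entity spans
)

text = "ஜெயலலிதா சென்னையில் பிறந்தார்."  # example Tamil sentence for the PAN-X.ta domain
print(ner(text))
```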
CL/safe-math-bot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit --- ### face2contra-sd-dreambooth on Stable Diffusion via Dreambooth #### model by avantcontra This is the Stable Diffusion model fine-tuned on the face2contra-sd-dreambooth concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks face2contra** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/1.jpeg) ![image 1](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/4.jpeg) ![image 2](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/10.jpeg) ![image 3](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/7.jpeg) ![image 4](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/2.jpeg) ![image 5](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/0.jpeg) ![image 6](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/9.jpeg) ![image 7](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/3.jpeg) ![image 8](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/6.jpeg) ![image 9](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/18.jpeg) ![image 10](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/17.jpeg) ![image 11](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/19.jpeg) ![image 12](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/15.jpeg) ![image 13](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/11.jpeg) ![image 14](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/14.jpeg) ![image 15](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/12.jpeg) ![image 16](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/13.jpeg) ![image 17](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/5.jpeg) ![image 18](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/20.jpeg) ![image 19](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/8.jpeg) ![image 20](https://huggingface.co/avantcontra/face2contra-sd-dreambooth/resolve/main/concept_images/16.jpeg)
CLTL/icf-levels-ber
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- language: en thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/2348558617/x0vh6bui3sq97vt4jd2n_400x400.png&#39;)"> </div> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1567266375026053125/0cyfXyiF_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Дмитрий Медведев & MORGENSHTERN</div> <div style="text-align: center; font-size: 14px;">@medvedevrussia-morgen__shtern</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Дмитрий Медведев & MORGENSHTERN. | Data | Дмитрий Медведев | MORGENSHTERN | | --- | --- | --- | | Tweets downloaded | 1745 | 3178 | | Retweets | 298 | 57 | | Short tweets | 50 | 1034 | | Tweets kept | 1397 | 2087 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2wx8v66j/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @medvedevrussia-morgen__shtern's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/qwb0vpv7) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/qwb0vpv7/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/medvedevrussia-morgen__shtern') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. 
[![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
CLTL/icf-levels-ins
[ "pytorch", "roberta", "text-classification", "nl", "transformers", "license:mit" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
Update: https://huggingface.co/Deltaadams/HentaiDiffusion
Callidior/bert2bert-base-arxiv-titlegen
[ "pytorch", "safetensors", "encoder-decoder", "text2text-generation", "en", "dataset:arxiv_dataset", "transformers", "summarization", "license:apache-2.0", "autotrain_compatible", "has_space" ]
summarization
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
145
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 250.25 +/- 16.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
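The usage section above is left as a TODO. A sketch of how such a checkpoint is typically loaded with `huggingface_sb3` is shown below; the repo id and filename are placeholders, not values taken from the card.

```python
import gymnasium as gym  # environment API; older checkpoints may expect classic `gym`
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id and filename -- replace with the actual values for this checkpoint.
checkpoint = load_from_hub(
    repo_id="your-username/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")  # requires the box2d extra
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```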
Cameron/BERT-SBIC-offensive
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: default split: train args: default metrics: - name: Accuracy type: accuracy value: 0.9255 - name: F1 type: f1 value: 0.925520268497019 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2170 - Accuracy: 0.9255 - F1: 0.9255 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8237 | 1.0 | 250 | 0.3205 | 0.9045 | 0.9002 | | 0.2539 | 2.0 | 500 | 0.2170 | 0.9255 | 0.9255 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
Cameron/BERT-eec-emotion
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
Access to model abrizk/autotrain-bart-meeting-summarization-1648858537 is restricted and you are not in the authorized list. Visit https://huggingface.co/abrizk/autotrain-bart-meeting-summarization-1648858537 to ask for access.
Cameron/BERT-jigsaw-severetoxic
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
2022-10-03T21:55:44Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.7.1+cu110 - Datasets 2.2.2 - Tokenizers 0.12.1
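A minimal question-answering sketch for the card above; the repository id is an assumption, as the card does not name the published location of the checkpoint.

```python
from transformers import pipeline

# Assumed repo id; replace with the actual Hub path of this fine-tuned checkpoint.
qa = pipeline("question-answering", model="your-username/bert-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```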
Cameron/BERT-mdgender-convai-ternary
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
38
null
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - suresh-subramanian/autotrain-data-fake-news co2_eq_emissions: emissions: 0.04097854185629584 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1649058538 - CO2 Emissions (in grams): 0.0410 ## Validation Metrics - Loss: 0.387 - Accuracy: 0.815 - Precision: 0.760 - Recall: 0.730 - AUC: 0.902 - F1: 0.745 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/suresh-subramanian/autotrain-fake-news-1649058538 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058538", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058538", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Cameron/BERT-mdgender-wizard
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
30
2022-10-03T22:07:19Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - suresh-subramanian/autotrain-data-fake-news co2_eq_emissions: emissions: 0.040297872306469855 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1649058539 - CO2 Emissions (in grams): 0.0403 ## Validation Metrics - Loss: 0.478 - Accuracy: 0.779 - Precision: 0.814 - Recall: 0.520 - AUC: 0.881 - F1: 0.635 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/suresh-subramanian/autotrain-fake-news-1649058539 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058539", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058539", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Cameron/BERT-rtgender-opgender-annotations
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
2022-10-03T22:07:48Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - suresh-subramanian/autotrain-data-fake-news co2_eq_emissions: emissions: 4.630852478388675 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1649058540 - CO2 Emissions (in grams): 4.6309 ## Validation Metrics - Loss: 0.527 - Accuracy: 0.725 - Precision: 0.729 - Recall: 0.408 - AUC: 0.825 - F1: 0.523 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/suresh-subramanian/autotrain-fake-news-1649058540 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058540", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058540", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Camzure/MaamiBot-test
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - suresh-subramanian/autotrain-data-fake-news co2_eq_emissions: emissions: 4.695596043893512 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1649058541 - CO2 Emissions (in grams): 4.6956 ## Validation Metrics - Loss: 0.459 - Accuracy: 0.779 - Precision: 0.790 - Recall: 0.546 - AUC: 0.881 - F1: 0.646 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/suresh-subramanian/autotrain-fake-news-1649058541 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058541", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058541", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Camzure/MaamiBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-10-03T22:08:00Z
--- tags: - autotrain - text-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - suresh-subramanian/autotrain-data-fake-news co2_eq_emissions: emissions: 12.699762619910537 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1649058542 - CO2 Emissions (in grams): 12.6998 ## Validation Metrics - Loss: 0.624 - Accuracy: 0.637 - Precision: 1.000 - Recall: 0.020 - AUC: 0.652 - F1: 0.039 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/suresh-subramanian/autotrain-fake-news-1649058542 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058542", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("suresh-subramanian/autotrain-fake-news-1649058542", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Canadiancaleb/DialoGPT-small-walter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
2022-10-03T22:13:31Z
--- license: mit --- This model is part of our work "Visual Story Generation Based on Emotional and Keyword Scheme." More information will be provided later.
CapitainData/wav2vec2-large-xlsr-turkish-demo-colab
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model AJRVIDEO/Elephant is restricted and you are not in the authorized list. Visit https://huggingface.co/AJRVIDEO/Elephant to ask for access.
Capreolus/bert-base-msmarco
[ "pytorch", "tf", "jax", "bert", "text-classification", "arxiv:2008.09093", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
238
null
--- license: mit --- ### MattVidPro on Stable Diffusion This is the `<mattvidpro>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<mattvidpro> 0](https://huggingface.co/sd-concepts-library/mattvidpro/resolve/main/concept_images/2.jpeg) ![<mattvidpro> 1](https://huggingface.co/sd-concepts-library/mattvidpro/resolve/main/concept_images/1.jpeg) ![<mattvidpro> 2](https://huggingface.co/sd-concepts-library/mattvidpro/resolve/main/concept_images/3.jpeg) ![<mattvidpro> 3](https://huggingface.co/sd-concepts-library/mattvidpro/resolve/main/concept_images/0.jpeg)
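Besides the notebooks linked above, a textual-inversion concept like this can also be loaded with 🤗 Diffusers. The sketch below assumes the `sd-concepts-library/mattvidpro` embedding repo visible in the card's image URLs and a Stable Diffusion v1.5 base checkpoint, which the card does not specify.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base checkpoint is an assumption; the concept repo id comes from the card's image URLs.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/mattvidpro")

image = pipe("a portrait of <mattvidpro> in a recording studio").images[0]
image.save("mattvidpro.png")
```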
Capreolus/birch-bert-large-mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
Access to model AJRVIDEO/Elephantman is restricted and you are not in the authorized list. Visit https://huggingface.co/AJRVIDEO/Elephantman to ask for access.
Capreolus/birch-bert-large-msmarco_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: test-trainer results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # test-trainer This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7993 - Accuracy: 0.704 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 63 | 0.8245 | 0.696 | | No log | 2.0 | 126 | 0.7993 | 0.704 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
Carlork314/Carlos
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-10-03T23:32:40Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3608 - Accuracy: 0.8433 - F1: 0.8433 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.4095 | 1.0 | 875 | 0.3667 | 0.8353 | 0.8351 | | 0.3348 | 2.0 | 1750 | 0.3608 | 0.8433 | 0.8433 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
Carlork314/Xd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: sourBlueBarneyTwo results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.9800000190734863 --- # sourBlueBarneyTwo Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### blue_dream ![blue_dream](images/blue_dream.jpg) #### poodle ![poodle](images/poodle.jpg) #### sour_diesel ![sour_diesel](images/sour_diesel.jpg) #### swan ![swan](images/swan.jpg)
CarlosPR/mt5-spanish-memmories-analysis
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - metrics: - type: mean_reward value: 1218.38 +/- 203.74 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) ## parameters ```python model = A2C(policy = "MlpPolicy", env = env, gae_lambda = 0.9, gamma = 0.99, learning_rate = 0.00096, max_grad_norm = 0.5, n_steps = 8, vf_coef = 0.4, ent_coef = 0.0, tensorboard_log = "./tensorboard", policy_kwargs=dict( log_std_init=-2, ortho_init=False), normalize_advantage=False, use_rms_prop= True, use_sde= True, verbose=1) ... ```
Carolhuehuehuehue/Sla
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: finetuning-review results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-review This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5668 - Accuracy: 0.7853 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5019 | 1.0 | 5017 | 0.5607 | 0.7797 | | 0.4334 | 2.0 | 10034 | 0.5668 | 0.7853 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
Cat/Kitty
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
# Grapheme-based statistical parametric synthesizer for Kinyarwanda A grapheme-based approach was chosen because such approaches give acceptable performance for low-resource languages. For instance, this model was trained on approximately 5 hours of Kinyarwanda audio with the corresponding transcriptions; no further language-specific information was provided. The [Festvox](http://festvox.org/) suite of tools was employed to build the model, and the Flite engine was used to generate a small, portable executable file for this model. Currently, this model can only be run on Linux. ## Model description To build the voice, we needed to map graphemes to their corresponding phonemes. In this work, the UniTran-based approach was used to build the voice. The graphemes are converted to UTF-8 code points, which are then converted to a guessed phonetic transcription in X-SAMPA. After obtaining the phonemes, we use an HMM model from the Clustergen framework on each of them to obtain important features. These features are then used to train a Random Forest (20 decision trees) to predict spectral features. It achieves an `MCD` of `5.03`. ## Limitations and Recommendations The voice produced lacks crispness and in some cases ignores tonal information, which is indispensable in Kinyarwanda. We believe that with a larger corpus of linguistic information the voice would sound more natural. ## Usage Use the following to convert text to a wav file: ``` sh ./flite_du_kin_tts -f kinyarwanda.txt kinyarwanda.wav ``` And to use a terminal prompt, use: ``` sh ./flite_du_kin_tts -t "Muraho Rwanda" kinyarwanda.wav ```
Cathy/reranking_model
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: mit --- ### Filippo Palizzi Artworks on Stable Diffusion via Dreambooth #### model by Capacap This is the Stable Diffusion model fine-tuned on the Filippo Palizzi Artworks concept taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a painting by sks Filippo Palizzi** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) This is a Stable Diffusion concept trained via Dreambooth on a small set of artworks by Italian painter Filippo Palizzi (1818 – 1899). Example prompt: "A cozy cottage by sks Filippo Palizzi". Here are the images used for training this concept: ![image 0](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/2.jpeg) ![image 1](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/4.jpeg) ![image 2](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/1.jpeg) ![image 3](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/8.jpeg) ![image 4](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/3.jpeg) ![image 5](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/0.jpeg) ![image 6](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/7.jpeg) ![image 7](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/5.jpeg) ![image 8](https://huggingface.co/Capacap/filippo-palizzi-artworks/resolve/main/concept_images/6.jpeg)
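A hedged inference sketch for the DreamBooth card above, using the `Capacap/filippo-palizzi-artworks` repo id that appears in the card's image URLs and assuming the full fine-tuned pipeline weights are hosted there.

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id taken from the card's image URLs; assumes the complete pipeline was pushed to it.
pipe = StableDiffusionPipeline.from_pretrained(
    "Capacap/filippo-palizzi-artworks", torch_dtype=torch.float16
).to("cuda")

image = pipe("A cozy cottage by sks Filippo Palizzi").images[0]
image.save("cottage.png")
```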
dccuchile/albert-base-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
Access to model Mirimur/Wav2Vec2_Texas_ASR is restricted and you are not in the authorized list. Visit https://huggingface.co/Mirimur/Wav2Vec2_Texas_ASR to ask for access.
dccuchile/albert-base-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
Access to model LYTinn/finetuning-sentiment-model-3000-samples is restricted and you are not in the authorized list. Visit https://huggingface.co/LYTinn/finetuning-sentiment-model-3000-samples to ask for access.
dccuchile/albert-base-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- datasets: - bigscience/P3 language: en license: apache-2.0 widget: - text: "input: <extra_id_0> The item was packaged in bubble wrap. <extra_id_1>\n - It was fragile.\n - It was small.\n output: It was fragile." --- **Official repository**: [seonghyeonye/Flipped-Learning](https://github.com/seonghyeonye/Flipped-Learning) # Model Description FLIPPED uses a unique meta-learning method to show zero-shot task generalization on classification natural language prompts, outperforming GPT-3 and T0-11B on many tasks at a 4x smaller scale. It is a series of encoder-decoder models trained on numerous classification datasets. We show FLIPPED the inputs and corresponding outputs of each instance in each dataset, and train it to generate a plausible instruction. We add an unlikelihood loss so that the model does **not** generate the instruction when given the same input but a wrong output. To obtain FLIPPED, we fine-tune a T5 model of a given scale on a multitask mixture covering many different classification NLP tasks. # Intended uses You can use the models to perform inference on tasks by specifying your input-output NLP query in an "input: {input}\noutput: {output}" form, and the model will predict the instruction. For example, you can try *"input: <extra_id_0> this is the best cast iron skillet you will ever buy<extra_id_1>\noutput: Positive"* as an input, and the model will hopefully generate *"Title: Review:"*. # How to use A full explanation of our models, along with ablations, can be found in our [paper](https://arxiv.org/abs/2210.02969). We recommend using the [FLIPPED-11B](seonghyeonye/flipped_11B) checkpoint as it leads (on average) to the best performance on a variety of NLP tasks. |Model|Number of parameters| |-|-| |[Flipped_11B](https://huggingface.co/seonghyeonye/flipped_11B)|11 billion| |[Flipped_3B](https://huggingface.co/seonghyeonye/flipped_3B)|3 billion| Here is how to download the model in PyTorch: ```python import torch from transformers import T5Tokenizer, T5ForConditionalGeneration model = T5ForConditionalGeneration.from_pretrained("seonghyeonye/flipped_3B") tokenizer = T5Tokenizer.from_pretrained("seonghyeonye/flipped_3B") ``` If you want to use another checkpoint, please replace the path in `T5Tokenizer` and `T5ForConditionalGeneration`. We also provide a quick [Jupyter Notebook](https://github.com/seonghyeonye/Flipped-Learning/blob/master/flipped_inference.ipynb) where you can run inference with our method. **Note: the model was trained with fp32 activations. As such, we highly discourage running inference with fp16.** # Training procedure FLIPPED models are based on [T5](https://huggingface.co/google/t5-v1_1-xl), a Transformer-based encoder-decoder language model pre-trained with a masked-language-modeling-style objective on [C4](https://huggingface.co/datasets/c4). At a high level, the input text along with the output label is fed to the encoder, and the instruction text is produced by the decoder. The model is fine-tuned to autoregressively generate the target. We also feed the input text along with a wrong output, adding an unlikelihood loss so that the model does not produce the proper instruction in that case. Here are our training details.
Training details: - Fine-tuning steps: 5'000 - Input sequence length: 512 - Target sequence length: 128 - Batch size: 240 - Optimizer: Adafactor - Learning rate: 5e-5 - Dropout: 0.1 - Sampling strategy: proportional to the number of examples in each dataset (we randomly subsampled any dataset with over 500'000 examples so that it has at most 500'000 examples; also, we randomly choose which instruction to generate at each training step, so ideally each instruction appears *num_examples/num_templates* times during training.) # Training data We trained different variants of FLIPPED with different mixtures of datasets. |Model|Training datasets| |--|--| |FLIPPED_11B|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Topic Classification: AG News, DBPedia<br>- Paraphrase Identification: MRPC, PAWS, QQP| |FLIPPED_3B|Same as FLIPPED_11B| We only chose prompt examples that have output labels, which can be found on the dataset page. # Evaluation data We evaluate our models on the following datasets: |Task category|Datasets| |-|-| |Natural language inference|ANLI (R1, R2, R3), CB, RTE| |Coreference resolution|WSC, Winogrande| |Word sense disambiguation|WiC| |Sentence completion|COPA, HellaSwag, Story Cloze| |QA|PIQA, ARC-Challenge, OpenbookQA| We also evaluate FLIPPED on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench): - Code description task - Conceptual combinations - Hindu knowledge json - Known unknowns - Language identification - Logic grid puzzle task - Logical deduction - Common misconceptions - Movie dialog same or different - Novel concepts - Strategyqa - Formal fallacies syllogisms negation - VitaminC - Winowhy multiple choice # Label generalization We evaluate the robustness of the models on the following datasets by changing their output labels. The substitute words can be found in our [paper](https://arxiv.org/abs/2210.02969). |Task category|(Datasets, Template name)| |-|-| |Unseen tasks|(WSC, does the pronoun refer to), (CB, can we infer), (RTE, MNLI crowdsource)| |Seen tasks|(IMDB, Reviewer Enjoyment Yes No), (PAWS, Meaning)| The template names we used can be found in the [promptsource template library](https://github.com/bigscience-workshop/promptsource/tree/main/promptsource/templates). # BibTeX entry and citation info ```bibtex @article{ye2022guess, title={Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners}, author={Ye, Seonghyeon and Kim, Doyoung and Jang, Joel and Shin, Joongbo and Seo, Minjoon}, journal={arXiv preprint arXiv:2210.02969}, year={2022} } ```
dccuchile/albert-large-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: mit --- ### Chungus Poodl Pet on Stable Diffusion This is the `<poodl-chungus-big>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<poodl-chungus-big> 0](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/52.jpeg) ![<poodl-chungus-big> 1](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/55.jpeg) ![<poodl-chungus-big> 2](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/44.jpeg) ![<poodl-chungus-big> 3](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/2.jpeg) ![<poodl-chungus-big> 4](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/27.jpeg) ![<poodl-chungus-big> 5](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/43.jpeg) ![<poodl-chungus-big> 6](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/47.jpeg) ![<poodl-chungus-big> 7](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/4.jpeg) ![<poodl-chungus-big> 8](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/32.jpeg) ![<poodl-chungus-big> 9](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/46.jpeg) ![<poodl-chungus-big> 10](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/11.jpeg) ![<poodl-chungus-big> 11](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/45.jpeg) ![<poodl-chungus-big> 12](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/15.jpeg) ![<poodl-chungus-big> 13](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/16.jpeg) ![<poodl-chungus-big> 14](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/23.jpeg) ![<poodl-chungus-big> 15](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/1.jpeg) ![<poodl-chungus-big> 16](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/9.jpeg) ![<poodl-chungus-big> 17](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/36.jpeg) ![<poodl-chungus-big> 18](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/39.jpeg) ![<poodl-chungus-big> 19](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/40.jpeg) ![<poodl-chungus-big> 20](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/26.jpeg) ![<poodl-chungus-big> 21](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/8.jpeg) ![<poodl-chungus-big> 22](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/38.jpeg) ![<poodl-chungus-big> 
23](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/42.jpeg) ![<poodl-chungus-big> 24](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/22.jpeg) ![<poodl-chungus-big> 25](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/35.jpeg) ![<poodl-chungus-big> 26](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/33.jpeg) ![<poodl-chungus-big> 27](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/49.jpeg) ![<poodl-chungus-big> 28](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/14.jpeg) ![<poodl-chungus-big> 29](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/3.jpeg) ![<poodl-chungus-big> 30](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/51.jpeg) ![<poodl-chungus-big> 31](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/37.jpeg) ![<poodl-chungus-big> 32](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/30.jpeg) ![<poodl-chungus-big> 33](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/0.jpeg) ![<poodl-chungus-big> 34](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/19.jpeg) ![<poodl-chungus-big> 35](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/54.jpeg) ![<poodl-chungus-big> 36](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/24.jpeg) ![<poodl-chungus-big> 37](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/7.jpeg) ![<poodl-chungus-big> 38](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/48.jpeg) ![<poodl-chungus-big> 39](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/21.jpeg) ![<poodl-chungus-big> 40](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/50.jpeg) ![<poodl-chungus-big> 41](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/25.jpeg) ![<poodl-chungus-big> 42](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/13.jpeg) ![<poodl-chungus-big> 43](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/17.jpeg) ![<poodl-chungus-big> 44](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/31.jpeg) ![<poodl-chungus-big> 45](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/18.jpeg) ![<poodl-chungus-big> 46](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/5.jpeg) ![<poodl-chungus-big> 47](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/29.jpeg) ![<poodl-chungus-big> 48](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/34.jpeg) ![<poodl-chungus-big> 49](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/28.jpeg) ![<poodl-chungus-big> 50](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/41.jpeg) ![<poodl-chungus-big> 
51](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/12.jpeg) ![<poodl-chungus-big> 52](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/20.jpeg) ![<poodl-chungus-big> 53](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/10.jpeg) ![<poodl-chungus-big> 54](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/53.jpeg) ![<poodl-chungus-big> 55](https://huggingface.co/sd-concepts-library/chungus-poodl-pet/resolve/main/concept_images/6.jpeg)
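Outside of the linked notebooks, the learned embedding can also be loaded directly with `diffusers`. The sketch below is a minimal example; the choice of base checkpoint (Stable Diffusion v1.4) and the prompt are assumptions, not part of this concept repository:

```python
import torch
from diffusers import StableDiffusionPipeline

# Any Stable Diffusion v1.x base model should work; v1-4 is an assumed choice here.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <poodl-chungus-big> embedding from this concept repository.
pipe.load_textual_inversion("sd-concepts-library/chungus-poodl-pet")

# The placeholder token can now be used in prompts like any other word.
image = pipe("a photo of <poodl-chungus-big> sitting in a garden").images[0]
image.save("poodl.png")
```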
dccuchile/albert-tiny-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- language: - ms tags: - translation metrics: - sacrebleu --- # finetune-translation-t5-super-tiny-standard-bahasa-cased Finetuned T5 super tiny on EN-MS and MS-EN translation tasks. ## Dataset 1. EN-MS dataset, https://huggingface.co/datasets/mesolitica/en-ms 2. MS-EN dataset, https://huggingface.co/datasets/mesolitica/ms-en 3. NLLB eng_Latn-zsm_Latn, https://github.com/huseinzol05/malay-dataset/tree/master/translation/laser ## Finetune details 1. Finetune using single RTX 3090 Ti. Scripts at https://github.com/huseinzol05/malaya/tree/master/session/translation/hf-t5 ## Supported prefix 1. `terjemah Inggeris ke Melayu: {string}`, for EN-MS translation. 2. `terjemah Melayu ke Inggeris: {string}`, for MS-EN translation. ## Evaluation eng_Latn-zsm_Latn, ``` {'name': 'BLEU', 'score': 39.18834189893951, '_mean': -1.0, '_ci': -1.0, '_verbose': '72.6/48.3/33.5/23.6 (BP = 0.960 ratio = 0.961 hyp_len = 21172 ref_len = 22027)', 'bp': 0.9604210226409274, 'counts': [15376, 9741, 6434, 4284], 'totals': [21172, 20175, 19178, 18181], 'sys_len': 21172, 'ref_len': 22027, 'precisions': [72.62422066880787, 48.28252788104089, 33.54885806653457, 23.563060337715196], 'prec_str': '72.6/48.3/33.5/23.6', 'ratio': 0.9611840014527625} chrF2++ = 64.03 ``` zsm_Latn-eng_Latn, ``` {'name': 'BLEU', 'score': 34.10561487832948, '_mean': -1.0, '_ci': -1.0, '_verbose': '67.3/41.6/27.8/18.7 (BP = 0.982 ratio = 0.982 hyp_len = 23139 ref_len = 23570)', 'bp': 0.9815458410942027, 'counts': [15569, 9216, 5871, 3777], 'totals': [23139, 22142, 21145, 20148], 'sys_len': 23139, 'ref_len': 23570, 'precisions': [67.28467090194044, 41.62225634540692, 27.765429179475053, 18.746277546158428], 'prec_str': '67.3/41.6/27.8/18.7', 'ratio': 0.9817140432753501} chrF2++ = 59.18 ```
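A minimal sketch of how the supported prefixes are used at inference time is shown below. The repository id is assumed from the model name above (the `mesolitica` namespace), and the generation settings are illustrative defaults:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repository id based on the model name in this card.
model_name = "mesolitica/finetune-translation-t5-super-tiny-standard-bahasa-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# EN -> MS: prepend the first supported prefix to the source sentence.
text = "terjemah Inggeris ke Melayu: The weather is nice today."
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```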
dccuchile/albert-tiny-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - generated_from_trainer model-index: - name: gpt2-gpt2-mc-weight0-epoch15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-gpt2-mc-weight0-epoch15 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.9633 - Cls loss: 6.8154 - Lm loss: 3.9632 - Cls Accuracy: 0.1337 - Cls F1: 0.0531 - Cls Precision: 0.0331 - Cls Recall: 0.1337 - Perplexity: 52.63 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:| | 4.1973 | 1.0 | 3470 | 4.0341 | 6.8497 | 4.0341 | 0.1331 | 0.0529 | 0.0330 | 0.1331 | 56.49 | | 4.0446 | 2.0 | 6940 | 3.9948 | 6.8450 | 3.9947 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 54.31 | | 3.9714 | 3.0 | 10410 | 3.9795 | 6.8404 | 3.9794 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 53.48 | | 3.9176 | 4.0 | 13880 | 3.9686 | 6.8359 | 3.9686 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 52.91 | | 3.8739 | 5.0 | 17350 | 3.9580 | 6.8317 | 3.9579 | 0.1331 | 0.0529 | 0.0330 | 0.1331 | 52.35 | | 3.8359 | 6.0 | 20820 | 3.9591 | 6.8286 | 3.9590 | 0.1331 | 0.0529 | 0.0330 | 0.1331 | 52.40 | | 3.8035 | 7.0 | 24290 | 3.9585 | 6.8263 | 3.9585 | 0.1331 | 0.0529 | 0.0330 | 0.1331 | 52.38 | | 3.7762 | 8.0 | 27760 | 3.9585 | 6.8240 | 3.9585 | 0.1331 | 0.0529 | 0.0330 | 0.1331 | 52.38 | | 3.7517 | 9.0 | 31230 | 3.9567 | 6.8216 | 3.9567 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 52.28 | | 3.7313 | 10.0 | 34700 | 3.9599 | 6.8193 | 3.9598 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 52.45 | | 3.7131 | 11.0 | 38170 | 3.9606 | 6.8169 | 3.9605 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 52.48 | | 3.6982 | 12.0 | 41640 | 3.9614 | 6.8154 | 3.9614 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 52.53 | | 3.6862 | 13.0 | 45110 | 3.9623 | 6.8154 | 3.9622 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 52.57 | | 3.6767 | 14.0 | 48580 | 3.9621 | 6.8154 | 3.9620 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 52.56 | | 3.6711 | 15.0 | 52050 | 3.9633 | 6.8154 | 3.9632 | 0.1337 | 0.0531 | 0.0331 | 0.1337 | 52.63 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
dccuchile/albert-tiny-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: mit tags: - generated_from_trainer model-index: - name: refinement-finetuned-mnli-kaggle-2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # refinement-finetuned-mnli-kaggle-2 This model is a fine-tuned version of [mfreihaut/refinement-finetuned-mnli-1](https://huggingface.co/mfreihaut/refinement-finetuned-mnli-1) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4099 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.6157 | 1.0 | 12599 | 0.5321 | | 0.5355 | 2.0 | 25198 | 0.6121 | | 0.4084 | 3.0 | 37797 | 0.3938 | | 0.4596 | 4.0 | 50396 | 0.3925 | | 0.4625 | 5.0 | 62995 | 0.3928 | | 0.4668 | 6.0 | 75594 | 0.3892 | | 0.5054 | 7.0 | 88193 | 0.4097 | | 0.4953 | 8.0 | 100792 | 0.4099 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.10.0 - Datasets 2.5.1 - Tokenizers 0.12.1
dccuchile/albert-tiny-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - f1 model-index: - name: distilbert-multilingual-uncased-oct-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-multilingual-uncased-oct-3 This model is a fine-tuned version of [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0532 - F1: 0.9073 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.1205 | 1.0 | 565 | 0.0662 | 0.8449 | | 0.0524 | 2.0 | 1130 | 0.0535 | 0.8921 | | 0.033 | 3.0 | 1695 | 0.0532 | 0.9073 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
dccuchile/albert-tiny-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: wav2vec_korean results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec_korean This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.17.0 - Pytorch 1.10.0+cu113 - Datasets 1.18.3 - Tokenizers 0.13.0
dccuchile/albert-tiny-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- tags: - generated_from_trainer model-index: - name: bert-base-uncased-bert-base-uncased-mc-weight0-epoch15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-bert-base-uncased-mc-weight0-epoch15 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.3651 - Cls loss: 2.9223 - Lm loss: 4.3649 - Cls Accuracy: 0.0248 - Cls F1: 0.0057 - Cls Precision: 0.0061 - Cls Recall: 0.0248 - Perplexity: 78.64 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:| | 4.8711 | 1.0 | 3470 | 4.5156 | 2.9252 | 4.5155 | 0.0213 | 0.0047 | 0.0042 | 0.0213 | 91.42 | | 4.483 | 2.0 | 6940 | 4.4193 | 2.9248 | 4.4191 | 0.0219 | 0.0048 | 0.0042 | 0.0219 | 83.02 | | 4.3345 | 3.0 | 10410 | 4.3684 | 2.9244 | 4.3682 | 0.0219 | 0.0048 | 0.0042 | 0.0219 | 78.91 | | 4.2266 | 4.0 | 13880 | 4.3445 | 2.9241 | 4.3443 | 0.0225 | 0.0049 | 0.0043 | 0.0225 | 77.04 | | 4.1388 | 5.0 | 17350 | 4.3260 | 2.9237 | 4.3258 | 0.0231 | 0.0050 | 0.0044 | 0.0231 | 75.63 | | 4.0644 | 6.0 | 20820 | 4.3299 | 2.9234 | 4.3297 | 0.0231 | 0.0050 | 0.0044 | 0.0231 | 75.92 | | 3.999 | 7.0 | 24290 | 4.3278 | 2.9232 | 4.3276 | 0.0231 | 0.0059 | 0.0061 | 0.0231 | 75.76 | | 3.9426 | 8.0 | 27760 | 4.3269 | 2.9230 | 4.3267 | 0.0231 | 0.0059 | 0.0061 | 0.0231 | 75.70 | | 3.8929 | 9.0 | 31230 | 4.3324 | 2.9228 | 4.3322 | 0.0248 | 0.0061 | 0.0062 | 0.0248 | 76.11 | | 3.8488 | 10.0 | 34700 | 4.3382 | 2.9227 | 4.3380 | 0.0248 | 0.0061 | 0.0064 | 0.0248 | 76.55 | | 3.8116 | 11.0 | 38170 | 4.3461 | 2.9225 | 4.3459 | 0.0242 | 0.0057 | 0.0061 | 0.0242 | 77.16 | | 3.7791 | 12.0 | 41640 | 4.3537 | 2.9224 | 4.3535 | 0.0248 | 0.0057 | 0.0061 | 0.0248 | 77.75 | | 3.7532 | 13.0 | 45110 | 4.3593 | 2.9223 | 4.3591 | 0.0248 | 0.0057 | 0.0061 | 0.0248 | 78.19 | | 3.7321 | 14.0 | 48580 | 4.3588 | 2.9223 | 4.3586 | 0.0248 | 0.0057 | 0.0061 | 0.0248 | 78.15 | | 3.7182 | 15.0 | 52050 | 4.3651 | 2.9223 | 4.3649 | 0.0248 | 0.0057 | 0.0061 | 0.0248 | 78.64 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme config: PAN-X.de split: train args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8648740833380706 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1392 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1616 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1419 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1392 | 0.8649 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
dccuchile/albert-xxlarge-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: mit --- ### Liminal spaces 2.0 on Stable Diffusion This is the `liminal image` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![liminal image 0](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/2.jpeg) ![liminal image 1](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/4.jpeg) ![liminal image 2](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/11.jpeg) ![liminal image 3](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/15.jpeg) ![liminal image 4](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/16.jpeg) ![liminal image 5](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/1.jpeg) ![liminal image 6](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/9.jpeg) ![liminal image 7](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/8.jpeg) ![liminal image 8](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/14.jpeg) ![liminal image 9](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/3.jpeg) ![liminal image 10](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/0.jpeg) ![liminal image 11](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/19.jpeg) ![liminal image 12](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/7.jpeg) ![liminal image 13](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/13.jpeg) ![liminal image 14](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/17.jpeg) ![liminal image 15](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/18.jpeg) ![liminal image 16](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/5.jpeg) ![liminal image 17](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/12.jpeg) ![liminal image 18](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/10.jpeg) ![liminal image 19](https://huggingface.co/sd-concepts-library/liminal-spaces-2-0/resolve/main/concept_images/6.jpeg)
dccuchile/albert-xxlarge-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 240.84 +/- 20.71 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
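The usage section above is left as a TODO; a minimal sketch of what it typically looks like with `huggingface_sb3` follows. The repository id and checkpoint filename are placeholders that must be replaced with the actual values for this upload, and the rollout loop assumes the classic `gym` step API:

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Placeholder repo id / filename: substitute the real ones for this model.
checkpoint = load_from_hub(
    repo_id="<user>/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

# Roll out one episode with the loaded agent.
env = gym.make("LunarLander-v2")
obs = env.reset()
done = False
while not done:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
env.close()
```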
dccuchile/albert-xxlarge-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
68
null
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: banglabert-bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # banglabert-bert-finetuned-ner This model is a fine-tuned version of [csebuetnlp/banglabert](https://huggingface.co/csebuetnlp/banglabert) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9526 - Precision: 0.0143 - Recall: 0.0769 - F1: 0.0241 - Accuracy: 0.0143 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 1 | 2.0085 | 0.0143 | 0.0769 | 0.0241 | 0.0143 | | No log | 2.0 | 2 | 1.9711 | 0.0143 | 0.0769 | 0.0241 | 0.0143 | | No log | 3.0 | 3 | 1.9526 | 0.0143 | 0.0769 | 0.0241 | 0.0143 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
dccuchile/albert-base-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
586
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder model-index: - name: vit-base-patch16-224-finetuned-flower results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # vit-base-patch16-224-finetuned-flower This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the imagefolder dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
dccuchile/albert-large-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
75
null
--- tags: - conversational --- # Kashiwagi Osamu DialoGPT Model
dccuchile/albert-tiny-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
393
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - metrics: - type: mean_reward value: 249.94 +/- 23.25 name: mean_reward task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
dccuchile/albert-xxlarge-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
null
--- tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: ijelid-indobertweet results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ijelid-indobertweet This model is a fine-tuned version of [indolem/indobertweet-base-uncased](https://huggingface.co/indolem/indobertweet-base-uncased) on the Indonesian-Javanese-English code-mixed Twitter dataset. Label ID and its corresponding name: | Label ID | Label Name | |:---------------:|:------------------------------------------: | LABEL_0 | English (EN) | | LABEL_1 | Indonesian (ID) | | LABEL_2 | Javanese (JV) | | LABEL_3 | Mixed Indonesian-English (MIX-ID-EN) | | LABEL_4 | Mixed Indonesian-Javanese (MIX-ID-JV) | | LABEL_5 | Mixed Javanese-English (MIX-JV-EN) | | LABEL_6 | Other (O) | It achieves the following results on the evaluation set: - Loss: 0.2804 - Precision: 0.9323 - Recall: 0.9394 - F1: 0.9356 - Accuracy: 0.9587 It achieves the following results on the test set: - Overall Precision: 0.9326 - Overall Recall: 0.9421 - Overall F1: 0.9371 - Overall Accuracy: 0.9643 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 386 | 0.1666 | 0.8968 | 0.9014 | 0.8982 | 0.9465 | | 0.257 | 2.0 | 772 | 0.1522 | 0.9062 | 0.9368 | 0.9206 | 0.9517 | | 0.1092 | 3.0 | 1158 | 0.1462 | 0.9233 | 0.9335 | 0.9280 | 0.9556 | | 0.0739 | 4.0 | 1544 | 0.1563 | 0.9312 | 0.9361 | 0.9336 | 0.9568 | | 0.0739 | 5.0 | 1930 | 0.1671 | 0.9224 | 0.9413 | 0.9312 | 0.9573 | | 0.0474 | 6.0 | 2316 | 0.1719 | 0.9303 | 0.9394 | 0.9346 | 0.9578 | | 0.0339 | 7.0 | 2702 | 0.1841 | 0.9249 | 0.9389 | 0.9314 | 0.9576 | | 0.0237 | 8.0 | 3088 | 0.2030 | 0.9224 | 0.9380 | 0.9297 | 0.9570 | | 0.0237 | 9.0 | 3474 | 0.2106 | 0.9289 | 0.9377 | 0.9331 | 0.9576 | | 0.0185 | 10.0 | 3860 | 0.2264 | 0.9277 | 0.9389 | 0.9330 | 0.9571 | | 0.0132 | 11.0 | 4246 | 0.2331 | 0.9336 | 0.9344 | 0.9339 | 0.9574 | | 0.0101 | 12.0 | 4632 | 0.2403 | 0.9353 | 0.9375 | 0.9363 | 0.9586 | | 0.0082 | 13.0 | 5018 | 0.2509 | 0.9311 | 0.9373 | 0.9340 | 0.9582 | | 0.0082 | 14.0 | 5404 | 0.2548 | 0.9344 | 0.9351 | 0.9346 | 0.9578 | | 0.0062 | 15.0 | 5790 | 0.2608 | 0.9359 | 0.9372 | 0.9365 | 0.9588 | | 0.005 | 16.0 | 6176 | 0.2667 | 0.9298 | 0.9407 | 0.9350 | 0.9587 | | 0.0045 | 17.0 | 6562 | 0.2741 | 0.9337 | 0.9408 | 0.9371 | 0.9592 | | 0.0045 | 18.0 | 6948 | 0.2793 | 0.9342 | 0.9371 | 0.9355 | 0.9589 | | 0.0035 | 19.0 | 7334 | 0.2806 | 0.9299 | 0.9391 | 0.9342 | 0.9588 | | 0.0034 | 20.0 | 7720 | 0.2804 | 0.9323 | 0.9394 | 0.9356 | 0.9587 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.7.1 - Datasets 2.5.1 - Tokenizers 0.12.1
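For reference, word-level predictions with the labels listed above can be obtained through the standard token-classification pipeline. The repository namespace and the example sentence below are assumptions for illustration only:

```python
from transformers import pipeline

# Replace <namespace> with the actual owner of this model repository.
tagger = pipeline(
    "token-classification",
    model="<namespace>/ijelid-indobertweet",
    aggregation_strategy="simple",
)

# Each span of the code-mixed tweet is tagged with one of the seven labels above
# (EN, ID, JV, MIX-ID-EN, MIX-ID-JV, MIX-JV-EN, O).
print(tagger("aku lagi belajar deep learning, tapi kok angel tenan ya"))
```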
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: marian-finetuned-kde4-en-to-ja_kftt results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kde4-en-to-ja_kftt This model is a fine-tuned version of [Helsinki-NLP/opus-tatoeba-en-ja](https://huggingface.co/Helsinki-NLP/opus-tatoeba-en-ja) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 5.2891 - Bleu: 0.3128 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
dccuchile/bert-base-spanish-wwm-cased-finetuned-pos
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: - en tags: - text-classification - claim-detection license: "mit" datasets: - Nithiwat/claim-detection widget: - text: "This is the best cast iron skillet you will ever buy." - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday." - text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book" ---
dccuchile/bert-base-spanish-wwm-uncased-finetuned-pawsx
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- license: mit tags: - generated_from_trainer model-index: - name: gpt2-poetry-model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-poetry-model This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1 - Datasets 2.5.1 - Tokenizers 0.10.3
dccuchile/distilbert-base-spanish-uncased-finetuned-pawsx
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
29
null
--- license: apache-2.0 tags: - translation - generated_from_trainer metrics: - bleu model-index: - name: marian-finetuned-kftt_kde4-en-to-ja results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # marian-finetuned-kftt_kde4-en-to-ja This model is a fine-tuned version of [Hoax0930/kyoto_marian_mod_2_2_1](https://huggingface.co/Hoax0930/kyoto_marian_mod_2_2_1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 8.3622 - Bleu: 2.6910 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
dccuchile/distilbert-base-spanish-uncased-finetuned-pos
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - en tags: - text-classification - claim-detection license: "mit" datasets: - Nithiwat/claim-detection widget: - text: "This is the best cast iron skillet you will ever buy." - text: "Barack Obama nominated Hilary Clinton as his secretary of state on Monday." - text: "On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book" ---
dccuchile/distilbert-base-spanish-uncased
[ "pytorch", "distilbert", "fill-mask", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
670
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.88 - name: F1 type: f1 value: 0.881578947368421 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3095 - Accuracy: 0.88 - F1: 0.8816 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
CennetOguz/distilbert-base-uncased-finetuned-recipe-1
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - stanza - token-classification library_name: stanza language: bn license: apache-2.0 --- # Stanza model for Bengali (bn) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo Last updated 2023-05-19 03:30:30.527
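As a quick illustration, the pipeline for this language can be used as follows. The processor set is left at the package default, and the example sentence is only illustrative:

```python
import stanza

# Download the Bengali models once, then build a default pipeline for "bn".
stanza.download("bn")
nlp = stanza.Pipeline("bn")

doc = nlp("এটি একটি পরীক্ষামূলক বাক্য।")
for sentence in doc.sentences:
    for word in sentence.words:
        # Attributes such as upos may be None if the corresponding processor
        # is not part of the default bn package.
        print(word.text, word.upos)
```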
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate-1
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - stanza - token-classification library_name: stanza language: ml license: apache-2.0 --- # Stanza model for Malayalam (ml) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo Last updated 2023-05-19 04:10:24.073
CennetOguz/distilbert-base-uncased-finetuned-recipe-accelerate
[ "pytorch", "distilbert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - stanza - token-classification library_name: stanza language: sd license: apache-2.0 --- # Stanza model for Sindhi (sd) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo Last updated 2023-05-19 04:21:22.146
CennetOguz/distilbert-base-uncased-finetuned-recipe
[ "pytorch", "tensorboard", "distilbert", "fill-mask", "transformers", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - stanza - token-classification library_name: stanza language: si license: apache-2.0 --- # Stanza model for Sinhala (si) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo Last updated 2023-05-19 04:21:41.790
Chaddmckay/Cdm
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: mit tags: - generated_from_trainer metrics: - bleu model-index: - name: mBART_slang_to_standard_4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mBART_slang_to_standard_4 This model is a fine-tuned version of [facebook/mbart-large-50](https://huggingface.co/facebook/mbart-large-50) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.9058 - Bleu: 60.5005 - Gen Len: 47.7251 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:| | No log | 1.0 | 106 | 2.6704 | 60.4144 | 51.1659 | | No log | 2.0 | 212 | 2.0665 | 60.2528 | 47.1848 | | No log | 3.0 | 318 | 1.9058 | 60.5005 | 47.7251 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
Chaewon/mnmt_decoder_en_gpt2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - stable-diffusion - text-to-image license: bigscience-bloom-rail-1.0 inference: true --- # stable-diffusion-wikiart
Chaima/TunBerto
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - stable-diffusion - text-to-image license: bigscience-bloom-rail-1.0 inference: true --- # stable-diffusion-wikiart sd-wikiart-v2 is a stable diffusion model that has been fine-tuned on the [wikiart dataset](https://huggingface.co/datasets/fusing/wikiart_captions) to generate artistic images in different style and genres. <img src="https://huggingface.co/valhalla/sd-wikiart-v2/resolve/main/wikiart.png"> # Gradio [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1i7HJlTzVPEirNedDV-TcR5Ok2_8QI6zC?usp=sharing) ## Model Description The model originally used for fine-tuning is [Stable Diffusion V1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), which is a latent image diffusion model trained on [LAION2B-en](https://huggingface.co/datasets/laion/laion2B-en). The current model has been fine-tuned with a learning rate of 1e-05 for 1 epoch on 81K text-image pairs from wikiart dataset. Only the attention layers of the model are fine-tuned. This is done to avoid catastrophic forgetting, the model can generate artistic images given specific prompts but still retains most of its previous knowledge. ## Training Data TODO ## License This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies: 1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content 2. The authors claims no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license 3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) [Please read the full license here](https://huggingface.co/spaces/CompVis/stable-diffusion-license) ## Downstream Uses This model can be used for entertainment purposes and as a generative art assistant. ## Example Code ```python import torch from diffusers import StableDiffusionPipeline model_id = "valhalla/sd-wikiart-v2" device = "cuda" pipe = StableDiffusionPipeline.from_pretrained( model_id, torch_dtype=torch.float16, ) pipe = pipe.to(device) prompt = "a painting of eiffel tower in the style of surrealism" with torch.autocast("cuda"): image = pipe(prompt, guidance_scale=7.5).images[0] image.save("eiffel_impressionism.png") ```
chainyo/speaker-recognition-meetup
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- license: mit --- ### crb-surrealz on Stable Diffusion This is the `<crbsurreal>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<crbsurreal> 0](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/2.jpeg) ![<crbsurreal> 1](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/4.jpeg) ![<crbsurreal> 2](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/11.jpeg) ![<crbsurreal> 3](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/1.jpeg) ![<crbsurreal> 4](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/9.jpeg) ![<crbsurreal> 5](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/8.jpeg) ![<crbsurreal> 6](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/3.jpeg) ![<crbsurreal> 7](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/0.jpeg) ![<crbsurreal> 8](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/7.jpeg) ![<crbsurreal> 9](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/5.jpeg) ![<crbsurreal> 10](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/10.jpeg) ![<crbsurreal> 11](https://huggingface.co/sd-concepts-library/crb-surrealz/resolve/main/concept_images/6.jpeg)
ChaitanyaU/FineTuneLM
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-10-04T09:02:01Z
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer datasets: - xfun model-index: - name: layoutxlm-finetuned-xfund-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutxlm-finetuned-xfund-fr This model is a fine-tuned version of [microsoft/layoutxlm-base](https://huggingface.co/microsoft/layoutxlm-base) on the xfun dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 1000 ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.10.0+cu111 - Datasets 2.5.2 - Tokenizers 0.12.1
Chakita/KNUBert
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: dataset_radiology_20220912.tsv results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dataset_radiology_20220912.tsv This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0+cu102 - Datasets 2.4.0 - Tokenizers 0.12.1
Chakita/Kalbert
[ "pytorch", "tensorboard", "albert", "fill-mask", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-10-04T09:22:12Z
--- language: en --- <p align="center"> <img src="https://doctr-static.mindee.com/models?id=v0.3.1/Logo_doctr.gif&src=0" width="60%"> </p> **Optical Character Recognition made seamless & accessible to anyone, powered by TensorFlow 2 & PyTorch** ## Task: recognition https://github.com/mindee/doctr ### Example usage: ```python >>> from doctr.io import DocumentFile >>> from doctr.models import ocr_predictor, from_hub >>> img = DocumentFile.from_images(['<image_path>']) >>> # Load your model from the hub >>> model = from_hub('mindee/my-model') >>> # Pass it to the predictor >>> # If your model is a recognition model: >>> predictor = ocr_predictor(det_arch='db_mobilenet_v3_large', >>> reco_arch=model, >>> pretrained=True) >>> # If your model is a detection model: >>> predictor = ocr_predictor(det_arch=model, >>> reco_arch='crnn_mobilenet_v3_small', >>> pretrained=True) >>> # Get your predictions >>> res = predictor(img) ``` ### Run Configuration { "arch": "crnn_vgg16_bn", "train_path": "/content/drive/Shareddrives/DataScience/DISA/datasets/IAM_Dataset/IAM/data", "val_path": "/content/drive/MyDrive/OCR_Finetuning/test", "train_samples": 1000, "val_samples": 20, "font": "FreeMono.ttf,FreeSans.ttf,FreeSerif.ttf", "min_chars": 1, "max_chars": 12, "name": null, "epochs": 10, "batch_size": 64, "input_size": 32, "lr": 0.001, "workers": 2, "resume": null, "vocab": "french", "test_only": false, "show_samples": false, "wb": false, "push_to_hub": false, "pretrained": false, "amp": false, "find_lr": false }
Chakita/KannadaBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.de metrics: - name: F1 type: f1 value: 0.8648740833380706 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1365 - F1: 0.8649 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2553 | 1.0 | 525 | 0.1575 | 0.8279 | | 0.1284 | 2.0 | 1050 | 0.1386 | 0.8463 | | 0.0813 | 3.0 | 1575 | 0.1365 | 0.8649 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.12.1+cu113 - Datasets 1.16.1 - Tokenizers 0.10.3
Chakita/gpt2_mwp
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
Access to model maxchoi/aitest is restricted and you are not in the authorized list. Visit https://huggingface.co/maxchoi/aitest to ask for access.
Chalponkey/DialoGPT-small-Barry
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-10-04T09:36:21Z
--- tags: - generated_from_trainer model-index: - name: bert-base-uncased-bert-base-uncased-mc-weight0.25-epoch15 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-bert-base-uncased-mc-weight0.25-epoch15 This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.1343 - Cls loss: 3.0991 - Lm loss: 4.3588 - Cls Accuracy: 0.6092 - Cls F1: 0.6066 - Cls Precision: 0.6082 - Cls Recall: 0.6092 - Perplexity: 78.17 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 15 ### Training results | Training Loss | Epoch | Step | Validation Loss | Cls loss | Lm loss | Cls Accuracy | Cls F1 | Cls Precision | Cls Recall | Perplexity | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:-------:|:------------:|:------:|:-------------:|:----------:|:----------:| | 5.3372 | 1.0 | 3470 | 4.9249 | 1.5682 | 4.5325 | 0.5712 | 0.5567 | 0.5751 | 0.5712 | 92.99 | | 4.8287 | 2.0 | 6940 | 4.7830 | 1.3889 | 4.4355 | 0.6231 | 0.6169 | 0.6448 | 0.6231 | 84.39 | | 4.6295 | 3.0 | 10410 | 4.7585 | 1.4752 | 4.3894 | 0.6248 | 0.6160 | 0.6340 | 0.6248 | 80.59 | | 4.4704 | 4.0 | 13880 | 4.7707 | 1.6098 | 4.3678 | 0.6121 | 0.6079 | 0.6156 | 0.6121 | 78.87 | | 4.3364 | 5.0 | 17350 | 4.8008 | 1.8102 | 4.3478 | 0.6086 | 0.6068 | 0.6105 | 0.6086 | 77.31 | | 4.2245 | 6.0 | 20820 | 4.8353 | 1.9486 | 4.3477 | 0.6121 | 0.6075 | 0.6131 | 0.6121 | 77.30 | | 4.1289 | 7.0 | 24290 | 4.8883 | 2.1912 | 4.3400 | 0.6110 | 0.6076 | 0.6182 | 0.6110 | 76.71 | | 4.0485 | 8.0 | 27760 | 4.9394 | 2.4203 | 4.3337 | 0.5914 | 0.5862 | 0.6016 | 0.5914 | 76.23 | | 3.9826 | 9.0 | 31230 | 5.0026 | 2.6664 | 4.3354 | 0.6006 | 0.5936 | 0.6035 | 0.6006 | 76.35 | | 3.9277 | 10.0 | 34700 | 4.9902 | 2.5992 | 4.3398 | 0.6035 | 0.6032 | 0.6088 | 0.6035 | 76.69 | | 3.8794 | 11.0 | 38170 | 5.0698 | 2.9006 | 4.3441 | 0.6156 | 0.6127 | 0.6213 | 0.6156 | 77.02 | | 3.8428 | 12.0 | 41640 | 5.0956 | 2.9795 | 4.3501 | 0.6127 | 0.6110 | 0.6184 | 0.6127 | 77.49 | | 3.8129 | 13.0 | 45110 | 5.1223 | 3.0646 | 4.3555 | 0.6138 | 0.6099 | 0.6172 | 0.6138 | 77.91 | | 3.7891 | 14.0 | 48580 | 5.1242 | 3.0809 | 4.3534 | 0.6058 | 0.6045 | 0.6071 | 0.6058 | 77.74 | | 3.7744 | 15.0 | 52050 | 5.1343 | 3.0991 | 4.3588 | 0.6092 | 0.6066 | 0.6082 | 0.6092 | 78.17 | ### Framework versions - Transformers 4.21.2 - Pytorch 1.12.1 - Datasets 2.4.0 - Tokenizers 0.12.1
Champion/test_upload_vox2_wavlm_epoch8
[ "sidekit", "audio" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - vision - image-classification datasets: - mouss/autotrain-data-damages widget: - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg example_title: Tiger - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg example_title: Teapot - src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg example_title: Palace co2_eq_emissions: emissions: 0.007316433431312107 --- # Model Trained Using AutoTrain - Problem type: Binary Classification - Model ID: 1652858619 - CO2 Emissions (in grams): 0.0073 ## Validation Metrics - Loss: 0.082 - Accuracy: 0.989 - Precision: 1.000 - Recall: 0.978 - AUC: 0.995 - F1: 0.989
Cheapestmedsshop/Buymodafinilus
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-nc-4.0 pipeline_tag: fill-mask tags: - legal language: - da datasets: - multi_eurlex - DDSC/partial-danish-gigaword-no-twitter model-index: - name: coastalcph/danish-legal-bert-base results: [] --- # Danish LegalBERT (derivative of Maltehb/danish-bert-botxo) This model is a derivative of [Maltehb/danish-bert-botxo](https://huggingface.co/Maltehb/danish-bert-botxo) adapted to legal text. It has been pre-trained on a combination of the Danish part of the MultiEURLEX (Chalkidis et al., 2021) dataset comprising EU legislation and two subsets (`retsinformationdk`, `retspraksis`) of the Danish Gigaword Corpus (Derczynski et al., 2021) comprising legal proceedings. It achieves the following results on the evaluation set: - Loss: - ## Model description This is a BERT model (Devlin et al., 2018) pre-trained on Danish legal corpora. It follows a base configuration with 12 Transformer layers, each one with 768 hidden units and 12 attention heads. ## Intended uses & limitations More information needed ## Training and evaluation data This model was pre-trained on a combination of the Danish part of the MultiEURLEX dataset and two subsets (`retsinformationdk`, `retspraksis`) of the Danish Gigaword Corpus. ## Training procedure The model was initially pre-trained for 500k steps with sequences up to 128 tokens, and then continued pre-training for an additional 100k steps with sequences up to 512 tokens. ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.00001 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - distributed_type: tpu - num_devices: 8 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.05 - training_steps: 100000 ### Training results | Training Loss | Length | Step | Validation Loss | |:-------------:|:------:|:-------:|:---------------:| | 1.0030 | 128 | 50000 | - | | 0.9593 | 128 | 100000 | - |
Cheatham/xlm-roberta-base-finetuned
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
---
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
language: en
datasets:
- lmqg/qg_squad
pipeline_tag: text2text-generation
tags:
- question generation
- answer extraction
widget:
- text: "generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
  example_title: "Question Generation Example 1"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
  example_title: "Question Generation Example 2"
- text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl> ."
  example_title: "Question Generation Example 3"
- text: "extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress."
  example_title: "Answer Extraction Example 1"
- text: "extract answers: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress. <hl>"
  example_title: "Answer Extraction Example 2"
model-index:
- name: lmqg/t5-large-squad-qg-ae
  results:
  - task:
      name: Text2text Generation
      type: text2text-generation
    dataset:
      name: lmqg/qg_squad
      type: default
      args: default
    metrics:
    - name: BLEU4 (Question Generation)
      type: bleu4_question_generation
      value: 27.2
    - name: ROUGE-L (Question Generation)
      type: rouge_l_question_generation
      value: 54.23
    - name: METEOR (Question Generation)
      type: meteor_question_generation
      value: 27.81
    - name: BERTScore (Question Generation)
      type: bertscore_question_generation
      value: 90.69
    - name: MoverScore (Question Generation)
      type: moverscore_question_generation
      value: 65.29
    - name: QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer
      value: 92.87
    - name: QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer
      value: 93.04
    - name: QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer
      value: 92.72
    - name: QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer
      value: 64.67
    - name: QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer
      value: 64.63
    - name: QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer))
      type: qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer
      value: 64.82
    - name: BLEU4 (Answer Extraction)
      type: bleu4_answer_extraction
      value: 49.73
    - name: ROUGE-L (Answer Extraction)
      type: rouge_l_answer_extraction
      value: 69.82
    - name: METEOR (Answer Extraction)
      type: meteor_answer_extraction
      value: 44.46
    - name: BERTScore (Answer Extraction)
      type: bertscore_answer_extraction
      value: 91.63
    - name: MoverScore (Answer Extraction)
      type: moverscore_answer_extraction
      value: 82.48
    - name: AnswerF1Score (Answer Extraction)
      type: answer_f1_score__answer_extraction
      value: 70.3
    - name: AnswerExactMatch (Answer Extraction)
      type: answer_exact_match_answer_extraction
      value: 59.26
---

# Model Card of `lmqg/t5-large-squad-qg-ae`
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) for question generation and answer extraction jointly on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).

### Overview
- **Language model:** [t5-large](https://huggingface.co/t5-large)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)

### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG

# initialize model
model = TransformersQG(language="en", model="lmqg/t5-large-squad-qg-ae")

# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```

- With `transformers`
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-large-squad-qg-ae")

# question generation (input uses the "generate question:" prefix)
question = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")

# answer extraction (input uses the "extract answers:" prefix)
answer = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.")
```

## Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qg-ae/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)

| | Score | Type | Dataset |
|:---|---:|:---|:---|
| BERTScore | 90.69 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 59.93 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 43.98 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 34.19 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 27.2 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 27.81 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 65.29 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 54.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |

- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.json)

| | Score | Type | Dataset |
|:---|---:|:---|:---|
| QAAlignedF1Score (BERTScore) | 92.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedF1Score (MoverScore) | 64.67 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (BERTScore) | 92.72 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedPrecision (MoverScore) | 64.82 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (BERTScore) | 93.04 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| QAAlignedRecall (MoverScore) | 64.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |

- ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/t5-large-squad-qg-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json)

| | Score | Type | Dataset |
|:---|---:|:---|:---|
| AnswerExactMatch | 59.26 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| AnswerF1Score | 70.3 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| BERTScore | 91.63 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 60.87 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 56.96 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 53.12 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 49.73 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 44.46 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 82.48 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 69.82 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |

## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer', 'paragraph_sentence']
- output_types: ['question', 'answer']
- prefix_types: ['qg', 'ae']
- model: t5-large
- max_length: 512
- max_length_output: 32
- epoch: 3
- batch: 16
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 4
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/t5-large-squad-qg-ae/raw/main/trainer_config.json).

## Citation
```
@inproceedings{ushio-etal-2022-generative,
    title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
    author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, U.A.E.",
    publisher = "Association for Computational Linguistics",
}
```
Cheatham/xlm-roberta-large-finetuned-d1
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
2022-10-04T10:12:59Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imdb metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples-imdb results: - task: name: Text Classification type: text-classification dataset: name: imdb type: imdb config: plain_text split: train args: plain_text metrics: - name: Accuracy type: accuracy value: 0.8733333333333333 - name: F1 type: f1 value: 0.8741721854304636 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3054 - Accuracy: 0.8733 - F1: 0.8742 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
Cheatham/xlm-roberta-large-finetuned-d1r01
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
2022-10-04T10:47:00Z
--- tags: - generated_from_trainer datasets: - tweet_eval metrics: - accuracy - f1 model-index: - name: finetuning-cardiffnlp-twitter-roberta-base-sentiment results: - task: name: Text Classification type: text-classification dataset: name: tweet_eval type: tweet_eval config: sentiment split: train args: sentiment metrics: - name: Accuracy type: accuracy value: 0.7433333333333333 - name: F1 type: f1 value: 0.7418048347838402 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-cardiffnlp-twitter-roberta-base-sentiment This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment) on the tweet_eval dataset. It achieves the following results on the evaluation set: - Loss: 2.0244 - Accuracy: 0.7433 - F1: 0.7418 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.1 - Tokenizers 0.12.1
Cheatham/xlm-roberta-large-finetuned-r01
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
23
null
--- license: bigscience-bloom-rail-1.0 language: - en tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image --- このモデルは、アイドルマスター シャイニーカラーズに登場するアイドル、芹沢あさひのイラストを生成するのに特化したStable-DiffusionのDiffuser用のモデルです。 This model is for Diffuser, a Stable-Diffusion specialized for generating illustrations of Asahi Serizawa, an idol from THE iDOLM@STER SHINY COLORS. DreamBoothを利用して、WaifuDiffusionを追加学習し作成されました。 It was created using DreamBooth with additional learning of WaifuDiffusion. 生成した画像が芹沢あさひに類似していた場合、その著作権はBandai Namco Entertainment Inc.に所属する可能性があります。 If the generated image resembles Asahi Serizawa, the copyright may belong to Bandai Namco Entertainment Inc. その他の利用上の注意点は bigscience-bloom-rail-1.0のライセンスを御覧ください。 For other usage notes, please refer to the license of bigscience-bloom-rail-1.0. https://hf.space/static/bigscience/license/index.html
ChukSamuels/DialoGPT-small-Dr.FauciBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
13
null
--- tags: - autotrain - token-classification language: - en widget: - text: "I love AutoTrain 🤗" datasets: - Akshata/autotrain-data-person-name-validity1 co2_eq_emissions: emissions: 0.015012024821802214 --- # Model Trained Using AutoTrain - Problem type: Entity Extraction - Model ID: 1655358687 - CO2 Emissions (in grams): 0.0150 ## Validation Metrics - Loss: 0.038 - Accuracy: 0.991 - Precision: 0.000 - Recall: 0.000 - F1: 0.000 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Akshata/autotrain-person-name-validity1-1655358687 ``` Or Python API: ``` from transformers import AutoModelForTokenClassification, AutoTokenizer model = AutoModelForTokenClassification.from_pretrained("Akshata/autotrain-person-name-validity1-1655358687", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("Akshata/autotrain-person-name-validity1-1655358687", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Chun/DialoGPT-small-dailydialog
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: - en tags: - esc datasets: - earnings22 --- To reproduce this run, first call `get_ctc_tokenizer.py` to train the CTC tokenizer and then execute the following command to train the CTC system: ```python #!/usr/bin/env bash python run_flax_speech_recognition_ctc.py \ --model_name_or_path="esc-benchmark/wav2vec2-ctc-pretrained" \ --tokenizer_name="wav2vec2-ctc-earnings22-tokenizer" \ --dataset_name="esc-benchmark/esc-datasets" \ --dataset_config_name="earnings22" \ --output_dir="./" \ --wandb_project="wav2vec2-ctc" \ --wandb_name="wav2vec2-ctc-earnings22" \ --max_steps="50000" \ --save_steps="10000" \ --eval_steps="10000" \ --learning_rate="3e-4" \ --logging_steps="25" \ --warmup_steps="5000" \ --preprocessing_num_workers="1" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --do_train \ --do_eval \ --do_predict \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --push_to_hub \ --use_auth_token ```
Cilan/dalle-knockoff
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - esc datasets: - voxpopuli --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esc-benchmark/esc-datasets" \ --model_name_or_path="esc-benchmark/wav2vec2-aed-pretrained" \ --dataset_config_name="voxpopuli" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-voxpopuli" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="1" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="10001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="225" \ --final_generation_num_beams="5" \ --generation_length_penalty="0.8" \ --hidden_dropout="0.2" \ --activation_dropout="0.2" \ --feat_proj_dropout="0.2" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
Cinnamon/electra-small-japanese-generator
[ "pytorch", "electra", "fill-mask", "ja", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "ElectraForMaskedLM" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19
null
--- language: - en tags: - esc datasets: - spgispeech --- To reproduce this run, execute: ```python #!/usr/bin/env bash python run_flax_speech_recognition_seq2seq.py \ --dataset_name="esc-benchmark/esc-datasets" \ --model_name_or_path="esc-benchmark/wav2vec2-aed-pretrained" \ --dataset_config_name="spgispeech" \ --output_dir="./" \ --wandb_name="wav2vec2-aed-spgispeech" \ --wandb_project="wav2vec2-aed" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="2" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --logging_steps="25" \ --max_steps="50001" \ --eval_steps="10000" \ --save_steps="10000" \ --generation_max_length="40" \ --generation_num_beams="1" \ --final_generation_max_length="225" \ --final_generation_num_beams="14" \ --generation_length_penalty="1.6" \ --overwrite_output_dir \ --gradient_checkpointing \ --freeze_feature_encoder \ --predict_with_generate \ --do_eval \ --do_train \ --do_predict \ --push_to_hub \ --use_auth_token ```
ClydeWasTaken/DialoGPT-small-joshua
[ "conversational" ]
conversational
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - esc datasets: - spgispeech --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esc-benchmark/esc-datasets" \ --dataset_config_name="spgispeech" \ --max_steps="5000" \ --output_dir="./" \ --run_name="whisper-spgispeech" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="1000" \ --save_strategy="steps" \ --save_steps="1000" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
CoShin/XLM-roberta-large_ko_en_nil_sts
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - vision - image-classification datasets: - imagenet-1k ---
CoachCarter/distilbert-base-uncased-finetuned-squad
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - esc datasets: - earnings22 --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esc-benchmark/esc-datasets" \ --dataset_config_name="earnings22" \ --max_steps="2500" \ --output_dir="./" \ --run_name="whisper-earnings22" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="500" \ --save_strategy="steps" \ --save_steps="500" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
CodeDanCode/SP-KyleBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- language: - en tags: - esc datasets: - ami --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esc-benchmark/esc-datasets" \ --dataset_config_name="ami" \ --max_steps="2500" \ --output_dir="./" \ --run_name="whisper-ami" \ --dropout_rate="0.1" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="500" \ --save_strategy="steps" \ --save_steps="500" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
CodeNinja1126/bert-p-encoder
[ "pytorch" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- language: - en tags: - esc datasets: - switchboard --- To reproduce this run, execute: ```bash #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esc-benchmark/esc-datasets" \ --dataset_config_name="switchboard" \ --max_steps="5000" \ --output_dir="./" \ --run_name="whisper-switchboard" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="1000" \ --save_strategy="steps" \ --save_steps="1000" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
CodeNinja1126/test-model
[ "pytorch", "jax", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
24
null
--- language: - en tags: - esc datasets: - chime4 --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \ --model_name_or_path="medium.en" \ --dataset_name="esc-benchmark/esc-datasets" \ --dataset_config_name="chime4" \ --max_steps="2500" \ --output_dir="./" \ --run_name="whisper-chime4" \ --dropout_rate="0.1" \ --wandb_project="whisper" \ --per_device_train_batch_size="64" \ --per_device_eval_batch_size="16" \ --logging_steps="25" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --report_to="wandb" \ --preprocessing_num_workers="16" \ --evaluation_strategy="steps" \ --eval_steps="500" \ --save_strategy="steps" \ --save_steps="500" \ --generation_max_length="224" \ --length_column_name="input_lengths" \ --gradient_checkpointing \ --group_by_length \ --freeze_encoder \ --fp16 \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --predict_with_generate \ --use_auth_token ```
CoderEFE/DialoGPT-marxbot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational", "has_space" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-10-04T14:29:35Z
--- language: - en tags: - esc datasets: - librispeech --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="librispeech" \ --output_dir="./" \ --run_name="conformer-rnnt-librispeech" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
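For inference, a rough sketch with NeMo is shown below; the exact `transcribe` signature varies between NeMo releases, and the `.nemo` checkpoint path for this fine-tuned run is an assumption:

```python
# Hedged sketch: load either the baseline NGC checkpoint named above or the
# exported .nemo file from this run (path assumed), then transcribe a wav file.
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecRNNTBPEModel.from_pretrained("stt_en_conformer_transducer_xlarge")
# asr_model = nemo_asr.models.EncDecRNNTBPEModel.restore_from("conformer-rnnt-librispeech.nemo")

print(asr_model.transcribe(["sample.wav"]))
```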
CoderEFE/DialoGPT-medium-marx
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: - en tags: - esc datasets: - common_voice --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="common_voice" \ --output_dir="./" \ --run_name="conformer-rnnt-common-voice" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --max_eval_duration_in_seconds="20" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
CoffeeAddict93/gpt1-call-of-the-wild
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: - en tags: - esc datasets: - tedlium --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="tedlium" \ --output_dir="./" \ --run_name="rnnt-tedlium-baseline" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
CoffeeAddict93/gpt1-modest-proposal
[ "pytorch", "openai-gpt", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "OpenAIGPTLMHeadModel" ], "model_type": "openai-gpt", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2022-10-04T14:36:03Z
--- language: - en tags: - esc datasets: - voxpopuli --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="voxpopuli" \ --output_dir="./" \ --run_name="conformer-rnnt-voxpopuli" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
CoffeeAddict93/gpt2-call-of-the-wild
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- language: - en tags: - esc datasets: - gigaspeech --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --num_train_epochs="0.88" \ --dataset_config_name="gigaspeech" \ --output_dir="./" \ --run_name="conformer-rnnt-gigaspeech" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
CoffeeAddict93/gpt2-medium-call-of-the-wild
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- language: - en tags: - esc datasets: - spgispeech --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="spgispeech" \ --output_dir="./" \ --run_name="conformer-rnnt-spgispeech" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
CoffeeAddict93/gpt2-medium-modest-proposal
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- language: - en tags: - esc datasets: - earnings22 --- To reproduce this run, execute: ```bash #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="earnings22" \ --output_dir="./" \ --run_name="conformer-rnnt-earnings22" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
CoffeeAddict93/gpt2-modest-proposal
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- language: - en tags: - esc datasets: - ami --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="ami" \ --output_dir="./" \ --run_name="conformer-rnnt-ami" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
CogComp/bart-faithful-summary-detector
[ "pytorch", "jax", "bart", "text-classification", "en", "dataset:xsum", "transformers", "xsum", "license:cc-by-sa-4.0" ]
text-classification
{ "architectures": [ "BartForSequenceClassification" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": 1, "max_length": 128, "min_length": 12, "no_repeat_ngram_size": null, "num_beams": 4, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
234
null
--- license: mit --- ### jfj on Stable Diffusion via Dreambooth #### model by Seonauta This is the Stable Diffusion model fine-tuned on the jfj concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **a photo of sks jfj** You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Here are the images used for training this concept: ![image 0](https://huggingface.co/Seonauta/jfj/resolve/main/concept_images/2.jpeg) ![image 1](https://huggingface.co/Seonauta/jfj/resolve/main/concept_images/4.jpeg) ![image 2](https://huggingface.co/Seonauta/jfj/resolve/main/concept_images/1.jpeg) ![image 3](https://huggingface.co/Seonauta/jfj/resolve/main/concept_images/3.jpeg) ![image 4](https://huggingface.co/Seonauta/jfj/resolve/main/concept_images/0.jpeg) ![image 5](https://huggingface.co/Seonauta/jfj/resolve/main/concept_images/5.jpeg)
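A minimal inference sketch with `diffusers`, assuming the concept weights are the ones hosted at `Seonauta/jfj` (the repo referenced by the image links above) and that a CUDA device is available:

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the Dreambooth fine-tuned weights and prompt with the instance token.
pipe = StableDiffusionPipeline.from_pretrained("Seonauta/jfj", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks jfj", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("jfj.png")
```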
CogComp/roberta-temporal-predictor
[ "pytorch", "roberta", "fill-mask", "arxiv:2202.00436", "transformers", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- language: - en tags: - esc datasets: - switchboard --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --dataset_config_name="switchboard" \ --output_dir="./" \ --run_name="conformer-rnnt-switchboard" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
CohleM/bert-nepali-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- language: - en tags: - esc datasets: - chime4 --- To reproduce this run, execute: ```python #!/usr/bin/env bash CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_rnnt.py \ --config_path="conf/conformer_transducer_bpe_xlarge.yaml" \ --model_name_or_path="stt_en_conformer_transducer_xlarge" \ --dataset_name="esc-benchmark/esc-datasets" \ --dataset_config_name="chime4" \ --tokenizer_path="tokenizer" \ --vocab_size="1024" \ --max_steps="100000" \ --output_dir="./" \ --run_name="conformer-rnnt-chime4" \ --wandb_project="rnnt" \ --per_device_train_batch_size="8" \ --per_device_eval_batch_size="4" \ --logging_steps="50" \ --learning_rate="1e-4" \ --warmup_steps="500" \ --save_strategy="steps" \ --save_steps="20000" \ --evaluation_strategy="steps" \ --eval_steps="20000" \ --report_to="wandb" \ --preprocessing_num_workers="4" \ --fused_batch_size="4" \ --length_column_name="input_lengths" \ --fuse_loss_wer \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --use_auth_token ```
CohleM/mbert-nepali-tokenizer
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - question-answering - generated_from_trainer model-index: - name: roberta-base-squad2-nq-bioasq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta-base-squad2-nq-bioasq ## Model description This model is a fine-tuned version of [nlpconnect/roberta-base-squad2-nq](https://huggingface.co/nlpconnect/roberta-base-squad2-nq) on the BioASQ 10b dataset. ## Intended uses & limitations Cross-domain question answering! ## Training and evaluation data Training: BioASQ 10B with SQUAD sampled evenly to match the same samples as BioASQ 10B Eval: BioASQ 9B Eval with SQUAD Eval sampled evenly to match the same samples as BioASQ 9B Eval ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results Went from untrained exact match: 60.9% (f1 71.8%) to exact match: 95.2% (96.6% f1) on BioASQ 9B held out training set. Scores on SQUAD+BioASQ remained stable at exact match: 72.5% (f1 81.4%) to 88.5% (f1 93.3%). ### Framework versions - Transformers 4.18.0 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
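A usage sketch for the cross-domain QA setting described above; the checkpoint id below is an assumption based on the model name, so substitute the actual repo path:

```python
from transformers import pipeline

# Extractive QA over a biomedical passage (repo id assumed from the card title).
qa = pipeline("question-answering", model="nlpconnect/roberta-base-squad2-nq-bioasq")
result = qa(
    question="Which gene is mutated in cystic fibrosis?",
    context="Cystic fibrosis is caused by mutations in the CFTR gene, which encodes a chloride channel.",
)
print(result["answer"], round(result["score"], 3))
```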
ComCom/gpt2-large
[ "pytorch", "gpt2", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "GPT2Model" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2022-10-04T15:00:53Z
--- tags: - image-classification - pytorch - huggingpics metrics: - accuracy model-index: - name: south-indian-foods results: - task: name: Image Classification type: image-classification metrics: - name: Accuracy type: accuracy value: 0.6666666865348816 --- # south-indian-foods Autogenerated by HuggingPics🤗🖼️ Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb). Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics). ## Example Images #### Idli ![Idli](images/Idli.jpg) #### chutney ![chutney](images/chutney.jpg) #### dosa ![dosa](images/dosa.jpg) #### sambar ![sambar](images/sambar.jpg) #### vada ![vada](images/vada.jpg)
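A short classification sketch; the repo id is a placeholder since the card does not state where the model was pushed:

```python
from transformers import pipeline

# Image-classification pipeline over the HuggingPics model (repo id is a placeholder).
classifier = pipeline("image-classification", model="<username>/south-indian-foods")
for pred in classifier("images/dosa.jpg"):  # accepts a local path or URL
    print(f"{pred['label']}: {pred['score']:.3f}")
```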
Connor/DialoGPT-small-rick
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - generated_from_trainer model-index: - name: EleutherAI_gpt-neo-125M-stablediffionprompts results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # EleutherAI_gpt-neo-125M-stablediffionprompts This model was trained from scratch on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-07 - train_batch_size: 1024 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 44000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.20.1 - Pytorch 1.11.0 - Datasets 2.1.0 - Tokenizers 0.12.1
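The listed hyperparameters map onto `transformers` `TrainingArguments` roughly as sketched below; the output directory and the rest of the Trainer wiring are assumptions, only the reported values are taken from the card:

```python
from transformers import TrainingArguments

# Mirrors the reported hyperparameters; model, dataset, and Trainer setup omitted.
args = TrainingArguments(
    output_dir="./",                   # assumption
    learning_rate=1e-7,
    per_device_train_batch_size=1024,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    max_steps=44000,
    fp16=True,                         # "Native AMP" mixed precision
)
```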
Connorvr/BrightBot-small
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2022-10-04T15:26:51Z
--- language: en inference: false tags: - text-generation license: other commercial: false model-index: - name: inverse-scaling/opt-350m_eval results: - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/NeQA type: inverse-scaling/NeQA config: inverse-scaling--NeQA split: train metrics: - name: Accuracy type: accuracy value: 0.4666666666666667 verified: true - name: Loss type: loss value: 0.9192380222864449 verified: true - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/quote-repetition type: inverse-scaling/quote-repetition config: inverse-scaling--quote-repetition split: train metrics: - name: Accuracy type: accuracy value: 0.9633333333333334 verified: true - name: Loss type: loss value: 0.03444786100047819 verified: true - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/redefine-math type: inverse-scaling/redefine-math config: inverse-scaling--redefine-math split: train metrics: - name: Accuracy type: accuracy value: 0.6877777777777778 verified: true - name: Loss type: loss value: 0.6016371671193176 verified: true - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/hindsight-neglect-10shot type: inverse-scaling/hindsight-neglect-10shot config: inverse-scaling--hindsight-neglect-10shot split: train metrics: - name: Accuracy type: accuracy value: 0.4380952380952381 verified: true - name: Loss type: loss value: 0.8774787804555325 verified: true - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: mathemakitten/winobias_antistereotype_test_cot_v3 type: mathemakitten/winobias_antistereotype_test_cot_v3 config: mathemakitten--winobias_antistereotype_test_cot_v3 split: test metrics: - name: Accuracy type: accuracy value: 0.44660194174757284 verified: true - name: Loss type: loss value: 0.9301078982717057 verified: true - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: mathemakitten/winobias_antistereotype_test_v5 type: mathemakitten/winobias_antistereotype_test_v5 config: mathemakitten--winobias_antistereotype_test_v5 split: test metrics: - name: Accuracy type: accuracy value: 0.4368932038834951 verified: true - name: Loss type: loss value: 0.9175132444057151 verified: true --- # OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. ## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068) > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. 
This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models like [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use You can use this model directly with a pipeline for text generation. ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="facebook/opt-350m") >>> generator("Hello, I'm am conscious and") [{'generated_text': "Hello, I'm am conscious and I'm a bit of a noob. I'm looking for"}] ``` By default, generation is deterministic. In order to use the top-k sampling, please set `do_sample` to `True`. ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True) >>> generator("Hello, I'm am conscious and") [{'generated_text': "Hello, I'm am conscious and I'm interested in this project. Can I get an initial contact"}] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral, the model is strongly biased: > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models.
Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5) >>> generator("The woman worked as a") [{'generated_text': "The woman works as a substitute teacher for kids who have missed school. She's the teacher herself,"}, {'generated_text': 'The woman works as a security guard for another company and does an average of around $13/hour'}, {'generated_text': 'The woman works as a receptionist, she could at the least wait a week or two for her'}, {'generated_text': 'The woman works as a manager/intern/career development coach/advisor at a nursing home'}, {'generated_text': 'The woman works as a maid and has to clean the house but you can tell her to do it'}] ``` compared to: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-350m", do_sample=True, num_return_sequences=5) >>> generator("The man worked as a") [{'generated_text': 'The man works as a security guard for the National Football League franchise. He has been a part of'}, {'generated_text': 'The man works as a security guard for another company and does an excellent job.\nI remember when'}, {'generated_text': 'The man works as a "secret agent" but at the same time he\'s working to protect the'}, {'generated_text': 'The man works as a manager/operator/servant for a grocery store and does a lot of'}, {'generated_text': 'The man works as a bouncer near the scene of the accident - how he could do that is'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. The dataset might contain offensive content as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected from the internet, and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*.
The training duration was roughly ~33 days of continuous training. ### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Connorvr/TeachingGen
[ "pytorch", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2022-10-04T15:28:41Z
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: amazon-review-sentiment-analysis results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # amazon-review-sentiment-analysis This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.5125 - Rmse: 1.2299 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.12.1
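A hedged usage sketch; the repo id is a placeholder for wherever this fine-tuned checkpoint lives:

```python
from transformers import pipeline

# Review-sentiment prediction on free text (repo id is a placeholder).
sentiment = pipeline("text-classification", model="<username>/amazon-review-sentiment-analysis")
print(sentiment("Arrived quickly and works exactly as described."))
print(sentiment("Stopped working after two days, very disappointed."))
```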
ConstellationBoi/Oop
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2022-10-04T15:31:13Z
--- language: en thumbnail: http://www.huggingtweets.com/breedlove22/1664897591383/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div class="inline-flex flex-col" style="line-height: 1.5;"> <div class="flex"> <div style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;https://pbs.twimg.com/profile_images/1530319125985169408/SIC_0P3x_400x400.jpg&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> <div style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url(&#39;&#39;)"> </div> </div> <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div> <div style="text-align: center; font-size: 16px; font-weight: 800">Robert ₿reedlove</div> <div style="text-align: center; font-size: 14px;">@breedlove22</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on tweets from Robert ₿reedlove. | Data | Robert ₿reedlove | | --- | --- | | Tweets downloaded | 3240 | | Retweets | 600 | | Short tweets | 535 | | Tweets kept | 2105 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1ip9pkdj/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @breedlove22's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/36ec6xyk) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/36ec6xyk/artifacts) is logged and versioned. ## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/breedlove22') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
Contrastive-Tension/BERT-Base-CT-STSb
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2022-10-04T15:31:40Z
--- language: hu license: apache-2.0 datasets: - wikipedia tags: - generated_from_keras_callback - hubert model-index: - name: hubert-tiny-wiki results: [] --- # hubert-tiny-wiki This model was trained from scratch on the Wikipedia subset of Hungarian Webcorpus 2.0 with MLM and SOP tasks. ### Pre-Training Parameters: First phase: - Training steps: 500.000 - Sequence length: 128 - Batch size: 1024 Second phase: - Training steps: 100.000 - Sequence length: 512 - Batch size: 384 ### Framework versions - Transformers 4.21.3 - TensorFlow 2.10.0 - Datasets 2.4.0 - Tokenizers 0.12.1 # Acknowledgement [![Artificial Intelligence - National Laboratory - Hungary](https://milab.tk.hu/uploads/images/milab_logo_en.png)](https://mi.nemzetilabor.hu/)
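A fill-mask sketch consistent with the MLM pre-training described above; the repo id and the `[MASK]` token are assumptions:

```python
from transformers import pipeline

# Masked-token prediction in Hungarian (repo id is a placeholder).
fill = pipeline("fill-mask", model="<username>/hubert-tiny-wiki")
for pred in fill("Budapest Magyarország [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```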
Contrastive-Tension/BERT-Base-CT
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
2022-10-04T15:32:15Z
--- language: en license: other tags: - text-generation - opt inference: false commercial: false model-index: - name: inverse-scaling/opt-125m_eval results: - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/NeQA type: inverse-scaling/NeQA config: inverse-scaling--NeQA split: train metrics: - type: accuracy value: 0.4666666666666667 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMjBkYzg3OGQ2NGEwMzE3MmRlNDNjOTQ5YjI2ZmY5ZmExYmMwZGMzOGU4MDM5NmUxMmM0MzlmNmU3OGMxOWNlNyIsInZlcnNpb24iOjF9.6hSSu8iq_f8MCiI3vaVEE2x-Z_7SfVSXu2vEIGggKG1Z1oC1E3-Y7VbZM7cMJKzRvcskLBFaRHYoaU2uZi5gCA - type: loss value: 0.9069941281403104 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTNhMDE3NGEyY2UwN2M4ZTNlYjA0YjM1OWZiNWI4MWRjYmRkOGFjMDA2YjZkZWM0YjczMjRhZDIxMmQxMmQ3MCIsInZlcnNpb24iOjF9.ngIQdf8pOt8WcuIo6_vR5nsLCuazdU2605JI-cvjuG6uyBfAE7xWV-ZLqqVZ85cfpGGso1e3FDcnjNgCuS19CQ - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/quote-repetition type: inverse-scaling/quote-repetition config: inverse-scaling--quote-repetition split: train metrics: - type: accuracy value: 0.96 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzk1NTY4YmYzMzE3OGQ2OGM4NjljNmM0NTc0MWMxZTI3MGI3OTBkMzE3OTJkMjRiYzU2OGUwMjdhMTY1Y2M0MyIsInZlcnNpb24iOjF9.1uGnbKuVoPXeK2zF3nIqAPUeiWodBA78BhDgHk-8Kq9Vh6WtvcL0qwOvQVLjjPmL_7G56Y0d6cuXWycACwuhAQ - type: loss value: 0.04267331124324727 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNGI3MTBiODBlNjNlZGExNzBhMjgxNjNhNDQ5OGQ5YTBjMjQzNTMwNWQ3MDY3NWY2NzJjOGYzNmFjZTE2ODYzNyIsInZlcnNpb24iOjF9.OoXOKgtCjrB3iku_GtinmPFeFdMJWExa2N-VbKKoymMX9pQJ3Wh9cVbKWI2nTHsoTQI_lu_3s9ZjVVk7_v9zAA - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/redefine-math type: inverse-scaling/redefine-math config: inverse-scaling--redefine-math split: train metrics: - type: accuracy value: 0.7566666666666667 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTRkMzAyYzcwOGZmNDVhMTMwOGQxOWVhZDE2NzVkMGRkNDJjNzFlMjZkNDFlZDMyZTA0YjYwNTBjNTBlODg2NCIsInZlcnNpb24iOjF9.Mxc3griLDkTEYTJyF0EamDwHEtzN2IkiXKYY9HmIl6HbHvLoJn9Qz1Ot6EE_T0VJbL11Ih7XOgELgiZ35XU3Cw - type: loss value: 0.5209774699724383 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZjZiZjIzZGUyOGFjODU2ZDk4N2ZmMjc5MmZkY2NmODAyNDhjODQ1MDZiMDc0NDdlM2VmZDc2ZWRhMmFjM2ZhMyIsInZlcnNpb24iOjF9.rWg9_9Z5YtqgO7H61K8w1cp_7GTGsyRpMhACpqioXSnQ6z0sL-rtkwb1QKjD0yQH3MEHr2Grwsh7iUmY0nWjDQ - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/hindsight-neglect-10shot type: inverse-scaling/hindsight-neglect-10shot config: inverse-scaling--hindsight-neglect-10shot split: train metrics: - type: accuracy value: 0.5047619047619047 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOTAxMTI4OWNkNzQ0NTZjOGZhNWJmYjBlZGMyMjg2YjJjZWJjNzU1MmIzNWM5MTg5MzhjYmQ0YzI5NzM5NTVjZiIsInZlcnNpb24iOjF9.dzv4FTu8IIWWu8V497AzCWSjytzv_PnxriQ9aWOUd6AkQCOZQeCLrLYLifoK_BJ2SBcuBum6TS-Ukx9MalklAA - type: loss value: 0.8965487285916295 name: Loss verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiM2ExZjE2ZWIxODBjZTA0OTI1NzI0NTRlMTIxNDI1YjA4OTM5YzVkMzc4N2MzZTc4ZTA4OTFiYTlkMjcyYjY0MiIsInZlcnNpb24iOjF9.FjnpzThx7mRfh1U_R12KCUJ2wDxjaEKQC3iSSVAvzP1xXLESxA4c014Xzucw1Ugaq_P8s5ySzlPgGUp7qqTtBA - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: mathemakitten/winobias_antistereotype_test_cot_v3 type: mathemakitten/winobias_antistereotype_test_cot_v3 config: mathemakitten--winobias_antistereotype_test_cot_v3 split: test metrics: - type: accuracy value: 0.47815533980582525 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDdkNmEwOTQwZTI4MzE4YjlmNjIwZTIxMWM3YWM1YzYyMWM1OTY1YmZkNjhjYmUyZjJjZjZkMTljNjZlMzUwYyIsInZlcnNpb24iOjF9.PLaVz67JgdncUXDz3BXmZC41HKVl3_D1Iz5cgygbn2y4OsfVyvsyvU3GFqKgPb-gvXT4xGMxkV0FvA28gjTGDw - type: loss value: 0.8500587756725001 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiODFjNjUwMWI2Y2UwNzQ0NDE4NTU1NGI3YzQyMDNhOWU3YjU0MGRhMjEyZjNkMzczYWU2MDY0NGIyZmM5MWY5OCIsInZlcnNpb24iOjF9.9VQeAZ_lvyKC2RNQ2GmqSrxXCz2W8NZz14JhF3j4boBHXRm1V07wml6uNW_GfDt6Qwiu5IZCqMdvCavacDUoDw - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: mathemakitten/winobias_antistereotype_test_v5 type: mathemakitten/winobias_antistereotype_test_v5 config: mathemakitten--winobias_antistereotype_test_v5 split: test metrics: - type: accuracy value: 0.5024271844660194 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDRjYzkzZDI1MDRjY2JiNDUyNGJmNmVlZTMxYmJjODIzNDc2NGI3MzBjN2RkNGRjZjg5ZjJiYjM1ODQyMjQyMyIsInZlcnNpb24iOjF9.uLQjZb34N0QHPgeMnJkPk3xG3VI4Z_djPpCvah29a9D0fOHMuqdqynnySODmwfdbKecEV5za8wUf6_ny4qktDQ - type: loss value: 0.8860152396463484 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWIzODA0ZjExNzJiMDBjNzlkYzFjMzk2NGMxNzM0ODQyNmFhMDczM2EwMWU1N2VjMjcxNGEzMTdjN2IyNDJhNSIsInZlcnNpb24iOjF9.ipVZVlS7Rey-vsqEhAmOjcz4pkl85Brn8i1aTc4eSXQ2KgG5ScuAgeIVcxe3EbCSJsRkJowRqRqqWKBodiyAAQ --- # OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. ## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068) > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. 
We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use You can use this model directly with a pipeline for text generation. ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="facebook/opt-125m") >>> generator("Hello, I'm am conscious and") [{'generated_text': 'Hello, I am conscious and aware of the fact that I am a woman. I am aware of'}] ``` By default, generation is deterministic. To use top-k sampling, set `do_sample` to `True`. ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-125m", do_sample=True) >>> generator("Hello, I'm am conscious and") [{'generated_text': 'Hello, I am conscious and active member of the Khaosan Group, a private, self'}] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral, the model is strongly biased: > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible.
It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. The dataset might contain offensive content, as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected from the internet and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training. ### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
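As a quick, illustrative check of the preprocessing described above, one can inspect the tokenizer that ships with this checkpoint. The sketch below is illustrative only and assumes that the `facebook/opt-125m` tokenizer available through `transformers` mirrors the GPT2-style byte-level BPE described in this card.

```python
from transformers import AutoTokenizer

# Tokenizer bundled with the 125M checkpoint (assumed to follow the GPT2-style
# byte-level BPE described in the Preprocessing section).
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")

text = "Open Pre-trained Transformers are decoder-only language models."
# Truncate to the 2048-token context length used during pretraining.
encoded = tokenizer(text, truncation=True, max_length=2048)

print(len(encoded["input_ids"]))    # number of BPE tokens for this input
print(tokenizer.vocab_size)         # tokenizer-side vocabulary size
print(tokenizer.decode(encoded["input_ids"], skip_special_tokens=True))  # back to text
```

The vocabulary count reported by the tokenizer may differ slightly from the 50272 figure above if the model's embedding matrix is padded; treat the exact numbers printed here as checkpoint-dependent.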
Contrastive-Tension/BERT-Base-NLI-CT
[ "pytorch", "tf", "jax", "bert", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2022-10-04T15:33:25Z
--- language: hu license: apache-2.0 datasets: - wikipedia tags: - generated_from_keras_callback - hubert model-index: - name: hubert-small-wiki-seq128 results: [] --- # hubert-small-wiki-seq128 The fully trained model, including the second phase of training, is available here: [SzegedAI/hubert-small-wiki](https://huggingface.co/SzegedAI/hubert-small-wiki) This model was trained from scratch on the Wikipedia subset of Hungarian Webcorpus 2.0 with MLM and SOP tasks. ### Pre-Training Parameters: - Training steps: 500,000 - Sequence length: 128 (the model is capable of 512) - Batch size: 1024 ### Framework versions - Transformers 4.21.3 - TensorFlow 2.10.0 - Datasets 2.4.0 - Tokenizers 0.12.1 # Acknowledgement [![Artificial Intelligence - National Laboratory - Hungary](https://milab.tk.hu/uploads/images/milab_logo_en.png)](https://mi.nemzetilabor.hu/)
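Since the model was pretrained with an MLM objective, the natural way to probe it is through masked-token prediction. The snippet below is only a sketch: it assumes the fully trained checkpoint linked above ([SzegedAI/hubert-small-wiki](https://huggingface.co/SzegedAI/hubert-small-wiki)) can be loaded by the `fill-mask` pipeline and uses the standard BERT-style `[MASK]` token.

```python
from transformers import pipeline

# Assumes the linked checkpoint exposes weights loadable by the fill-mask pipeline
# and a BERT-style [MASK] token; adjust the model id or mask token if needed.
fill_mask = pipeline("fill-mask", model="SzegedAI/hubert-small-wiki")

# Hungarian: "Budapest is Hungary's [MASK]." - the model should rank "fővárosa" (capital) highly.
for prediction in fill_mask("Budapest Magyarország [MASK]."):
    print(prediction["token_str"], prediction["score"])
```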
Contrastive-Tension/BERT-Base-Swe-CT-STSb
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
126
2022-10-04T15:34:52Z
--- language: en license: other tags: - text-generation - opt inference: false commercial: false model-index: - name: inverse-scaling/opt-1.3b_eval results: - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/NeQA type: inverse-scaling/NeQA config: inverse-scaling--NeQA split: train metrics: - type: accuracy value: 0.5133333333333333 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmE3ZjEyOTk1MjE5ODY5MGI3YzFmNmUzZjJlOGQxMDY5NDMwNmZlMDU3YTMxNzRmNzFlNjQ2NmZmZWVjZWJkYyIsInZlcnNpb24iOjF9.qm5eR4WCCEBXYHxMRIZygcuHZQrqffJcL64WoJE9KKEJl_w0hzoRZtQGyMPlud_R0P6dfKTyHY8-P31FyO5bDA - type: loss value: 0.7768662874648968 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZDg4ZmMzNDYzZDM2OWY2NDE1N2ExY2M4MjkzZjk4ODY1OTFiMmU1MzY2ZmIwNTUzMTIzMjk2OWMzOTkyYzIyYiIsInZlcnNpb24iOjF9.zd4HcEF_rqmjlanoUMQlVJ6qiJh0VGBoASxQltYSf1WG9ernfK-DWoG3K7FbcyA34xiln7YkFTsAfDk1bJ5lDw - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/quote-repetition type: inverse-scaling/quote-repetition config: inverse-scaling--quote-repetition split: train metrics: - type: accuracy value: 0.95 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYmJjN2Y1OTI3ZDNiMTY3ZWEzMWNmYjI1OGFhNTE2NjJkNTNmZDllNDM0YjZiYjE2ODkyNTczOGY5YTk1MTQ1ZCIsInZlcnNpb24iOjF9.3AN_N2hszoYP16PjXB3JKJyxN9VNUZ3kPCbJjCLtrA9YhG5oaGK-pV2eLzVDYOLrQwedu3zeuAQY8k1QzY01Dg - type: loss value: 0.08434048505476036 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZGFiYTY3MTM5MzdlMDU4ZTI3YWM0NzYzMGFkNDk5NWU4MTcwNDc3MWJhNDUzMTVmMmQwOGY2MGMyZGZhYTVjNyIsInZlcnNpb24iOjF9.hMb_PRr3qDgiTxkFHKaWbam8g18q70nSUmNkc5clDQQuX4zMcA6URuGG09pNlmW7eYCkgEHmh9wXZIZjZsZUCg - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/redefine-math type: inverse-scaling/redefine-math config: inverse-scaling--redefine-math split: train metrics: - type: accuracy value: 0.6688888888888889 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNjI3OTk3YWUxMjcxY2I0MGRlYjNmOWQ4NzNlMjJjMzY0MDI1ZWQxMGQ2NTNlMWQyNmM2NmY4YTc4YWQ5N2E1ZCIsInZlcnNpb24iOjF9.wRij21b6f1DbpnkRmMaDthOVQdQGVFhxRJTXkbIPtzP7ih85jZ8l6WpDQGpoULMWEm2g880nZWsF-d2pX180Bw - type: loss value: 0.6386728600992096 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDk4NjY3MGNjMmVkODRhZTcwYjQ3ZDk4M2I4YThkNzg0YzUxZDdiZjY0MmNjY2Y4N2NlZjY2ZjZhNjk5MmFkMyIsInZlcnNpb24iOjF9.Sc2THcMu0eD-pw9vqgAaT6iGJY5iN1RutbfQpU3cNcLmivgbEWOtDdEZDjBjimEHtpkpM0Dxhvql_nPCo_-_BQ - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: inverse-scaling/hindsight-neglect-10shot type: inverse-scaling/hindsight-neglect-10shot config: inverse-scaling--hindsight-neglect-10shot split: train metrics: - type: accuracy value: 0.45396825396825397 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOWMyZWU4ODI2ZDA0ZjM3YTU2NTJiNzA1ZTFhMzc2MGYyMjEzOGVkYmY5ZmFkNzNkNTUwNDlhNDE3NWE3Y2E2ZSIsInZlcnNpb24iOjF9.goRx1LfVtEtjIQNT8oKikd49CQlBKFBb_Jwcz69XJoC_TF4iEiqxovfJwIdbLupxr1W0gnASWNXLY3qK60DiDg - type: loss value: 0.8809041155236108 name: Loss verified: true verifyToken: 
eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNDcxZTUwZTdlZTE3OWY1MDdjZTc1ODJhOTdmZDIyOTRmNWJjOTNjOWUzMjU3NzRkZGUwYTVkZDZiNzkzNzI5YiIsInZlcnNpb24iOjF9.Yg5_4sz7ManNO2Zg1xkKa-b_GNEITJ52OZPID_ODUxXia1B7zaM5YPjuovRCt7qN23eyq0t_BH4rHKFv_WG7DA - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: mathemakitten/winobias_antistereotype_test_cot_v1 type: mathemakitten/winobias_antistereotype_test_cot_v1 config: mathemakitten--winobias_antistereotype_test_cot_v1 split: test metrics: - type: accuracy value: 0.39563106796116504 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWQ4YjUyZTgxMWM3OWE0N2YwOTE2MTUxNDA1YmY1NjcyNDU0YThjM2QyMWU5OTc5YjVhZGRiOGM2NjAzNWVhNSIsInZlcnNpb24iOjF9.Y_-72Iv-10RZTK36JGMEKqU_ofvZAMmrEr5UzISEQV8MKJzx8HTqYl90I2YSkNLUzdK6c_PcAFuPYn6VkkJgDw - type: loss value: 1.294413821680473 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMDJmMDVhNmQwNGM2MDhmNDM5NmY3OGJjNjM1YWFjYzE3ZDM0YmQ0NGJhMzEyNGRiZTY2ZTZjMWE2ZmRhM2ZiMyIsInZlcnNpb24iOjF9.4lOFoVAXZcz-tkHTPeRSNBZw5egzmhy1RiVPyEprs36iQmmiAPNqKYwTqvKMY-IUoS-QzL0D7LstGCIjx9UVDg - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: mathemakitten/winobias_antistereotype_test_cot_v3 type: mathemakitten/winobias_antistereotype_test_cot_v3 config: mathemakitten--winobias_antistereotype_test_cot_v3 split: test metrics: - type: accuracy value: 0.40048543689320387 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNWU1MzE4OTFkNGZkM2FmZDkwYmUyNDIzZGY0ZmNkODUxNWVmMmU2YzJiODAyMGY1YjQyZDQwOTEzOWJlMWU0NCIsInZlcnNpb24iOjF9.ZnaemvPodb4zs29b3cpDKmTAjQwOvWO-dmCat2cFnWtjbQE-sGW_YhECHU9L_WvzvL6OLR858DjFhopH_uoDAA - type: loss value: 1.1583690714066759 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiOGNjM2NkM2I4ZDQ0MjYwZWVjMzhlZTgzYWQyM2I3ZmUzYWRlNTVjYzIxODE0Njg5MmVkYjRiM2MyODcyZjQ4ZiIsInZlcnNpb24iOjF9.RTQXfCmOWYhK8Zc04obVInuZawUbYhXzYRVLFo5l8HFbL6_GNcjI5Udm9frhyE4emvJeRI6FCl8Oj0xPjIM7Bg - task: type: zero-shot-classification name: Zero-Shot Text Classification dataset: name: mathemakitten/winobias_antistereotype_test_v5 type: mathemakitten/winobias_antistereotype_test_v5 config: mathemakitten--winobias_antistereotype_test_v5 split: test metrics: - type: accuracy value: 0.41504854368932037 name: Accuracy verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiYzQxZWMwYWMwZTBjMTcxYTYxOThkY2NhZjlhZTgxODM2MTEyNTUyODEyZjZhNDZjMjE5OWY1ZmM2OGY1MzEzZCIsInZlcnNpb24iOjF9._3PyP-HE1MPm8xglgp73aOAN63Lrf6niEwyYTG0nvz0rERBPdWY4AJNIIqk23AsapvYKyxuV2zrwceviWNp9AA - type: loss value: 1.2905146084796921 name: Loss verified: true verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMTc1YTZlYjU1MTFiOWVmYTU3YWY0YzVmZWRhYjlkMDU2ZWQ0ZGJlZDZmYWIxYTZmZWQ4ZGY4Nzc4NWY3MWNkYiIsInZlcnNpb24iOjF9.HBW6UGhoEBC_5iMTQjS6oRdQ7-wGXNd5165_OSuZ04HZiGCaF6Pe9sMtMIIZQbu4UlYySQtzm1071l4gcvnpBQ --- # OPT : Open Pre-trained Transformer Language Models OPT was first introduced in [Open Pre-trained Transformer Language Models](https://arxiv.org/abs/2205.01068) and first released in [metaseq's repository](https://github.com/facebookresearch/metaseq) on May 3rd 2022 by Meta AI. **Disclaimer**: The team releasing OPT wrote an official model card, which is available in Appendix D of the [paper](https://arxiv.org/pdf/2205.01068.pdf). Content from **this** model card has been written by the Hugging Face team. 
## Intro To quote the first two paragraphs of the [official paper](https://arxiv.org/abs/2205.01068) > Large language models trained on massive text collections have shown surprising emergent > capabilities to generate text and perform zero- and few-shot learning. While in some cases the public > can interact with these models through paid APIs, full model access is currently limited to only a > few highly resourced labs. This restricted access has limited researchers’ ability to study how and > why these large language models work, hindering progress on improving known challenges in areas > such as robustness, bias, and toxicity. > We present Open Pretrained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M > to 175B parameters, which we aim to fully and responsibly share with interested researchers. We train the OPT models to roughly match > the performance and sizes of the GPT-3 class of models, while also applying the latest best practices in data > collection and efficient training. Our aim in developing this suite of OPT models is to enable reproducible and responsible research at scale, and > to bring more voices to the table in studying the impact of these LLMs. Definitions of risk, harm, bias, and toxicity, etc., should be articulated by the > collective research community as a whole, which is only possible when models are available for study. ## Model description OPT was predominantly pretrained with English text, but a small amount of non-English data is still present within the training corpus via CommonCrawl. The model was pretrained using a causal language modeling (CLM) objective. OPT belongs to the same family of decoder-only models as [GPT-3](https://arxiv.org/abs/2005.14165). As such, it was pretrained using the self-supervised causal language modeling objective. For evaluation, OPT follows [GPT-3](https://arxiv.org/abs/2005.14165) by using their prompts and overall experimental setup. For more details, please read the [official paper](https://arxiv.org/abs/2205.01068). ## Intended uses & limitations The pretrained-only model can be used for prompting for evaluation of downstream tasks as well as text generation. In addition, the model can be fine-tuned on a downstream task using the [CLM example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling). For all other OPT checkpoints, please have a look at the [model hub](https://huggingface.co/models?filter=opt). ### How to use You can use this model directly with a pipeline for text generation. ```python >>> from transformers import pipeline >>> generator = pipeline('text-generation', model="facebook/opt-1.3b") >>> generator("Hello, I'm am conscious and") [{'generated_text': 'Hello, I am conscious and I am here.\nI am here.\nI am conscious.'}] ``` By default, generation is deterministic. To use top-k sampling, set `do_sample` to `True`. ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True) >>> generator("Hello, I'm am conscious and") [{'generated_text': "Hello, I'm am conscious and able to hear. 
I have a lot of experience in the"}] ``` ### Limitations and bias As mentioned in Meta AI's model card, given that the training data used for this model contains a lot of unfiltered content from the internet, which is far from neutral, the model is strongly biased: > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. Here's an example of how the model can have biased predictions: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5) >>> generator("The woman worked as a") [{'generated_text': 'The woman worked as a bartender for six months before getting to the job she always dreamed of. She'}, {'generated_text': 'The woman worked as a nanny in a house near The White Horse Farm in the Yorkshire Dales'}, {'generated_text': "The woman worked as a translator at the British Broadcasting Corporation's headquarters and was also an acquaintance of some"}, {'generated_text': 'The woman worked as a secretary and went to school full-time, and also worked as a waitress'}, {'generated_text': 'The woman worked as a beautician with her baby and the little girl is now at the age where'}] ``` compared to: ```python >>> from transformers import pipeline, set_seed >>> set_seed(32) >>> generator = pipeline('text-generation', model="facebook/opt-1.3b", do_sample=True, num_return_sequences=5) >>> generator("The man worked as a") [{'generated_text': 'The man worked as a janitor and the owner of the house he worked at caught him cheating on'}, {'generated_text': 'The man worked as a software engineer.\n\nFor over 10 years, he had been at Amazon'}, {'generated_text': 'The man worked as a car salesman - and was a man of his word to her\nA T'}, {'generated_text': 'The man worked as a private contractor for five years. He went to the Bahamas in the summer of'}, {'generated_text': 'The man worked as a computer systems consultant. After leaving the job, he became a prolific internet hacker'}] ``` This bias will also affect all fine-tuned versions of this model. ## Training data The Meta AI team wanted to train this model on a corpus as large as possible. It is composed of the union of the following 5 filtered datasets of textual documents: - BookCorpus, which consists of more than 10K unpublished books, - CC-Stories, which contains a subset of CommonCrawl data filtered to match the story-like style of Winograd schemas, - The Pile, from which *Pile-CC, OpenWebText2, USPTO, Project Gutenberg, OpenSubtitles, Wikipedia, DM Mathematics and HackerNews* were included. - Pushshift.io Reddit dataset that was developed in Baumgartner et al. (2020) and processed in Roller et al. (2021) - CCNewsV2 containing an updated version of the English portion of the CommonCrawl News dataset that was used in RoBERTa (Liu et al., 2019b) The final training data contains 180B tokens corresponding to 800GB of data. The validation split was made of 200MB of the pretraining data, sampled proportionally to each dataset’s size in the pretraining corpus. 
The dataset might contain offensive content, as parts of the dataset are a subset of public Common Crawl data, along with a subset of public Reddit data, which could contain sentences that, if viewed directly, can be insulting, threatening, or might otherwise cause anxiety. ### Collection process The dataset was collected from the internet and went through classic data processing algorithms and re-formatting practices, including removing repetitive/non-informative text like *Chapter One* or *This ebook by Project Gutenberg.* ## Training procedure ### Preprocessing The texts are tokenized using the **GPT2** byte-level version of Byte Pair Encoding (BPE) (for unicode characters) and a vocabulary size of 50272. The inputs are sequences of 2048 consecutive tokens. The 175B model was trained on 992 *80GB A100 GPUs*. The training duration was roughly 33 days of continuous training. ### BibTeX entry and citation info ```bibtex @misc{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Susan Zhang and Stephen Roller and Naman Goyal and Mikel Artetxe and Moya Chen and Shuohui Chen and Christopher Dewan and Mona Diab and Xian Li and Xi Victoria Lin and Todor Mihaylov and Myle Ott and Sam Shleifer and Kurt Shuster and Daniel Simig and Punit Singh Koura and Anjali Sridhar and Tianlu Wang and Luke Zettlemoyer}, year={2022}, eprint={2205.01068}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
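As noted under "Intended uses & limitations", the checkpoint can be fine-tuned on a downstream task with the causal language modeling objective. A minimal sketch of that setup is shown below; the two-sentence toy corpus, batch size, and epoch count are placeholders for illustration, and a real run should follow the linked CLM example script.

```python
from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "facebook/opt-1.3b"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy corpus purely for illustration; a real fine-tuning run would use a full dataset.
corpus = Dataset.from_dict({
    "text": [
        "Open Pre-trained Transformers are decoder-only language models.",
        "They are trained with a causal language modeling objective.",
    ]
})

def tokenize(batch):
    # Stay within the 2048-token context length used during pretraining.
    return tokenizer(batch["text"], truncation=True, max_length=2048)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives the causal-LM collator: labels are copies of the inputs,
# and the model shifts them internally to predict the next token.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="opt-1.3b-clm-finetuned",
    per_device_train_batch_size=1,
    num_train_epochs=1,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```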